Anthropic’s ‘Red Team’ team pushes its AI models into the danger zone—and burnishes the company’s reputation for AI safety




Last month, at the 33rd annual DEF CON, the world’s largest hacker convention in Las Vegas, Anthropic researcher Keane Lucas took the stage. A former U.S. Air Force captain with a Ph.D. in electrical and computer engineering from Carnegie Mellon, Lucas wasn’t there to unveil flashy cybersecurity exploits. Instead, he showed how Claude, Anthropic’s family of large language models, has quietly outperformed many human competitors in hacking contests — the kind used to train and test cybersecurity skills in a safe, legal environment. His talk highlighted not only Claude’s surprising wins but also its humorous failures, like drifting into musings on security philosophy when overwhelmed, or inventing fake “flags” (the secret codes competitors need to steal and submit to contest judges to prove they’ve successfully hacked a system).

Lucas wasn’t just trying to get a laugh, though. He wanted to show that AI agents are already more capable at simulated cyberattacks than many in the cybersecurity world realize – they are fast, and make good use of autonomy and tools. That makes them a potential tool for criminal hackers or state actors — and means, he argued, that those same tools need to be deployed for defense. 

The message reflects Lucas’ role on Anthropic’s Frontier Red Team, an internal group of about 15 researchers tasked with stress-testing the company’s most advanced AI systems—probing how they might be misused in areas like biological research, cybersecurity, and autonomous systems, with a particular focus on risks to national security. Anthropic, which was founded in 2021 by ex-OpenAI employees, has cast itself as a safety-first lab convinced that unchecked models could pose “catastrophic risks.” But it is also one of the fastest-growing technology companies in history: This week Anthropic announced it had raised a fresh $13 billion at a $183 billion valuation and passed $5 billion in run-rate revenue.

Unlike similar groups at other labs, Anthropic’s red team is also explicitly tasked with publicizing its findings. That outward-facing mandate reflects the team’s unusual placement inside Anthropic’s policy division, led by co-founder Jack Clark. Other safety and security teams at Anthropic sit under the company’s technical leadership, including a safeguards team that works to improve Claude’s ability to identify and refuse harmful requests, such as those that might negatively impact a user’s mental health or encourage self-harm.

According to Anthropic, the Frontier Red Team does the heavy lifting for the company’s stated purpose of “building systems that people can rely on and generating research about the opportunities and risks of AI.” Its work underlies Anthropic’s Responsible Scaling Policy (RSP), the company’s governance framework that triggers stricter safeguards as models approach dangerous capability thresholds. The team does this by running thousands of safety tests, or “evals,” in high-risk domains—results that can determine when to impose tighter controls.

For example, it was the Frontier Red Team’s assessments that led Anthropic to release its latest model, Claude Opus 4, under what the company calls “AI Safety Level 3”—the first model released under that status—as a “precautionary and provisional action.” The designation indicates the model could significantly enhance a user’s ability to obtain, produce, or deploy chemical, biological, radiological, or nuclear weapons by providing better instructions than existing, non-AI resources like search engines. It also marks a system that begins to show signs of autonomy, including the ability to act on a goal. By designating Opus 4 as ASL-3, Anthropic flipped on stronger internal security measures to prevent someone from obtaining the model weights (the neural network “brains” of the model), along with visible safeguards to block the model from answering queries that might help someone build a chemical or nuclear weapon.

Telling the world about AI risks is good for policy—and business

The red team’s efforts to amplify its message publicly have grown louder in recent months: It launched a standalone blog last month, called Red, with posts ranging from a nuclear-proliferation study with the Department of Energy to a quirky experiment in which Claude runs a vending machine business. Lucas’ talk was also the team’s first public appearance at DEF CON.

“As far as I know, there’s no other team explicitly tasked with finding these risks as fast as possible—and telling the world about them,” said Frontier Red Team leader Logan Graham, who, along with Lucas, met with Fortune at a Las Vegas cafe just before DEF CON. “We have worked out a bunch of kinks about what information is sensitive and not sensitive to share, and ultimately, who’s responsible for dealing with this information. It’s just really clear that it’s really important for the public to know about this, and so there’s definitely a concerted effort.” 

Experts in security and defense point out that the work of the Frontier Red Team, as part of Anthropic’s policy organization, also happens to be good for the company’s business—particularly in Washington, DC. By showing it is out front on national-security risks, Anthropic turns what could be seen as an additional safety burden into a business differentiator.

“In AI, speed matters — but trust is what often accelerates scale,” said Wendy R. Anderson, a former Department of Defense staffer and defense tech executive. “From my years in the defense tech world, I’ve observed that companies that make safety and transparency core to their strategy don’t just earn credibility with regulators, they help shape the rules…it determines who gets access to the highest-value, most mission-critical deployments.” 

Jen Weedon, a lecturer at Columbia University’s School of International and Public Affairs who researches best practices in red teaming AI systems, pointed out that where a red team sits on the organizational chart shapes its incentives.

“By placing its Frontier Red Team under the policy umbrella, Anthropic is communicating that catastrophic risks aren’t just technical challenges—they’re also political, reputational, and regulatory ones,” she said. “This likely gives Anthropic leverage in Washington, but it also shows how security and safety talk doubles as strategy.” The environment for AI business in the US right now, particularly for public sector use cases, “seems to be open for the shaping and taking,” she added, pointing to the Trump Administration’s recently announced AI Action Plan, which she described as “broad in ambition but somewhat scant in details, particularly around safeguards.”

Critics from across the industry, however, have long taken aim at Anthropic’s broader efforts on AI safety. Some, like Yann LeCun, chief scientist at Meta’s Fundamental AI Research lab, argue that catastrophic risks are overblown and that today’s models are “dumber than a cat.” Others say the focus should be on present-day harms (such as encouraging self-harm or the tendency of LLMs to reinforce racial or gender stereotypes), or fault the company for being overly secretive despite its safety branding. Nvidia CEO Jensen Huang has accused Anthropic CEO Dario Amodei of regulatory capture—of using his stance on AI safety to scare lawmakers into enacting rules that would benefit Anthropic at the expense of its rivals. He’s even claimed Amodei is trying to “control the entire industry.” (Amodei, on a recent technology podcast, called Huang’s comments “an outrageous lie” and a “bad-faith distortion.”)

On the other end of the spectrum, some researchers argue Anthropic isn’t going far enough. UC Berkeley’s Stuart Russell told the Wall Street Journal, “I actually think we don’t have a method of safely and effectively testing these kinds of systems.” And studies carried out by the nonprofits SaferAI and the Future of Life Institute (FLI) said that top AI companies such as Anthropic maintain “unacceptable” levels of risk management and show a “striking lack of commitment to many areas of safety.” 

Inside Anthropic, though, executives argue that the Frontier Red Team, working alongside the company’s other security and safety teams, exists precisely to surface AI’s biggest potential risks—and to force the rest of the industry to reckon with them. 

Securing the world from rogue AI models

Graham, who helped found Anthropic’s Frontier Red Team in 2022, has, like others in the group, a distinctive resume: After studying economics in college, he earned a Ph.D. in machine learning at Oxford as a Rhodes Scholar before spending two years advising the U.K. Prime Minister on science and technology. 

Graham described himself as “AGI-pilled,” which he defines as someone who believes that AI models are just going to keep getting better. He added that while the red team’s viewpoints are diverse, “the people who select into it are probably, on average, more AGI-pilled than most.” The eclectic team includes a bioengineering expert as well as three physicists, though Graham said the most desired skill on the team is not a particular domain or background but “craftiness” – which obviously comes in handy when trying to outsmart an AI into revealing dangerous capabilities.

The Frontier Red Team is “one of the most unique groups in the industry,” said Dan Lahav, CEO of a stealth startup which focuses on evaluating frontier models (his firm conducted third-party tests on Anthropic’s Claude 4, as well as OpenAI’s GPT-5). To work effectively, he said, its members need to be “hardcore AI scientists” but also able to communicate outcomes clearly—“philosophers blended with AI scientists.”

Calling it a “red team” is a spin on traditional security red teams – units that stress-test an organization’s defenses by playing the role of the attacker. Anthropic’s Frontier Red Team, Graham said, works differently: the key difference is what it protects, and why. Traditional red teams protect an organization from external attackers by finding vulnerabilities in its systems. Anthropic’s team, on the other hand, is designed to protect society from the company’s own products, its AI models, by discovering what these systems are capable of before those capabilities become dangerous. Its members work to answer two questions: “What could this AI do if someone wanted to cause harm?” and “What will AI be capable of next year that it can’t do today?”

For example, Anthropic points out that nuclear know-how, like AI, can be used for good or for harm — the same science behind power plants can also inform weapons development. To guard against that risk, the company recently teamed up with the Department of Energy’s National Nuclear Security Administration to test whether its models could spill sensitive nuclear information (they could not). More recently, they’ve gone a step further, co-developing a tool with the agency that flags potentially dangerous nuclear-related conversations with high accuracy.

Anthropic isn’t alone in running AI safety-focused “red team” exercises on its AI models: OpenAI’s red-team program feeds into its “Preparedness” framework, and Google DeepMind runs its own safety evaluations. But at those other companies, the red teams sit closer to technical security and research, while Anthropic’s placement under policy underscores what can be seen as a triple role — probing risks; making the public aware of them; and as a kind of marketing tool, reinforcing the company’s safety bona fides.

The right incentive structure

Jack Clark, who before co-founding Anthropic led policy efforts at OpenAI, told Fortune that the Frontier Red Team is focused on generating the evidence that guides both company decisions and public debate—and placing it under his policy organization was a “very intentional decision.”  

Clark stressed that this work is happening in the context of rapid technological progress. “If you look at the actual technology, the music hasn’t stopped,” he said. “Things keep advancing, perhaps even more quickly than they did in the past.” He pointed out that in official submissions to the White House, Anthropic has consistently said it expects “really powerful systems by late 2026 or early 2027.”

That prediction, he explained, comes directly from the kinds of novel tests the Frontier Red Team is running, among them complex cyber-offense tasks that involve long-horizon, multi-step problem-solving. “When we look at performance on these tests, it keeps going up,” he said. “I know that these tests are impossible to game because they have never been published and they aren’t on the internet. When I look at the scores on those things, I just come away with this impression of continued, tremendous and awesome progress, despite the vibes of people saying maybe AI is slowing down.”

Anthropic’s bid to shape the conversation on AI safety doesn’t end with the Frontier Red Team — or even with its policy shop. In July, the company unveiled a National Security and Public Sector Advisory Council stocked with former senators, senior Defense officials, and nuclear experts. The message is clear: safety work isn’t just about public debate, it’s also about winning trust in Washington. For the Frontier Red Team and beyond, Anthropic is betting that transparency about risk can translate into credibility with regulators, government buyers, and enterprise customers alike.

“The purpose of the Frontier Red Team is to create better information for all of us about the risks of powerful AI systems – by making this available publicly, we hope to inspire others to work on these risks as well, and build a community dedicated to understanding and mitigating them,” said Clark. “Ultimately, we expect this will lead to a far larger market for AI systems than exists today, though the primary motivating purpose is for generating safety insights rather than product ones.”

The real test

The real test, though, is whether Anthropic will still prioritize safety if doing so means slowing its own growth or losing ground to rivals, according to Herb Lin, senior research scholar at Stanford University’s Center for International Security and Cooperation and Research Fellow at the Hoover Institution.  

“At the end of the day, the test of seriousness — and nobody can know the answer to this right now — is whether the company is willing to put its business interests second to legitimate national security concerns raised by its policy team,” he said. “That ultimately depends on the motivations of the leadership at the time those decisions arise. Let’s say it happens in two years — will the same leaders still be there? We just don’t know.”

While that uncertainty may hang over Anthropic’s safety-first pitch, inside the company, the Frontier Red Team wants to show there’s room for both caution and optimism. 

“We take it all very, very seriously so that we can find the fastest path to mitigating risks,” said Graham. 

Overall, he adds, he’s optimistic: “I think we want people to see that there’s a bright future here, but also realize that we can’t just go there blindly. We need to avoid the pitfalls.” 




Gen Z’s nostalgia for ‘2016 vibes’ reveals something deeper: a protest against the world and economy they inherited



Gen Z’s “2016 vibes” fixation is less about pastel Instagram filters and more about an economic and cultural shift: they are coming of age in a world where cheap Ubers, underpriced delivery, and a looser-feeling internet simply no longer exist. What looks like a lighthearted nostalgia trend is something more structural: a reaction to coming of age against the backdrop of a fully mature internet economy.

On TikTok and Instagram, “2016 vibes” has become a full-blown aesthetic, with POV clips, soundtracks of mid‑2010s hits, and filters that soften the present into a memory. Searches for “2016” on TikTok jumped more than 450% in the first week of January, and more than 1.6 million videos celebrating the year’s look and feel have been uploaded, according to creator‑economy newsletter After School by Casey Lewis. Lewis noted that only a few months ago, “millennial cringe” was rebranded as “millennial optimism,” with Gen Zers longing to experience a more carefree era. Lin-Manuel Miranda’s Hamilton, although it debuted in 2015, arguably has a 2016 vibe, for instance. Some millennial optimism is downright bewildering to Gen Z, such as what it calls the “stomp, clap, hey” genre of neo-folk pop music, recalling millennials’ own rediscovery (and new naming) of “yacht rock.”

Meanwhile, Google Trends reports that searches for “2016” hit an all-time high in mid-January, with the top five trending “why is everyone…” searches all related to 2016. The top two were “… posting 2016 pics” and “… talking about 2016.”

Creators caption posts “2026 is the new 2016” and stitch side-by-side footage of house parties, festivals, and mall hangs, inviting viewers to imagine a version of young adulthood that feels more spontaneous and frictionless. At the risk of being too self-referential, the difference can be tracked in Fortune covers, from the stampeding of the unicorns, the billion-dollar startups that defined the supposedly carefree days of 2016, to the bust a decade later and the dawn of the “unicorpse” era.

And while the comparison may feel ridiculous to anyone who actually lived through 2016 as an adult and can remember the stresses and anxieties of that particular time, there is something going on here, with economics at its core. In short, millennials were able to enjoy the peak of a particular Silicon Valley moment in 2016, but 10 years later, Gen Z is late to the party, finding the price of admission is just too high for them to get in the door.

Everyone used to love Silicon Valley

For millennials, 2016 marked a time when technology expanded opportunity rather than eliminating it. Venture capital was cheap, platforms were underpriced, and software worked to the consumer’s advantage, with the aforementioned unicorns flush with cash and willing to offer millennials a crazy deal. The early iterations of the gig-economy ecosystem—Uber, Airbnb, TaskRabbit—were at their peak affordability, lowering the cost of living and making urban life feel frictionless. And at work, new digital tools helped young employees do more, faster, and stand out from the pack.

For older millennials, 2016 evokes a very specific consumer reality: Ubers that were often cheaper than cabs and takeout that arrived in minutes for a few dollars in fees. Both were products of what The New York Times’ Kevin Roose labeled the “millennial lifestyle subsidy” in 2021, looking back on the era “from roughly 2012 through early 2020, when many of the daily activities of big-city 20- and 30-somethings were being quietly underwritten by Silicon Valley venture capitalists.” Uber and Seamless were not really turning a profit during those years as they fought for market share, just as, on a grander scale, Amazon and Netflix were underpriced for years before cornering the markets for ecommerce and streaming. Those subsidies “allowed us to live Balenciaga lifestyles on Banana Republic budgets,” as Roose put it.

Gen Z never really knew what it felt like to take a practically free late-night ride across town, or to feast on $50 worth of Chinese takeout while paying half that. And they certainly never knew what it felt like to see unlimited movies in theaters each month for the flat rate the MoviePass app allowed. For the generation seeking the 2016 vibe, $40 surge-priced trips and double-digit delivery fees are standard, not a shocking new inconvenience. And the frictionless urban lifestyle of the millennial heyday, enjoyed before millennials entered their 40s, had (a declining number of) kids, and fought their way into the suburban housing market amid the pandemic housing boom, reads more like historical fiction than a realistic blueprint.

Tech and digital culture were also just fun. Gen Z remembers the heyday of Pokémon Go, the only app that somehow got young people outside and interacting with each other. Viral trends felt collective rather than segmented by algorithmic feeds. Back then, Vine jokes, Harambe memes, and Snapchat filters could sweep through timelines in a way that made the internet feel weirdly communal, even as politics darkened the horizon.

That helps explain why The New York Times‘ Madison Malone Kircher recently framed the new 2016 nostalgia as part of a broader reexamination of millennial optimism on social media. Celebrities like Kylie Jenner, Selena Gomez, and Karlie Kloss have joined in, uploading 2016 throwbacks that signal a desire to rewind to an era when influencer culture felt less high‑stakes and more experimental.

The moment tech stopped being fun

Then something shifted. The attitude toward tech companies as nerdy but well-meaning do-gooders who “move fast and break things” for the sake of the world faded into a “techlash.” The Cambridge Analytica scandal rocked Facebook (now Meta) and fueled panic around data privacy. Former tech insiders like Tristan Harris began popularizing the idea that the platforms’ algorithms were addictive.

Thus, when Silicon Valley entered another boom cycle after the release of ChatGPT in 2022—producing a new generation of young, ambitious entrepreneurs and icons like Sam Altman and Elon Musk with a new breed of unicorns to go along with them—the moment was met with skepticism from Gen Z. Where millennials once found a quite literal free lunch, Gen Z increasingly sees a threat.

The entry-level work that once functioned as a professional apprenticeship—research, synthesis, junior coding, coordination—is now being handled by autonomous systems. Companies are no longer hiring large cohorts of juniors to train up, often citing AI as the reason. Economists describe this as a “jobless expansion,” with data showing that the share of early-career employees at major tech firms has nearly halved since 2023. The result is a generation of so-called “digital natives” left to wonder whether the very skills they were told would future-proof them have instead been commoditized out of their reach.

Instead of innovation making technology feel communal and fun, as it did in 2016, generative AI has flooded platforms with low-quality content—what users now call “slop”—while raising alarms about addictive chatbots dispensing confident but dangerous advice to children. The promise of technology hasn’t vanished, but its emotional valence has flipped from something people used to get ahead to something they increasingly feel subjected to.

Gen Z’s view from the present

Commentators stress that this is largely a millennial‑led nostalgia wave—but Gen Z is the audience making it go massively viral. Many were children or young teens in 2016, old enough to remember the music and memes but too young to fully participate in the nightlife and freedom the year now symbolizes. For those now juggling college debt, precarious work, and a cost‑of‑living crisis, the grainy clips of suburban parking lots, festival wristbands, and crowded Ubers feel like evidence of a slightly easier universe that just slipped out of reach.​

In that sense, “2016 vibes” is a way for Gen Z to process a basic unfairness: they inherited the platforms without the perks. Casey Lewis argues that even if Gen Z is driving the trend’s surge to prominence, perhaps even turning it into a new kind of monocultural moment, it is by definition a “uniquely millennial trend,” part of an ongoing reexamination of a culture that, with time, is emerging as one created by the millennial generation. Lewis adds that 2016 has an “economic” hold on the cultural imagination, representing “a version of modern life with many of today’s technological advancements but greater financial accessibility.”

Chris DeVille, managing editor of the (surviving millennial-era) music blog Stereogum, tracked a similar trajectory in his introspective cultural history of indie rock, released in August 2025. He documented, at times with lacerating self-criticism, how the underground musical genre grew out of Gen X’s alternative music scene of the 1990s and turned into something that openly embraced synthesizers, arena sing-alongs and countless sellouts to nationally broadcast car commercials.

And that may be what the “2016 vibes” trend represents more than anything: an acknowledgement that the internet is fully professionalized and corporatized now, and the search for something organic, indie, and authentic will have to take place somewhere else.





Billionaire Marc Benioff challenges the AI sector: ‘What’s more important to us, growth or our kids?’




Imagine it is 1996. You log on to your desktop computer (which takes several minutes to start up), listening to the rhythmic screech and hiss of the modem connecting you to the World Wide Web. You navigate to a clunky message board—on a service like AOL or Prodigy—to discuss your favorite hobbies, from Beanie Babies to the newest mixtapes.

At the time, a little-known law called Section 230 of the Communications Decency Act had just been passed. The law—its key provision just 26 words long—went on to create the modern internet. It was intended to protect the “good samaritans” who moderate websites from liability, placing the responsibility for content on individual users rather than the host company.

Today, the law remains largely the same despite evolutionary leaps in internet technology and pushback from critics, now among them Salesforce CEO Marc Benioff. 

In a conversation at the World Economic Forum in Davos, Switzerland, on Tuesday, titled “Where Can New Growth Come From?” Benioff railed against Section 230, saying the law prevents tech giants from being held accountable for the dangers AI and social media pose.

“Things like Section 230 in the United States need to be reshaped because these tech companies will not be held responsible for the damage that they are basically doing to our families,” Benioff said in the panel conversation which also included Axa CEO Thomas Buberl, Alphabet President Ruth Porat, Emirati government official Khaldoon Khalifa Al Mubarak, and Bloomberg journalist Francine Lacqua.

As a growing number of children in the U.S. log onto AI and social media platforms, Benioff said the legislation threatens the safety of kids and families. The billionaire asked, “What’s more important to us, growth or our kids? What’s more important to us, growth or our families? Or, what’s more important, growth or the fundamental values of our society?”

Section 230 as a shield for tech firms

Tech companies have invoked Section 230 as a legal defense when dealing with issues of user harm, including in the 2019 case Force v. Facebook, in which the court ruled the platform wasn’t liable for algorithms that connected members of Hamas after the terrorist organization used the platform to encourage murder in Israel. The law could also shield tech companies from liability for harms AI platforms pose, including the production of deepfakes and AI-generated sexual abuse material.

Benioff has been a vocal critic of Section 230 since 2019 and has repeatedly called for the legislation to be abolished. 

In recent years, Section 230 has come under increasing public scrutiny as both Democrats and Republicans have grown skeptical of the legislation. In 2019, the Department of Justice under President Donald Trump pursued a broad review of Section 230. In May 2020, President Trump signed an executive order limiting tech platforms’ immunity after Twitter added fact-checks to his tweets. And in 2023, the U.S. Supreme Court heard Gonzalez v. Google, though the justices decided the case on other grounds, leaving Section 230 intact.

In an interview with Fortune in December 2025, Dartmouth business school professor Scott Anthony voiced concern over the “guardrails” that were—and weren’t—being put around AI. When cars were first invented, he pointed out, it took time for speed limits and driver’s licenses to follow. Now with AI, “we’ve got the technology, we’re figuring out the norms, but the idea of, ‘Hey, let’s just keep our hands off,’ I think it’s just really bad.”

The decision to exempt platforms from liability, Anthony added, “I just think that it’s not been good for the world. And I think we are, unfortunately, making the mistake again with AI.”

For Benioff, the fight to repeal Section 230 is not just a push to regulate tech companies but a reallocation of priorities toward safety and away from unfettered growth. “In the era of this incredible growth, we’re drunk on the growth,” Benioff said. “Let’s make sure that we use this moment also to remember that we’re also about values as well.”




Palantir CEO says AI “will destroy” humanities jobs but there will be “more than enough jobs” for people with vocational training




Some economists and experts say that critical thinking and creativity will be more important than ever in the age of artificial intelligence (AI), when a robot can do much of the heavy lifting on coding or research. Take Benjamin Shiller, the Brandeis economics professor who recently told Fortune that a “weirdness premium” will be valued in the labor market of the future. Alex Karp, Palantir’s cofounder and CEO, isn’t one of those voices.

“It will destroy humanities jobs,” Karp said when asked how AI will affect jobs in conversation with BlackRock CEO Larry Fink at the World Economic Forum annual meeting in Davos, Switzerland. “You went to an elite school and you studied philosophy — I’ll use myself as an example — hopefully you have some other skill, that one is going to be hard to market.”

Karp attended Haverford College, a small, elite liberal arts college outside his hometown of Philadelphia. He earned a J.D. from Stanford Law School and a Ph.D. in philosophy from Goethe University in Germany. He spoke about his own experience getting his first job. 

Karp told Fink that he remembered thinking about his own career, “I’m not sure who’s going to give me my first job.” 

The answer echoed past comments Karp has made about certain types of elite college graduates who lack specialized skills.

“If you are the kind of person that would’ve gone to Yale, classically high IQ, and you have generalized knowledge but it’s not specific, you’re effed,” Karp said in an interview with Axios in November. 

Not every CEO agrees with Karp’s assessment that humanities degrees are doomed. BlackRock COO Robert Goldstein told Fortune in 2024 that the company was recruiting graduates who studied “things that have nothing to do with finance or technology.” 

McKinsey CEO Bob Sternfels recently said in an interview with Harvard Business Review that the company is “looking more at liberal arts majors, whom we had deprioritized, as potential sources of creativity,” to break out of AI’s linear problem-solving. 

Karp has long been an advocate for vocational training over traditional college degrees. Last year, Palantir launched a Meritocracy Fellowship, offering high school students a paid internship with a chance to interview for a full-time position at the end of four months. 

In its announcement of the fellowship, the company criticized American universities for “indoctrinating” students and having “opaque” admissions that “displaced meritocracy and excellence.”

“If you did not go to school, or you went to a school that’s not that great, or you went to Harvard or Princeton or Yale, once you come to Palantir, you’re a Palantirian—no one cares about the other stuff,” Karp said during a Q2 earnings call last year.

“I think we need different ways of testing aptitude,” Karp told Fink. He pointed to a former police officer who attended a junior college and now manages the US Army’s MAVEN system, a Palantir-made AI tool that processes drone imagery and video.

“In the past, the way we tested for aptitude would not have fully exposed how irreplaceable that person’s talents are,” he said. 

Karp also gave the example of technicians building batteries at a battery company, saying those workers are “very valuable if not irreplaceable because we can make them into something different than what they were very rapidly.”

He said what he does all day at Palantir is “figuring out what is someone’s outlier aptitude. Then, I’m putting them on that thing and trying to get them to stay on that thing and not on the five other things they think they’re great at.” 

Karp’s comments come as more employers report a gap between the skills applicants are offering and what employers are looking for in a tough labor market. The unemployment rate for young workers ages 16 to 24 hit 10.4% in December and is growing among college graduates. Karp isn’t too worried. 

“There will be more than enough jobs for the citizens of your nation, especially those with vocational training,” he said. 


