Professor leading OpenAI’s safety panel may have one of the most important roles in tech

If you believe artificial intelligence poses grave risks to humanity, then a professor at Carnegie Mellon University has one of the most important roles in the tech industry right now.

Zico Kolter leads a four-person panel at OpenAI that has the authority to halt the ChatGPT maker’s release of new AI systems if it finds them unsafe. That could be technology so powerful that an evildoer could use it to make weapons of mass destruction. It could also be a new chatbot so poorly designed that it will hurt people’s mental health.

“Very much we’re not just talking about existential concerns here,” Kolter said in an interview with The Associated Press. “We’re talking about the entire swath of safety and security issues and critical topics that come up when we start talking about these very widely used AI systems.”

OpenAI tapped the computer scientist to be chair of its Safety and Security Committee more than a year ago, but the position took on heightened significance last week when California and Delaware regulators made Kolter’s oversight a key part of their agreements to allow OpenAI to form a new business structure to more easily raise capital and make a profit.

Safety has been central to OpenAI’s mission since it was founded as a nonprofit research laboratory a decade ago with a goal of building better-than-human AI that benefits humanity. But after its release of ChatGPT sparked a global AI commercial boom, the company has been accused of rushing products to market before they were fully safe in order to stay at the front of the race. Internal divisions that led to the temporary ouster of CEO Sam Altman in 2023 brought wider attention to concerns that the company had strayed from its mission.

The San Francisco-based organization faced pushback — including a lawsuit from co-founder Elon Musk — when it began steps to convert itself into a more traditional for-profit company to continue advancing its technology.

Agreements announced last week by OpenAI along with California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings aimed to assuage some of those concerns.

At the heart of the formal commitments is a promise that decisions about safety and security must come before financial considerations as OpenAI forms a new public benefit corporation that is technically under the control of its nonprofit OpenAI Foundation.

Kolter will be a member of the nonprofit’s board but not of the for-profit board. He will, however, have “full observation rights” to attend all for-profit board meetings and have access to information it gets about AI safety decisions, according to Bonta’s memorandum of understanding with OpenAI. Kolter is the only person, besides Bonta, named in the lengthy document.

Kolter said the agreements largely confirm that his safety committee, formed last year, will retain the authorities it already had. The other three members also sit on the OpenAI board — one of them is former U.S. Army General Paul Nakasone, who was commander of the U.S. Cyber Command. Altman stepped down from the safety panel last year in a move seen as giving it more independence.

“We have the ability to do things like request delays of model releases until certain mitigations are met,” Kolter said. He declined to say if the safety panel has ever had to halt or mitigate a release, citing the confidentiality of its proceedings.

Kolter said there will be a variety of concerns about AI agents to consider in the coming months and years, from cybersecurity – “Could an agent that encounters some malicious text on the internet accidentally exfiltrate data?” – to security concerns surrounding AI model weights, which are numerical values that influence how an AI system performs.

“But there’s also topics that are either emerging or really specific to this new class of AI model that have no real analogues in traditional security,” he said. “Do models enable malicious users to have much higher capabilities when it comes to things like designing bioweapons or performing malicious cyberattacks?”

“And then finally, there’s just the impact of AI models on people,” he said. “The impact to people’s mental health, the effects of people interacting with these models and what that can cause. All of these things, I think, need to be addressed from a safety standpoint.”

OpenAI has already faced criticism this year about the behavior of its flagship chatbot, including a wrongful-death lawsuit from California parents whose teenage son killed himself in April after lengthy interactions with ChatGPT.

Kolter, director of Carnegie Mellon’s machine learning department, began studying AI as a Georgetown University freshman in the early 2000s, long before it was fashionable.

“When I started working in machine learning, this was an esoteric, niche area,” he said. “We called it machine learning because no one wanted to use the term AI because AI was this old-time field that had overpromised and underdelivered.”

Kolter, 42, has been following OpenAI for years and was close enough to its founders that he attended its launch party at an AI conference in 2015. Still, he didn’t expect how rapidly AI would advance.

“I think very few people, even people working in machine learning deeply, really anticipated the current state we are in, the explosion of capabilities, the explosion of risks that are emerging right now,” he said.

AI safety advocates will be closely watching OpenAI’s restructuring and Kolter’s work. One of the company’s sharpest critics says he’s “cautiously optimistic,” particularly if Kolter’s group “is actually able to hire staff and play a robust role.”

“I think he has the sort of background that makes sense for this role. He seems like a good choice to be running this,” said Nathan Calvin, general counsel at the small AI policy nonprofit Encode. Calvin, whom OpenAI targeted with a subpoena at his home as part of its fact-finding to defend against the Musk lawsuit, said he wants OpenAI to stay true to its original mission.

“Some of these commitments could be a really big deal if the board members take them seriously,” Calvin said. “They also could just be the words on paper and pretty divorced from anything that actually happens. I think we don’t know which one of those we’re in yet.”



Trump demands $10,000 bonuses for air traffic controllers who worked during shutdown and pay cuts for those who didn’t amid flight chaos

Air travelers should expect worsening cancellations and delays this week even if the government shutdown ends, as the Federal Aviation Administration moves ahead with deeper cuts to flights at 40 major U.S. airports, officials said Monday.

Day four of the flight restrictions saw airlines scrap over 2,100 flights Monday after canceling 5,500 from Friday to Sunday. Some air traffic controllers — unpaid for more than a month — have stopped showing up, citing the added stress and the need to take second jobs.

President Donald Trump pressured controllers Monday on social media to “get back to work, NOW!!!” He said he wants a $10,000 bonus for controllers who’ve stayed on the job and to dock the pay of those who didn’t.

The head of the controllers union said they’re being used as a “political pawn” in the fight over the shutdown.

Controller shortages combined with wintry weather led to four-hour delays at Chicago O’Hare International Airport on Monday, with the FAA warning that staffing at more than a dozen towers and control centers could cause disruptions in cities including Philadelphia, Nashville and Atlanta.

The Senate on Monday was nearing a vote to end the shutdown, although the measure would still need to clear the House and final passage could be days away. Transportation Secretary Sean Duffy made clear last week that flight cuts will remain until the FAA sees safety metrics improve.

Over the weekend, airlines canceled thousands of flights to comply with the order to drop 4% of flights at 40 of the nation’s busiest airports. That will rise to 6% on Tuesday and 10% by week’s end, the FAA says.

Already, travelers are growing angry.

“All of this has real negative consequences for millions of Americans, and it’s 100% unnecessary and avoidable,” said Todd Walker, whose flight from San Francisco to Washington state was canceled over the weekend, causing him to miss his mom’s 80th birthday party.

One out of every 10 flights nationwide was scratched Sunday — the fourth-worst day for cancellations in almost two years, according to aviation analytics firm Cirium.

The FAA expanded flight restrictions Monday, barring business jets and many private flights from using a dozen airports already under commercial flight limits.

Airports nationwide have seen intermittent delays since the shutdown began because the FAA slows air traffic when it’s short on controllers to ensure flights remain safe.

The shutdown has made controllers’ demanding jobs even more stressful, leading to fatigue and increased risks, said Nick Daniels, president of the National Air Traffic Controllers Association.

“This is the erosion of the safety margin the flying public never sees, but America relies on every single day,” the union chief said at a news conference Monday.

Some controllers can’t afford child care to be able to come to work while others are moonlighting as delivery drivers or even selling plasma to pay their bills, Daniels said. The number who are retiring or quitting is “growing by the day,” he said.

During the six weekends since the shutdown began, an average of 30 air traffic control facilities had staffing issues. That’s almost four times the number on weekends this year before the shutdown, according to an Associated Press analysis of operations plans sent through the Air Traffic Control System Command Center system.

Tuesday will be the second missed payday for controllers and other FAA employees. It’s unclear how quickly they might be paid once the shutdown ends — it took more than two months to receive full back pay in 2019, Daniels said.

The shutdown and money worries have become regular “dinnertime conversations” for Amy Lark and her husband, both air traffic controllers in the Washington, D.C. area.

“Yesterday, my kids asked me how long we could stay in our house,” Lark said. Still, she said controllers remain “100% committed.”

The government has struggled for years with a shortage of controllers, and Duffy said the shutdown has worsened the problem. Before the shutdown, the transportation secretary had been working to hire more controllers, speed up training and offer retention bonuses.

Duffy warned over the weekend that if the shutdown drags on, air travel may “be reduced to a trickle” by Thanksgiving week.

___

Yamat reported from Las Vegas and Funk from Omaha, Nebraska. Associated Press writers Ken Sweet, Wyatte Grantham-Philips and Michael R. Sisak in New York, Stephen Groves and Kevin Freking in Washington, and John Seewer in Toledo, Ohio, contributed to this report.



Supreme Court rejects call to overturn its decision legalizing same-sex marriage nationwide

The Supreme Court on Monday rejected a call to overturn its landmark decision that legalized same-sex marriage nationwide.

The justices, without comment, turned away an appeal from Kim Davis, the former Kentucky court clerk who refused to issue marriage licenses to same-sex couples after the high court’s 2015 ruling in Obergefell v. Hodges.

Davis had been trying to get the court to overturn a lower-court order for her to pay $360,000 in damages and attorney’s fees to a couple denied a marriage license.

Her lawyers repeatedly invoked the words of Justice Clarence Thomas, who alone among the nine justices has called for erasing the same-sex marriage ruling.

Thomas was among four dissenting justices in 2015. Chief Justice John Roberts and Justice Samuel Alito are the other dissenters who are on the court today.

Roberts has been silent on the subject since he wrote a dissenting opinion in the case. Alito has continued to criticize the decision, but he said recently he was not advocating that it be overturned.

Justice Amy Coney Barrett, who was not on the court in 2015, has said that there are times when the court should correct mistakes and overturn decisions, as it did in the 2022 case that ended a constitutional right to abortion.

But Barrett has suggested recently that same-sex marriage might be in a different category than abortion because people have relied on the decision when they married and had children.

Human Rights Campaign president Kelley Robinson praised the justices’ decision not to intervene. “The Supreme Court made clear today that refusing to respect the constitutional rights of others does not come without consequences,” Robinson said in a statement.

Davis drew national attention to eastern Kentucky’s Rowan County when she turned away same-sex couples, saying her faith prevented her from complying with the high court ruling. She defied court orders to issue the licenses until a federal judge jailed her for contempt of court in September 2015.

She was released after her staff issued the licenses on her behalf but removed her name from the form. The Kentucky legislature later enacted a law removing the names of all county clerks from state marriage licenses.

Davis lost a reelection bid in 2018.



You don’t hate AI because of genuine dislike. No, there’s a $1 billion plot by the ‘Doomer Industrial Complex’ to brainwash you, Trump’s AI czar says

Americans’ distrust of AI, David Sacks insists, isn’t because the technology threatens your job, privacy and the future of the economy itself. No, according to the venture-capitalist-turned-Trump-advisor, it’s all part of a $1 billion plot by what he calls the “Doomer Industrial Complex,” a shadow network of Effective Altruist billionaires bankrolled by the likes of convicted FTX founder Sam Bankman-Fried and Facebook co-founder Dustin Moskovitz.

In an X post this week, Sacks argued that public distrust of AI isn’t organic at all — it’s manufactured. He pointed to research by tech-culture scholar Nirit Weiss-Blatt, who has spent years mapping the “AI doom” ecosystem of think tanks, nonprofits, and futurists.

Weiss-Blatt documents hundreds of groups that promote strict regulation or even moratoriums on advanced AI systems. She argues that much of the money behind those organizations can be traced to a small circle of donors in the Effective Altruism movement, including Facebook co-founder Dustin Moskovitz, Skype’s Jaan Tallinn, Ethereum creator Vitalik Buterin, and convicted FTX founder Sam Bankman-Fried.

According to Weiss-Blatt, those philanthropists have collectively poured more than $1 billion into efforts to study or mitigate “existential risk” from AI. However, she pointed to Moskovitz’s organization, Open Philanthropy, as “by far” the largest donor.

The organization pushed back strongly on the idea that it was projecting sci-fi-esque doom and gloom scenarios.

“We believe that technology and scientific progress have drastically improved human well-being, which is why so much of our work focuses on these areas,” an Open Philanthropy spokesperson told Fortune. “AI has enormous potential to accelerate science, fuel economic growth, and expand human knowledge, but it also poses some unprecedented risks — a view shared by leaders across the political spectrum. We support thoughtful nonpartisan work to help manage those risks and realize the huge potential upsides of AI.”

But Sacks, who has close ties to Silicon Valley’s venture community and served as an early executive at PayPal, claims that funding from Open Philanthropy has done more than just warn of the risks; it’s bought a global PR campaign warning of “Godlike” AI. He cited polling showing that 83% of respondents in China view AI’s benefits as outweighing its harms — compared with just 39% in the United States — as evidence that what he calls “propaganda money” has reshaped the American debate.

Sacks has long pushed for an industry-friendly, no-regulation approach to AI, and to technology more broadly, framed as a race to beat China.

Sacks’ venture capital firm, Craft Ventures, did not immediately respond to a request for comment.

What is Effective Altruism?

The “propaganda money” Sacks refers to comes largely from the Effective Altruism (EA) community, a wonky group of idealists, philosophers, and tech billionaires who believe humanity’s biggest moral duty is to prevent future catastrophes, including rogue AI.

The EA movement, founded a decade ago by Oxford philosophers William MacAskill and Toby Ord, encourages donors to use data and reason to do the most good possible. 

That framework led some members to focus on “longtermism,” the idea that preventing existential risks such as pandemics, nuclear war, or rogue AI should take priority over short-term causes.

While some EA-aligned organizations advocate heavy AI regulation or even “pauses” in model development, others, like Open Philanthropy, take a more technical approach, funding alignment research at companies like OpenAI and Anthropic. The movement’s influence grew rapidly before the 2022 collapse of FTX, whose founder Bankman-Fried had been one of EA’s biggest benefactors.

Matthew Adelstein, a 21-year-old college student who has a prominent Substack on EA, notes that the landscape is far from the monolithic machine that Sacks describes. Weiss-Blatt’s own map of the “AI existential risk ecosystem” includes hundreds of separate entities — from university labs to nonprofits and blogs — that share similar language but not necessarily coordination. Yet Weiss-Blatt maintains that the “inflated ecosystem” is not “a grassroots movement. It’s a top down one.”

Adelstein disagrees, noting that the reality is “more fragmented and less sinister” than Weiss-Blatt and Sacks portray.

“Most of the fears people have about AI are not the ones the billionaires talk about,” Adelstein told Fortune. “People are worried about cheating, bias, job loss — immediate harms — rather than existential risk.”

He argues that pointing to wealthy donors misses the point entirely. 

“There are very serious risks from artificial intelligence,” he said. “Even AI developers think there’s a few-percent chance it could cause human extinction. The fact that some wealthy people agree that’s a serious risk isn’t an argument against it.”

To Adelstein, longtermism isn’t a cultish obsession with far-off futures but a pragmatic framework for triaging global risks. 

“We’re developing very advanced AI, facing serious nuclear and bio-risks, and the world isn’t prepared,” he said. “Longtermism just says we should do more to prevent those.”

He also brushed off accusations that EA has turned into a quasi-religious movement.

“I’d like to see the cult that’s dedicated to doing altruism effectively and saving 50,000 lives a year,” he said with a laugh. “That would be some cult.”


