Exclusive: Instacart bought his self-checkout startup for $350M. Now he’s teaming with a Google DeepMind alum to build low-cost robots

When Instacart acquired Lindon Gao’s self-checkout shopping startup, Caper AI, for $350 million in 2021, it marked a big win for the founder in a competitive space led by Amazon through its checkout-free Go stores. Caper used sensors, computer vision, and other AI techniques to detect items in customers’ shopping carts so they could avoid cashier lines. 

Six months ago, Gao left his role at Instacart and Caper to tackle a new challenge—this time in robotics. His new company, Dyna Robotics, emerged from stealth on Tuesday with a $23.5 million seed round, co-led by CRV and First Round Capital, to build more affordable, easy-to-deploy AI-powered robots for brick-and-mortar businesses. The robots are intended to handle tasks ranging from dangerous to dull and dirty, including chopping food, loading dishes, folding laundry and cleaning toilets.

Gao founded Dyna Robotics, which he said is being valued at around $100 million in the latest funding round, with one of his Caper co-founders, engineer York Yang, as well as Jason Ma, a Google DeepMind alum. Ma was the lead author on Eureka, a widely read paper on training robots with human-like dexterity.

Robots that are single-task experts

With Caper, Gao said he helped grocery chains like ShopRite, Kroger, and Aldi, as well as independent grocers, grow their businesses. Now, with his robotics startup, he hopes to do the same for a new set of customers: restaurants, grocery stores, and dry cleaners.

Most companies in the “physical AI” space—that is, AI for real-world autonomous systems like robots and self-driving cars—are either working on general-purpose AI models (such as Physical Intelligence and Skild) or humanoid robot hardware (like Figure AI and Agility Robotics). Dyna Robotics, however, is going a different route, building simple hardware in the form of a pair of stationary robotic arms, powered by an AI model trained to do one specific task or set of tasks. Gao said that, as far as he is aware, Dyna is the only non-humanoid robot company trying to put robot AI models fine-tuned on specific datasets into production.

This narrow focus keeps costs down. Robots from some of the world’s most highly valued robotics startups cost hundreds of thousands of dollars, if they’re even available at all. Dyna’s are expected to cost tens of thousands of dollars when they go on sale. There are no firm dates yet for when the robots will debut, but Gao said it will likely be in the next few months. His robots are currently in trial production “but not fully live yet,” he said.

The goal is to automate tasks that many people don’t want to do. “That’s a very, very high value for businesses of all kinds,” he added, especially since robots for many of these tasks don’t exist. For example, traditional machine learning struggles with the unpredictable nature of jobs like folding cloth, he explained. But today’s AI models can be trained to handle it—especially as Dyna Robotics focuses on collecting extensive data for specific tasks, rather than amassing vast and costly real-world data across a wide range of actions.

General-purpose robots will take a while

That is where Ma’s Eureka research comes in. While the tasks he explored in the paper—teaching a robot hand to spin a pen or a robot dog to balance on a yoga ball—are not especially practical, Gao said the two bonded over the same idea: creating an expert-level AI model for robots that can go into production very quickly. “I think he shares a very similar sentiment as me with regards to robotics, which is that getting to general-purpose robots is not going to happen as quickly as we hoped,” he said. Gao, Yang, and Ma are still working toward the ultimate goal of general-purpose AI-powered robots, but in the meantime Dyna’s robots master one task at a time, which lets its AI models learn and improve in production environments.

The robotics industry, of course, is only getting more crowded: As of March 2024, there were reportedly over 1,500 robotics startups globally. And for many, convincing small to medium-sized business customers that robots are a better investment than humans may remain a tough sell.

However, Gao reiterated that few companies are currently able to scale their work quickly into production, as Dyna Robotics plans to do. In addition, there is a labor shortage in the types of jobs Dyna Robotics is tackling, such as food preparation, so he said convincing customers of the need is not difficult.

The biggest challenge, he said, is to get the robot AI models to work reliably and efficiently in a real-world production environment. “Right now the speed of foundation models is around 10-30% of human-level efficiency, and we are doing a ton of research to get us closer to human-level speeds,” he said.

Gao said the company, based in Redwood City, Calif., in the heart of Silicon Valley, already has 30 employees. As a second-time founder, he said he knows how to build products faster than before. “We have a very core philosophy that good engineering is still going to ultimately win,” he said. 

Still, starting Dyna Robotics is much harder than his Caper experience, Gao admitted. “The first time you have no baggage,” he said. “But now I have some sort of expectations and track record. I also want to prove to myself that I’m not a one-hit wonder.”

This story was originally featured on Fortune.com



Wall Street bonus pot surges to record-high $47.5 billion, but the outlook is dim 

• Employment in New York’s securities industry reached its highest level in three decades at more than 200,000 workers, state Comptroller Thomas DiNapoli reported on Wednesday. Along with sky-high employment, the total estimated 2024 bonus pool among New York Stock Exchange member firms is the largest since record-keeping began in 1987. But looming uncertainty over federal policy is muddying the industry outlook for 2025.

Wall Street is back and profits are soaring. And according to a new report, so are bonuses.

New York State Comptroller Thomas P. DiNapoli reported on Wednesday that Wall Street’s annual wealth infusion for employees—its bonus pool—notched a new record at $47.5 billion in 2024, an increase of 34% over the year prior. The bonus pot hasn’t been anywhere near this level since 2021, when the total swelled to $42.7 billion before tumbling back down to $33.9 billion in 2022.

The comptroller’s office publishes a yearly estimate of bonus payouts for those employed in the securities industry based on personal income tax withholding trends and cash bonuses paid. The average bonus, accounting for those at the entry level all the way up to those with panoramic views in corner offices, was $244,700, DiNapoli found. A year earlier, the average payout was $186,100. The 131 New York Stock Exchange member firms’ profits rose 90% in 2024, the comptroller reported.

“The record high bonus pool reflects Wall Street’s very strong performance in 2024,” DiNapoli said in a statement. “This financial market strength is good news for New York’s economy and our fiscal position, which relies on the tax revenue it generates. However, increasing uncertainty in the economy amid significant federal policy changes may dampen the outlook for parts of the securities industry in 2025.”

Tariffs have claimed a starring role among the many policy changes implemented by the Trump administration, rocking major market averages with uncertainty and volatility. The S&P 500 is down 3% the past month and 1.5% year to date. One cascade effect of those federal policy changes—and of Tesla CEO Elon Musk’s presence in Washington, D.C.—has been pressure on DiNapoli. As comptroller, DiNapoli oversees the state’s $270 billion retirement fund, which holds a stake in Tesla valued at more than $800 million. A group of 23 Democratic state senators urged the comptroller this month to divest from the Musk-helmed automaker.

According to the 23 state senators who reached out to DiNapoli, the Tesla stake is the fund’s seventh-largest holding, and it is in jeopardy while Musk is the CEO.

“Musk’s actions leading President Donald Trump’s Department of Government Efficiency (DOGE) have led to a deterioration of the company’s reputation among its most loyal customers,” states the letter, signed by Senator Patricia Fahy (D.-Albany) and 22 other senators. 

Tesla did not immediately respond to a request for comment. 

Meanwhile, the traders, supervisors, analysts, and portfolio managers in New York have a front-row seat to the volatility. The lucrative industry, with an average annual salary of $471,000, helps make up the beating heart of New York City, with 69% of employees residing in one of the five boroughs. More than a quarter of New York City residents who work in securities and finance make more than $250,000 a year. Similarly, more than half of commuters from Westchester County and 41% of commuters from Long Island who work in securities make more than $250,000 a year, according to New York state labor figures.

DiNapoli reported that one in 11 jobs in New York City is somehow linked to the securities industry, and the state derives 19% of its tax collections from it. The 2024 bonus pot will gin up an extra $600 million in state income tax this year, plus an additional $275 million for New York City’s coffers in 2024 compared with 2023. Securities industry employment is the highest it’s been in some 30 years, at 201,500 workers versus 198,400 the year before, and higher than in any other state, the comptroller reported.

Still, while New York City boasts the largest number of securities-industry jobs in the U.S., the figure has tumbled consistently since the ’90s, according to labor data. In 1990, a third of all securities jobs were in NYC, compared to 17.4% in 2024. And while New York state added 15,600 securities industry jobs between 2019 and 2023, Texas outpaced it by adding 19,400 jobs of its own. Florida added 13,300 jobs during the same period. 

Also worth noting, major financial firms including Goldman Sachs and Citigroup have announced job cuts and restructurings, which could impact headcount in the state’s securities industry. 

This story was originally featured on Fortune.com





Don’t water down Europe’s AI rules to please Trump, EU lawmakers warn

Lawmakers who helped shape the European Union’s landmark AI Act are worried that the 27-member bloc is considering watering down aspects of the AI rules in the face of lobbying from U.S. technology companies and pressure from the Trump administration.

The EU’s AI Act was approved just over a year ago, but its rules for general-purpose AI models like OpenAI’s GPT-4o will only come into effect in August. Ahead of that, the European Commission—which is the EU’s executive arm—has tasked its new AI Office with preparing a code of practice for the big AI companies, spelling out how exactly they will need to comply with the legislation.

But now a group of European lawmakers, who helped to refine the law’s language as it passed through the legislative process, is voicing concern that the AI Office will blunt the impact of the EU AI Act in “dangerous, undemocratic” ways. The leading American AI vendors have amped up their lobbying against parts of the EU AI Act recently, and the lawmakers are also concerned that the Commission may be looking to curry favor with the Trump administration, which has already made it clear it sees the AI Act as anti-innovation and anti-American.

The EU lawmakers say the third draft of the code, which the AI Office published earlier this month, takes obligations that are mandatory under the AI Act and inaccurately presents them as “entirely voluntary.” These obligations include testing models to see how they might allow things like wide-scale discrimination and the spread of disinformation.

In a letter sent Tuesday to European Commission vice president and tech chief Henna Virkkunen, first reported by the Financial Times but published in full for the first time below, current and former lawmakers said making these model tests voluntary could potentially allow AI providers who “adopt more extreme political positions” to warp European elections, restrict freedom of information, and disrupt the EU economy.

“In the current geopolitical situation, it is more important than ever that the EU rises to the challenge and stands strong on fundamental rights and democracy,” they wrote.

Brando Benifei, who was one of the European Parliament’s lead negotiators on the AI Act text and the first signatory on this week’s letter, told Fortune Wednesday that the political climate may have something to do with the watering-down of the code of practice. The second Trump administration is antagonistic toward European tech regulation; Vice President JD Vance warned in a fiery speech at the Paris AI Action Summit in February that “tightening the screws on U.S. tech companies” would be a “terrible mistake” for European countries.

“I think there is pressure coming from the United States, but it would be very naive [to think] that we can make the Trump administration happy by going in this direction, because it would never be enough,” noted Benifei, who currently chairs the European Parliament’s delegation for relations with the U.S.

Benifei said he and other former AI Act negotiators had met with the Commission’s AI Office experts, who are drafting the code of practice, on Tuesday. On the basis of that meeting, he expressed optimism that the offending changes could be rolled back before the code is finalized.

“I think the issues we raised have been considered, and so there is space for improvement,” he said. “We will see that in the next weeks.”

Virkkunen had not provided a response to the letter, nor to Benifei’s comment about U.S. pressure, at the time of publication. However, she has previously insisted that the EU’s tech rules are fairly and consistently applied to companies from any country. Competition Commissioner Teresa Ribera has also maintained that the EU “cannot transact on human rights [or] democracy and values” to placate the U.S.

Shifting obligations

The key part of the AI Act here is Article 55, which places significant obligations on the providers of general-purpose AI models that come with “systemic risk”—a term that the law defines as meaning the model could have a major impact on the EU economy or has “actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale.”

The act says that a model can be presumed to have systemic risk if the computational power used in its training “measured in floating point operations [FLOPs] is greater than 10^25.” This likely includes many of today’s most powerful AI models, though the European Commission can also designate any general-purpose model as having systemic risk if its scientific advisors recommend doing so.
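To put that threshold in rough perspective, one common back-of-the-envelope heuristic (not language from the Act itself) estimates training compute as roughly 6 times a model’s parameter count times its number of training tokens. The short sketch below uses that heuristic with entirely hypothetical model figures to show how a provider might gauge whether a model would be presumed to carry systemic risk under the compute test.

```python
# Illustrative sketch only: the ~6 * parameters * tokens rule of thumb for
# training compute is a common community heuristic, not part of the AI Act,
# and the model figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # compute level at which systemic risk is presumed

def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough training-compute estimate using the ~6*N*D heuristic."""
    return 6.0 * num_parameters * num_training_tokens

# Hypothetical example: a 400-billion-parameter model trained on 15 trillion tokens.
flops = estimate_training_flops(400e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~3.60e+25
print("Presumed systemic risk under the compute test:",
      flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # True
```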

Under the law, providers of such models have to evaluate them “with a view to identifying and mitigating” any systemic risks. This evaluation has to include adversarial testing—in other words, trying to get the model to do bad things, to figure out what needs to be safeguarded against. They then have to tell the European Commission’s AI Office about the evaluation and what it found.

This is where the third version of the draft code of practice becomes problematic.

The first version of the code was clear that AI companies need to treat large-scale disinformation or misinformation as systemic risks when evaluating their models, because of their threat to democratic values and their potential for election interference. The second version didn’t specifically talk about disinformation or misinformation, but still said that “large-scale manipulation with risks to fundamental rights or democratic values,” such as election interference, was a systemic risk.

Both the first and second versions were also clear that model providers should consider the possibility of large-scale discrimination as a systemic risk.

But the third version only lists risks to democratic processes, and to fundamental European rights such as non-discrimination, as being “for potential consideration in the selection of systemic risks.” The official summary of changes in the third draft maintains that these are “additional risks that providers may choose to assess and mitigate in the future.”

In this week’s letter, the lawmakers who negotiated with the Commission over the final text of the law insisted that “this was never the intention” of the agreement they struck.

“Risks to fundamental rights and democracy are systemic risks that the most impactful AI providers must assess and mitigate,” the letter read. “It is dangerous, undemocratic and creates legal uncertainty to fully reinterpret and narrow down a legal text that co-legislators agreed on, through a Code of Practice.”

This story was originally featured on Fortune.com


