
Business

Chinese open source AI models are eating the world—the U.S. is the exception

Hello and welcome to Eye on AI. In this edition…Gemini 3 puts Google at the top of the AI leaderboards…the White House delays an executive order banning state-level AI regulation…TSMC sues a former exec now at Intel…Google Research develops a new, post-Transformer AI architecture…OpenAI pushes user engagement despite growing evidence that some users develop harmful dependencies and delusions after prolonged chatbot interactions.

I spent last week at the Fortune Innovation Forum in Kuala Lumpur, Malaysia, where I moderated several panel discussions on AI and its impacts. Among the souvenirs I brought back from KL was a newfound appreciation for how much businesses outside the U.S. and Europe want to build on open source AI models, and for how strongly they are gravitating to open source models from China.

My colleague Bea Nolan wrote a bit about this phenomenon in this newsletter a few weeks ago, but being on the ground in Southeast Asia really brought the point home: the U.S., despite having the most capable AI models out there, could well lose the AI race. And the reason is, as Chan Yip Pang, the executive director at Vertex Ventures Southeast Asia and India, said on a panel I moderated in KL, that the U.S. AI companies “build for perfection” while the Chinese AI companies “build for diffusion.”

One sometimes hears a U.S. executive, such as Airbnb CEO Brian Chesky, say that they like Chinese open source AI models because they offer good-enough performance at a very affordable price. But that attitude remains, for now at least, unusual. Many of the U.S. and European executives I talk to say they prefer the performance advantages of proprietary models from OpenAI, Anthropic, or Google. For some tasks, even an 8% performance advantage (the current gap separating top proprietary models from Chinese open source models on key software development benchmarks) can mean the difference between an AI solution that meets the threshold for deployment at scale and one that doesn’t. These execs also say they have more confidence in the safety and security guardrails built around proprietary models.

Asia is building AI applications on Chinese open source models

That viewpoint was completely different from what I heard from the executives I met in Asia. Here, the concern was much more about having control over both data and costs. On these metrics, open source models tended to win out. Jinhui Yuan, the cofounder and CEO of SiliconFlow, a leading Chinese AI cloud hosting service, said that his company had developed numerous techniques to run open source models more cost-effectively, meaning using them to accomplish a task was significantly cheaper than trying to do the same thing with proprietary AI models. What’s more, he said that most of his customers had found that if they fine-tuned an open source model on their own data for a specific use case, they could achieve performance levels that beat proprietary models—without any risk of leaking sensitive or competitive data.

That was a point that Vertex’s Pang also emphasized. He cautioned that while proprietary model providers also offer companies services to fine-tune on their own data, usually with assurances that this data will not be used for wider training by the AI vendor, “you never know what happens behind the scenes.”

Using a proprietary model also means you are giving up control over a key cost. He says he tells the startups he is advising that if they are building an application that is fundamental to their competitive advantage or core product, they should build it on open source. “If you are a startup building an AI native application and you are selling that as your main service, you better jolly well control the technology stack, and to be able to control it, open source would be the way to go,” he said.

Cynthia Siantar, the CEO of Dyna.AI, which is based in Singapore and builds AI applications for financial services, also said she felt some of the Chinese open source models performed much better in local languages.

But what about the argument that open source AI is less secure? Cassandra Goh, the CEO of Silverlake Axis, a Malaysian company that provides technology solutions to financial services firms, said that models had to be secured within a system—for instance, with screening tools applied to prompts to prevent jailbreaking and to outputs to filter out potential problems. This was true whether the underlying model was proprietary or open source, she said.

The conversation definitely made me think that OpenAI and Anthropic, both of which are rapidly trying to expand their global footprint, may run into headwinds, particularly in the middle-income countries of Southeast Asia, the Middle East, North Africa, and Latin America. It is further evidence that the U.S. probably needs to do far more to develop a more robust open source AI ecosystem beyond Meta, which has been the only significant American player in the open source frontier model space to date. (IBM has some open source foundation models, but they are not as capable as the leading models from OpenAI and Anthropic.)

Should “bridge countries” band together?

And that’s not the only way in which this trip to Asia proved eye-opening. It was also fascinating to see the plans to build out AI infrastructure throughout the region. The Malaysian state of Johor, in particular, is trying to position itself as the data center hub for not just nearby Singapore, but for much of Southeast Asia. (Discussions about a tie-up with nearby Indonesia to share data center capacity are already underway.)

Johor has plans to bring on 5.8 gigawatts of data center projects in the coming years, which would consume basically all of the state’s current electricity generation capacity. The state—and Malaysia as a whole—has plans to add significantly more electricity generation, from both gas-powered plants and big solar farms, by 2030. Yet concerns are growing about what this generation capacity expansion will mean for consumer electricity bills and whether the data centers will drink up too much of the region’s fresh water. (Johor officials have told data center developers to pause development of new water-cooled facilities until 2027 amid concerns about water shortages.)

Exactly how regional players will align in the growing geopolitical competition between the U.S. and China over AI technology is a hot topic. Many seem eager to find a path that would allow them to use technology from both superpowers without having to choose a side or risk becoming a “servant” of either power. But whether they will be able to walk this tightrope is a big open question.

Earlier this week, a group of 30 policy experts from Mila (the Quebec Artificial Intelligence Institute founded by AI “godfather” and Turing Award winner Yoshua Bengio), the Oxford Martin AI Governance Initiative, and a number of other European, East Asian, and South Asian institutions jointly issued a white paper calling on a number of middle-income countries (which they called “bridge powers”) to band together to develop and share AI capacity and models, so that they could achieve a degree of independence from American and Chinese AI tech.

Whether such an alliance—a kind of non-aligned movement of AI—can be achieved diplomatically and commercially, however, seems highly uncertain. But it is an idea that I am sure politicians in these bridge countries will be considering.

With that, here’s the rest of today’s AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

If you want to learn more about how AI can help your company to succeed and hear from industry leaders on where this technology is heading, I hope you’ll consider joining me at Fortune Brainstorm AI San Francisco on Dec. 8–9. Among the speakers confirmed to appear so far are Google Cloud chief Thomas Kurian, Intuit CEO Sasan Goodarzi, Databricks CEO Ali Ghodsi, Glean CEO Arvind Jain, Amazon’s Panos Panay, and many more. Register now.

FORTUNE ON AI

Amazon’s layoffs and leaked AI plans beg the question: Is the era of robot-driven unemployment upon us?—by Jason del Rey

Sam Altman says OpenAI’s first device is iPhone-level revolutionary but brings ‘peace and calm’ instead of ‘unsettling’ flashing lights and notifications—by Marco Quiroz-Gutierrez

Deloitte just got caught again citing fabricated and AI-generated research—this time in a million-dollar report for a Canadian provincial government—by Nino Paoli

Lovable’s CEO targets enterprise customers as the ‘vibe-coding’ unicorn doubles its annual revenue to $200 million in just four months—by Beatrice Nolan

AI IN THE NEWS

White House launches “Genesis Mission” to give AI-driven boost to science. President Trump signed an executive order launching what he is calling the “Genesis Mission,” a massive federal initiative to harness artificial intelligence and government science datasets via the U.S. Department of Energy and its national laboratories. The mission aims to build a unified AI-driven research platform—linking supercomputers, university and industry partners, and federal data—to accelerate breakthroughs in fields like energy, engineering, biotech, and national security. While pitched as a scientific “moonshot”-style effort, the initiative faces questions about its funding model and how it will manage sensitive national-security and proprietary data. Read more here from Reuters.

TSMC sues former executive who defected to Intel over alleged trade secret theft. TSMC has sued former senior executive Lo Wei-Jen, now at Intel, alleging he took or could disclose the company’s trade secrets, the Financial Times reports. The company alleges that Lo told it he planned to enter academia after retiring in July. The case underscores intensifying geopolitical and commercial pressures in the global race for advanced chipmaking, as TSMC—responsible for more than 90% of the world’s leading-edge semiconductors—faces rising competition backed by a major U.S. government investment in Intel.

Google debuts Gemini 3 model, hailed by the company and some users as a big advance. Google launched its Gemini 3 large language model last week. The model surpassed rival models from OpenAI and Anthropic on a wide range of benchmark tests and its performance seems to have largely impressed users who have tried it, according to social media posts and blogs. The launch of Gemini 3—which Google immediately integrated into its AI-powered search features, such as AI Overviews and “AI Mode” in Google Search—is being hailed as a turning point in the AI race, helping restore investor confidence in Google-parent company Alphabet after years of anxiety about it losing ground. You can read more from the Wall Street Journal here.

Anthropic premieres Claude Opus 4.5. Anthropic unveiled Claude Opus 4.5, its newest and most powerful AI model, designed to excel at complex business tasks and coding. The premiere—Anthropic’s third major model release in two months—comes as the company’s valuation has surged to roughly $350 billion following multibillion-dollar investments from Microsoft and Nvidia. Anthropic says Opus 4.5 outperforms Google’s Gemini 3 Pro (see above news item) and OpenAI’s GPT-5.1 on coding benchmarks and even beat human candidates on its internal engineering exam. The model is rolling out alongside upgraded tools including Claude for Chrome, Claude for Excel, and enhanced developer features, according to a story in CNBC.

White House reportedly pauses work on Executive Order targeting state AI laws. Reuters reports that the White House has paused a draft executive order that would have aggressively challenged state AI regulations by directing the Justice Department to sue states and potentially withhold federal broadband funds from those that impose AI rules. The move—backed by major tech firms seeking uniform national standards—sparked bipartisan criticism from state officials and lawmakers, who argued it would undermine consumer protection and was potentially unconstitutional. The administration may still try to include a moratorium on state-level AI rules in the National Defense Authorization Act or another spending bill that Congress has to pass in the coming weeks. But the opposition so far highlights the intense political backlash against federal attempts to preempt state AI laws.

OpenAI offices locked down due to concerns about former Stop AI activist. OpenAI employees in San Francisco were briefly instructed to remain inside the office after police received a report that one of the cofounders of Stop AI had allegedly made threats to harm staff and might have acquired weapons. Stop AI publicly disavowed the individual and reaffirmed its commitment to nonviolence. Stop AI is an activist group trying to stop the development of increasingly powerful AI systems, which it fears are already harming society and also represent a potentially existential risk to humanity. The group has engaged in a number of public demonstrations and acts of civil disobedience outside the offices of major AI labs. Read more here from Wired.

EYE ON AI RESEARCH

Are we inching closer to a post-Transformer world? It’s been eight years since researchers at Google published their landmark research paper, “Attention Is All You Need,” which introduced the world to the Transformer, a neural network design that is particularly good at predicting the next item in a sequence even when it depends on items that appeared much earlier in that sequence. Transformers are what all of today’s large language models are based on. But AI models based on Transformers have several drawbacks: they don’t learn continuously, and, like most neural networks, they don’t have any kind of long-term memory. So, for several years now, researchers have been wondering whether some new fundamental AI architecture will come along to displace the Transformer.

Well, we might be getting closer. Earlier this month, researchers—once again from Google—published a paper on what they are calling Nested Learning. It essentially breaks the neural network’s architecture into nested groups of digital neurons that update their weights at different frequencies, based on how surprising any given piece of information is compared with what that part of the model would have predicted. The parts that update their weights more slowly form the longer-term memory of the model, while the parts that update their weights more frequently form a kind of shorter-term “working memory.” Nested between them are blocks of neurons that update at a medium speed, modulating between the shorter- and longer-term memories. As an example of how this can work in practice, the researchers created an architecture they call HOPE that learns its own best way of optimizing each of these nested blocks. You can read the Google research here.
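The multi-timescale idea is easier to see in code. Here is a deliberately tiny, illustrative sketch—not Google’s actual HOPE implementation; the class names, update intervals, and the uniform “surprise” signal are all my own assumptions—showing nested parameter blocks that update on different schedules, with the fastest block acting as working memory and the slowest as long-term memory:

```python
# Illustrative sketch of multi-timescale ("nested") parameter updates.
# Each block holds a weight and an update interval; slower blocks change
# rarely (long-term memory), faster blocks change often (working memory).

class NestedBlock:
    def __init__(self, name, update_every, lr):
        self.name = name
        self.update_every = update_every  # steps between weight updates
        self.lr = lr                      # learning rate for this block
        self.weight = 0.0

    def maybe_update(self, step, surprise):
        # Update only on this block's schedule, scaled by how surprising
        # the input was relative to what the model predicted.
        if step % self.update_every == 0:
            self.weight += self.lr * surprise

def run(blocks, surprises):
    for step, s in enumerate(surprises, start=1):
        for b in blocks:
            b.maybe_update(step, s)
    return {b.name: b.weight for b in blocks}

blocks = [
    NestedBlock("working", update_every=1, lr=0.5),    # short-term memory
    NestedBlock("medium", update_every=4, lr=0.5),     # mid-speed modulator
    NestedBlock("longterm", update_every=16, lr=0.5),  # long-term memory
]
weights = run(blocks, surprises=[1.0] * 16)
print(weights)  # the fast block accumulates far more change than the slow one
```

After 16 equally “surprising” inputs, the working-memory block has updated 16 times, the medium block 4 times, and the long-term block once—the scheduling mechanism, not the toy arithmetic, is the point.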

AI CALENDAR

Nov. 26-27: World AI Congress, London.

Dec. 2-7: NeurIPS, San Diego.

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

Jan. 6: Fortune Brainstorm Tech CES Dinner. Apply to attend here.

Jan. 19-23: World Economic Forum, Davos, Switzerland.

Feb. 10-11: AI Action Summit, New Delhi, India.

BRAIN FOOD

OpenAI is optimizing for engagement, even though there’s growing evidence its product harms some users. That’s the conclusion of a fascinating New York Times investigation that details how increasing commercial pressures within OpenAI—and a new cadre of executives hired from traditional tech and social media companies—have been driving the company to design ChatGPT to keep users engaged. The company is proceeding down this path, the newspaper reports, even as its own research shows some ChatGPT users develop dangerous emotional and psychological dependencies on the chatbot and that some subset of those become delusional after prolonged dialogues with OpenAI’s AI.

The story is a reminder of why AI regulation is necessary. We’ve seen this movie before with social media, and it doesn’t end well, for individuals or society. Any company that offers its service for free or substantially below cost (the case for most consumer-oriented AI products right now) has a strong incentive to monetize the user either through engagement (and advertising) or, perhaps even worse, through directly paid persuasion. Neither is likely to be in the user’s best interest.




Business

U.S. consumers are so strained they put more than $1B on BNPL during Black Friday and Cyber Monday

Financially strained and cautious customers leaned heavily on buy now, pay later (BNPL) services over the holiday weekend.

Cyber Monday alone generated $1.03 billion in online BNPL sales (a 4.2% increase year over year), with most transactions happening on mobile devices, per Adobe Analytics. Overall, consumers spent $14.25 billion online on Cyber Monday. To put that into perspective, BNPL accounted for more than 7.2% of total online sales that day.
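That 7.2% share is easy to sanity-check from the two figures cited (a back-of-the-envelope calculation, not Adobe’s methodology):

```python
# Back-of-the-envelope check: BNPL's share of Cyber Monday online sales.
bnpl_sales = 1.03e9    # BNPL online sales on Cyber Monday, per Adobe Analytics
total_sales = 14.25e9  # total online sales on Cyber Monday, per Adobe Analytics
share = bnpl_sales / total_sales
print(f"{share:.1%}")  # → 7.2%
```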

As for Black Friday, eMarketer reported $747.5 million in online sales using BNPL services, with platforms like PayPal seeing a 23% uptick in BNPL transactions.

Likewise, digital financial services company Zip reported 1.6 million transactions across 280,000 of its locations over the Black Friday and Cyber Monday weekend. Millennials accounted for the largest share of BNPL purchases (51%), followed by Gen Z, Gen X, and baby boomers, per Zip.

The Adobe data showed that people using BNPL were most likely to spend on categories such as electronics, apparel, toys, and furniture, which is consistent with previous years. This trend also tracks with Zip’s findings that shoppers were primarily investing in tech, electronics, and fashion when using its services.

And while some may be surprised that shoppers are taking on more debt via BNPL (in this economy?!), analysts had already projected a strong shopping weekend. A Deloitte survey forecast that consumers would spend about $650 on average over the Black Friday–Cyber Monday stretch—a 15% jump from 2023.

“US retailers leaned heavily on discounts this holiday season to drive online demand,” Vivek Pandya, lead analyst at Adobe Digital Insights, said in a statement. “Competitive and persistent deals throughout Cyber Week pushed consumers to shop earlier, creating an environment where Black Friday now challenges the dominance of Cyber Monday.”

This report was originally published by Retail Brew.




Business

AI labs like Meta, DeepSeek, and xAI earned worst grades possible on an existential safety index

A recent report card from an AI safety watchdog isn’t one that tech companies will want to stick on the fridge.

The Future of Life Institute’s latest AI safety index found that major AI labs fell short on most measures of AI responsibility, with few letter grades rising above a C. The org graded eight companies across categories like safety frameworks, risk assessment, and current harms.

Perhaps most glaring was the “existential safety” line, where companies scored Ds and Fs across the board. While many of these companies are explicitly chasing superintelligence, they lack a plan for safely managing it, according to Max Tegmark, MIT professor and president of the Future of Life Institute.

“Reviewers found this kind of jarring,” Tegmark told us.

The reviewers in question were a panel of AI academics and governance experts who examined publicly available material as well as survey responses submitted by five of the eight companies.

Anthropic, OpenAI, and Google DeepMind took the top three spots with an overall grade of C+ or C. Then came, in order, Elon Musk’s xAI, Z.ai, Meta, DeepSeek, and Alibaba, all of which got Ds or a D-.

Tegmark blames a lack of regulation that has meant the cutthroat competition of the AI race trumps safety precautions. California recently passed the first law that requires frontier AI companies to disclose safety information around catastrophic risks, and New York is currently within spitting distance as well. Hopes for federal legislation are dim, however.

“Companies have an incentive, even if they have the best intentions, to always rush out new products before the competitor does, as opposed to necessarily putting in a lot of time to make it safe,” Tegmark said.

In lieu of government-mandated standards, Tegmark said the industry has begun to take the group’s regularly released safety indexes more seriously; four of the five American companies now respond to its survey (Meta is the only holdout.) And companies have made some improvements over time, Tegmark said, mentioning Google’s transparency around its whistleblower policy as an example.

But real-life harms reported around issues like teen suicides that chatbots allegedly encouraged, inappropriate interactions with minors, and major cyberattacks have also raised the stakes of the discussion, he said.

“[They] have really made a lot of people realize that this isn’t the future we’re talking about—it’s now,” Tegmark said.

The Future of Life Institute recently enlisted public figures as diverse as Prince Harry and Meghan Markle, former Trump aide Steve Bannon, Apple co-founder Steve Wozniak, and rapper Will.i.am to sign a statement opposing work that could lead to superintelligence.

Tegmark said he would like to see something like “an FDA for AI, where companies first have to convince experts that their models are safe before they can sell them.”

“The AI industry is quite unique in that it’s the only industry in the US making powerful technology that’s less regulated than sandwiches—basically not regulated at all,” Tegmark said. “If someone says, ‘I want to open a new sandwich shop near Times Square,’ before you can sell the first sandwich, you need a health inspector to check your kitchen and make sure it’s not full of rats…If you instead say, ‘Oh no, I’m not going to sell any sandwiches. I’m just going to release superintelligence.’ OK! No need for any inspectors, no need to get any approvals for anything.”

“So the solution to this is very obvious,” Tegmark added. “You just stop this corporate welfare of giving AI companies exemptions that no other companies get.”

This report was originally published by Tech Brew.




Business

Hollywood writers say Warner takeover ‘must be blocked’

Hollywood writers, producers, directors and theater owners voiced skepticism over Netflix Inc.’s proposed $82.7 billion takeover of Warner Bros. Discovery Inc.’s studio and streaming businesses, saying it threatens to undermine their interests.

The Writers Guild of America, which announced in October it would oppose any sale of Warner Bros., reiterated that view on Friday, saying the purchase by Netflix “must be blocked.”

“The world’s largest streaming company swallowing one of its biggest competitors is what antitrust laws were designed to prevent,” the guild said in an emailed statement. “The outcome would eliminate jobs, push down wages, worsen conditions for all entertainment workers, raise prices for consumers, and reduce the volume and diversity of content for all viewers.”

The worries raised by the movie and TV industry’s biggest trade groups come against the backdrop of falling movie and TV production, slack ticket sales and steep job cuts in Hollywood. Another legacy studio, Paramount, was sold earlier this year.

Warner Bros. accounts for about a fourth of North American ticket sales — roughly $2 billion — and is being acquired by a company that has long shunned theatrical releases for its feature films. As part of the deal, Netflix co-CEO Ted Sarandos has promised Warner Bros. will continue to release movies in theaters.

“The proposed acquisition of Warner Bros. by Netflix poses an unprecedented threat to the global exhibition business,” Michael O’Leary, chief executive officer of the theatrical trade group Cinema United, said in an emailed statement Friday. “The negative impact of this acquisition will impact theaters from the biggest circuits to one-screen independents.”

The buyout of Warner Bros. by Netflix “would be a disaster,” James Cameron, the director of some of Hollywood’s highest-grossing films in history including Titanic and Avatar, said in late November on The Town, an industry-focused podcast. “Sorry Ted, but jeez. Sarandos has gone on record saying theatrical films are dead.”

On a conference call with investors Friday, Sarandos said that his company’s resistance to releasing films in cinemas was mostly tied to “the long exclusive windows, which we don’t really think are that consumer friendly.”

The company said Friday it would “maintain Warner Bros.’ current operations and build on its strengths, including theatrical releases for films.”

On the call, Sarandos reiterated that view, saying that, “right now, you should count on everything that is planned on going to the theater through Warner Bros. will continue to go to the theaters through Warner Bros.” 

Competition from online outfits like YouTube and Netflix has forced a reckoning in Hollywood, opening the door for takeovers like the Warner Bros. deal announced Friday. Media giants including Comcast Corp., parent of NBCUniversal, are unloading cable-TV networks like MS Now and USA, and steering resources into streaming. 

In an emailed note to Warner Bros. employees on Friday, Chief Executive Officer David Zaslav said the board’s decision to sell the company “reflects the realities of an industry undergoing generational change in how stories are financed, produced, distributed, and discovered.”

The Producers Guild of America said Friday its members are “rightfully concerned about Netflix’s intended acquisition of one of our industry’s most storied and meaningful studios,” while a spokesperson for the Directors Guild of America raised concerns about future pay at Warner Bros.

“We will be meeting with Netflix to outline our concerns and better understand their vision for the future of the company,” the Directors Guild said.

In September, the DGA appointed director Christopher Nolan as its president. Nolan has previously criticized Netflix’s model of releasing films exclusively online, or simultaneously in a small number of cinemas, and has said he won’t make movies for the company.

The Screen Actors Guild said Friday that the transaction “raises many serious questions about its impact on the future of the entertainment industry, and especially the human creative talent whose livelihoods and careers depend on it.”

Oscar winner Jane Fonda spoke out on Thursday before the deal was announced. 

“Consolidation at this scale would be catastrophic for an industry built on free expression, for the creative workers who power it, and for consumers who depend on a free, independent media ecosystem to understand the world,” the star of the Netflix series Grace and Frankie wrote on the Ankler industry news website.

Netflix and Warner Bros. obviously don’t see it that way. In his statement to employees, Zaslav said “the proposed combination of Warner Bros. and Netflix reflects complementary strengths, more choice and value for consumers, a stronger entertainment industry, increased opportunity for creative talent, and long-term value creation for shareholders.”



Copyright © Miami Select.