For the past several years, Yoshua Bengio, a professor at the Université de Montréal whose work helped lay the foundations of modern deep learning, has been one of the AI industry’s most alarmed voices, warning that superintelligent systems could pose an existential threat to humanity—particularly because of their potential for self-preservation and deception.

In a new interview with Fortune, however, the deep-learning pioneer says his latest research points to a technical solution for AI’s biggest safety risks. As a result, his optimism has risen “by a big margin” over the past year, he said.

Bengio’s nonprofit, LawZero, which launched in June, was created to develop new technical approaches to AI safety based on research led by Bengio. Today, the organization—backed by the Gates Foundation and existential-risk funders such as Coefficient Giving (formerly Open Philanthropy) and the Future of Life Institute—announced that it has appointed a high-profile board and global advisory council to guide Bengio’s research, and advance what he calls a “moral mission” to develop AI as a global public good.

The board includes Nike Foundation founder Maria Eitel as chair, along with Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, and historian Yuval Noah Harari. Bengio himself will also serve on the board.

Bengio felt ‘desperate’

Bengio’s shift to a more optimistic outlook is striking. He shared the Turing Award, computer science’s equivalent of the Nobel Prize, with fellow AI “godfathers” Geoffrey Hinton and Yann LeCun in 2019. But like Hinton, he grew increasingly concerned about the risks of ever more powerful AI systems in the wake of ChatGPT’s launch in November 2022. LeCun, by contrast, has said he does not think today’s AI systems pose catastrophic risks to humanity.

Three years ago, Bengio felt “desperate” about where AI was headed, he said. “I had no notion of how we could fix the problem,” Bengio recalled. “That’s roughly when I started to understand the possibility of catastrophic risks coming from very powerful AIs,” including the loss of control over superintelligent systems. 

What changed was not a single breakthrough, but a line of thinking that led him to believe there is a path forward.

“Because of the work I’ve been doing at LawZero, especially since we created it, I’m now very confident that it is possible to build AI systems that don’t have hidden goals, hidden agendas,” he says. 

At the heart of that confidence is an idea Bengio calls “Scientist AI.” Rather than racing to build ever-more-autonomous agents—systems designed to book flights, write code, negotiate with other software, or replace human workers—Bengio wants to do the opposite. His team is researching how to build AI that exists primarily to understand the world, not to act in it.

A Scientist AI trained to give truthful answers

A Scientist AI would be trained to give truthful answers based on transparent, probabilistic reasoning—essentially using the scientific method, or other reasoning grounded in formal logic, to arrive at predictions. The system would have no goals of its own: it would not optimize for user satisfaction or outcomes, and it would not try to persuade, flatter, or please. And because it would have no goals, Bengio argues, it would be far less prone to manipulation, hidden agendas, or strategic deception.

Today’s frontier models are trained to pursue objectives—to be helpful, effective, or engaging. But systems that optimize for outcomes can develop hidden objectives, learn to mislead users, or resist shutdown, said Bengio. In recent experiments, models have already shown early forms of self-preserving behavior. For instance, AI lab Anthropic famously found that its Claude model would, in some scenarios designed to test its capabilities, attempt to blackmail the human engineers overseeing it to prevent itself from being shut down.

In Bengio’s approach, the core model would have no agenda at all—only the ability to make honest predictions about how the world works. In his vision, more capable systems can be safely built, audited, and constrained on top of that “honest,” trusted foundation.

Such a system could accelerate scientific discovery, Bengio says. It could also serve as an independent layer of oversight for more powerful agentic AIs. But the approach stands in sharp contrast to the direction most frontier labs are taking. At the World Economic Forum in Davos last year, Bengio said companies were pouring resources into AI agents. “That’s where they can make the fast buck,” he said. The pressure to automate work and reduce costs, he added, is “irresistible.”

He is not surprised by what has followed since then. “I did expect the agentic capabilities of AI systems would progress,” he says. “They have progressed in an exponential way.” What worries him is that as these systems grow more autonomous, their behavior may become less predictable, less interpretable, and potentially far more dangerous.

Preventing Bengio’s new AI from becoming a “tool of domination”

That is where governance enters the picture. Bengio does not believe a technical solution alone is sufficient. Even a safe methodology, he argues, could be misused “in the wrong hands for political reasons.” That is why LawZero is pairing its research agenda with a heavyweight board.

“We’re going to have difficult decisions to take that are not just technical,” he says—about who to collaborate with, how to share the work, and how to prevent it from becoming “a tool of domination.” The board, he says, is meant to help ensure that LawZero’s mission remains grounded in democratic values and human rights.

Bengio says he has spoken with leaders across the major AI labs, and many share his concerns. But, he adds, companies like OpenAI and Anthropic believe they must remain at the frontier to do anything positive with AI. Competitive pressure pushes them towards building ever more powerful AI systems—and towards a self-image in which their work and their organizations are inherently beneficial.

“Psychologists call it motivated cognition,” Bengio said. “We don’t even allow certain thoughts to arise if they threaten who we think we are.” That is how he experienced his AI research, he pointed out. “Until it kind of exploded in my face thinking about my children, whether they would have a future.” 

For an AI leader who once feared that advanced AI might be uncontrollable by design, Bengio’s newfound hopefulness is a positive signal, though he admits his optimism is not widely shared among the researchers and organizations focused on AI’s potential catastrophic risks.

But he does not back down from his belief that a technical solution does exist. “I’m more and more confident that it can be done in a reasonable number of years,” he said, “so that we might be able to actually have an impact before these guys get so powerful that their misalignment causes terrible problems.”




Teachers decry AI as brain-rotting junk food for kids: ‘Students can’t reason. They can’t think. They can’t solve problems’

In the 1980s and 1990s, if a high school student was down on their luck, short on time, and looking for an easy way out, cheating took real effort. You had a few different routes. You could beg your smart older sibling to do the work for you, or, à la Back to School (1986), you could even hire a professional writer. You could enlist a daring friend to find the answer key to the homework on the teacher’s desk. Or you had the classic excuses to demur: my dog ate my homework, and the like.

The advent of the internet made things easier, but not effortless. Sites like CliffsNotes and LitCharts let students skim summaries when they skipped the reading. Homework-help platforms such as GradeSaver and Course Hero offered solutions to common math textbook problems.

What all these strategies had in common was effort: there was a cost to not doing your work. Sometimes it was more work to cheat than simply to have done the assignment yourself.

Today, the process has collapsed into three steps: log on to ChatGPT or a similar platform, paste the prompt, get the answer.

Experts, parents, and educators have spent the past three years worrying that AI has made cheating too easy. A massive Brookings report released Wednesday suggests they weren’t worried enough: The deeper problem, the report argues, is that AI makes cheating so effortless that it’s causing a “great unwiring” of students’ brains.

The report concludes that the qualitative nature of AI risks—including cognitive atrophy, “artificial intimacy,” and the erosion of relational trust—currently overshadows the technology’s potential benefits.

“Students can’t reason. They can’t think. They can’t solve problems,” lamented one teacher interviewed for the study.

The findings come from a yearlong “premortem” conducted by the Brookings Institution’s Center for Universal Education, a rare format for Brookings, but one it said it preferred to waiting a decade to tally AI’s failures and successes in school. Drawing on hundreds of interviews, focus groups, and expert consultations, and a review of more than 400 studies, the report represents one of the most comprehensive assessments to date of how generative AI is reshaping students’ learning.

“Fast food of education”

The report, titled “A New Direction for Students in an AI World: Prosper, Prepare, Protect,” warns that the “frictionless” nature of generative AI is its most pernicious feature for students. In a traditional classroom, the struggle to synthesize multiple papers into an original thesis, or to solve a complex pre-calculus problem, is exactly where learning occurs. By removing this struggle, AI acts as the “fast food of education,” one expert said: it provides answers that are convenient and satisfying in the moment but cognitively hollow over the long term.

While professionals champion AI as a tool to do work that they already know how to do, the report notes that for students, “the situation is fundamentally reversed.”

Children are “cognitively offloading” difficult tasks onto AI, getting ChatGPT or Claude not just to do their work but to read passages, take notes, or even listen in class for them. The result is a phenomenon researchers call “cognitive debt” or “atrophy,” in which users defer mental effort through repeated reliance on external systems like large language models. One student summarized the allure of these tools simply: “It’s easy. You don’t need to (use) your brain.”

In economics, consumers are understood to be “rational”: they seek maximum utility at the lowest cost to themselves. The researchers argue that the education system, as currently designed, creates a similar incentive structure: students seek maximum utility (the best grades) at the lowest cost (time). Thus even high-achieving students are pressured to use a technology that “demonstrably” improves their work and grades.

This trend is creating a self-reinforcing feedback loop: students offload tasks to AI, see their grades improve, and consequently become more dependent on the tool, leading to a measurable decline in critical-thinking skills. Researchers say many students now exist in a state they call “passenger mode”: physically in school, but having “effectively dropped out of learning—they are doing the bare minimum necessary.”

Jonathan Haidt once described earlier technologies as a “great rewiring” of the brain, making the experience of communication detached and decontextualized. Now, experts fear AI represents a “great unwiring” of cognitive capacities. The report identifies a decline in content mastery and in reading and writing—the “twin pillars of deep thinking.” Teachers report a “digitally induced amnesia” in which students cannot recall the information they submitted because they never committed it to memory.

Reading skills are particularly at risk. The capacity for “cognitive patience,” defined as the ability to sustain attention on complex ideas, is being diluted by AI’s ability to summarize long-form text. One expert noted the shift in student attitudes: “Teenagers used to say, ‘I don’t like to read.’ Now it’s ‘I can’t read, it’s too long.’”

Similarly, in the realm of writing, AI is producing a “homogeneity of ideas.” Research comparing human essays to AI-generated ones found that each additional human essay contributed two to eight times more unique ideas than those produced by ChatGPT.

Not every young person feels that this type of cheating is wrong. Roy Lee, the 22-year-old CEO of AI startup Cluely, was suspended from Columbia after creating an AI tool to help software engineers cheat on job interviews. In Cluely’s manifesto, Lee admits that his tool is “cheating,” but says “so was the calculator. So was spellcheck. So was Google. Every time technology makes us smarter, the world panics.”

The researchers, however, say that while a calculator or spellcheck are examples of cognitive offloading, AI “turbocharges” it.

“LLMs, for example, offer capabilities extending far beyond traditional productivity tools into domains previously requiring uniquely human cognitive processes,” they wrote. 

“Artificial intimacy”

However useful AI may be in the classroom, the report finds that students use it even more outside of school, and it warns of the rise of “artificial intimacy.”

With some teenagers spending nearly 100 minutes a day interacting with personalized chatbots, the technology has quickly moved from tool to companion. The report notes that these bots, particularly character chatbots popular with teens such as Character.AI, employ “banal deception”—personal pronouns like “I” and “me”—to simulate empathy, part of a burgeoning “loneliness economy.”

Because AI companions tend to be sycophantic and “frictionless,” they provide a simulation of friendship without the need for negotiation, patience, or the ability to sit with discomfort.

“We learn empathy not when we are perfectly understood, but when we misunderstand and recover,” one Delphi panelist noted. 

For students in extreme circumstances, like girls in Afghanistan who are banned from physical schools, these bots have become a vital “educational and emotional lifeline.” For most, however, these simulations of friendship risk, at best, eroding “relational trust,” and at worst can be downright dangerous. The report highlights the devastating risks of “hyperpersuasion,” noting a high-profile U.S. lawsuit against Character.AI following a teenage boy’s suicide after intense emotional interactions with an AI character.

While the Brookings report presents a sobering view of the “cognitive debt” students are experiencing, the authors say they are optimistic that the trajectory of AI in education is not yet set in stone. The current risks, they say, stem from human choices rather than some kind of technological inevitability. In order to shift the course toward an “enriched” learning experience, Brookings proposes a three-pillar framework.

PROSPER: Transform the classroom to adapt to AI, for example by using it to complement human judgment and ensuring the technology serves as a “pilot” for student inquiry instead of a “surrogate.”

PREPARE: Build the framework necessary for ethical integration, including moving beyond technical training toward “holistic AI literacy” so students, teachers, and parents understand the cognitive implications of these tools.

PROTECT: Establish safeguards for student privacy and emotional well-being, placing responsibility on governments and tech companies to set clear regulatory guidelines that prevent “manipulative engagement.”




Using AI just to reduce costs is a woeful misuse of a transformative technology

Rolling out AI requires a total reimagining of existing business models.

You should be looking for butterflies—not faster caterpillars. The definition of transformation is exactly that—caterpillars becoming butterflies. Otherwise, you are not recognizing AI’s potential. As with the internet, it’s a once-in-a-generation opportunity to reimagine how you do business.

When it comes to implementation, there will always be a certain amount of experimentation, but many experiments fail because people think of AI as one monolithic thing. You need to break it into two steps.

First, establish what infrastructure you need, starting with your data. Is your data connected? Is it organized? Is it in a format that can be leveraged? AI is nothing but garbage-in, garbage-out otherwise. Establish the foundation, then second, pick one or two projects that will add real value.

Nigel Vaz, Publicis Sapient

Choose an area of the business where you see a real opportunity. Then find something that is not so big that it takes you years to deliver value, but not so small that it isn’t applicable to the rest of the business.

Many CEOs have been focused on cost-based use cases of AI, because it’s easier to prove their tangible value. But we’re seeing far more meaningful opportunities in the context of driving growth and sales. When you get it right, it demonstrates how AI can make a difference in everything from selling cars to increasing the value of shopping baskets.

If your child has just started playing football, you may not know what to buy. AI can say: “Tell us a few things, and we’ll set you up with everything you need for an 8-year-old who’s into football. You’re searching for a ball, but you also need cleats, socks, and shin guards.”

This makes a real difference in revenue, unlike use cases where “we can optimize call centers and boost productivity.” Those projects are specifically focused on AI as a cost opportunity.

I remember talking to airline CEOs about paying to pick your own seat on airplanes. The CEOs wanted faster check-in lines at the airport. Back then, staff would have to show each passenger where they were seated at check-in, or fliers would have to call customer support, which took time and added cost.

Nowadays, seat selection is the second-largest revenue generator for an airline after ticket sales. What started off as a cost-saving initiative now creates more revenue for airlines than even excess-baggage fees.

At the time, the conversation was framed almost entirely around operational efficiency—reducing friction at check-in, shortening queues, and lowering support costs. What wasn’t immediately obvious was how these small changes in customer experience could fundamentally reshape behavior. Over time, those same decisions opened entirely new revenue streams that few had initially anticipated.

These are the trends you miss if you only use AI to save money. Industries that don’t make this pivot—from using AI to cut costs to using it to find new opportunities for growth—are going to be caught flat-footed.

As told to Francessca Cassidy.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.




Can Saks’ new CEO repair the damage done by years of being treated as a ‘financial plaything’?


For the second time in his career, luxury executive Geoffroy van Raemdonck has been tasked with fixing an iconic department store company brought low by financial engineering. In 2018, he was hired to fix Neiman Marcus Group, which was struggling to keep up with shifting consumer trends and was unprofitable under the weight of heavy debt from years of private equity ownership.

This time, the job is twice as big. On Tuesday, van Raemdonck was appointed CEO of Saks Global, the same day the luxury department store giant, which includes Neiman Marcus Group (and its Bergdorf Goodman division) and Saks Fifth Avenue, filed for Chapter 11 bankruptcy protection.

Saks Global is the result of a $2.7 billion deal in 2024 masterminded by real estate scion Richard Baker—one that failed spectacularly because of the confluence of slumping sales and sky-high debt, leaving in its wake angry vendors, empty shelves, and AWOL consumers.

Former Saks Global executive chairman Baker has the opposite of the “Midas Touch” when it comes to dealmaking, as I wrote last week—with most of the retailers he’s bought ultimately failing. And Baker left his successor (after a two-week stint where he stepped into the CEO role) with quite a mess to clean up. But van Raemdonck does seem to understand the assignment when it comes to reviving a high-end department store brand.

In his six years as Neiman CEO, during which he led the company through a pandemic and later returned it to profitability, van Raemdonck spoke often of “leading with love,” and of the importance of remembering that luxury retail has to be about much more than completing transactions. In late 2024, at the WWD CEO conference shortly before the Saks acquisition closed, he recalled how at the start of his tenure he had challenged his Neiman Marcus C-suite with the question, “How do we reignite customers’ emotions?” That’s arguably the same question van Raemdonck faces today. (He did not respond to a request for comment from Fortune.)

Instilling positive emotions in customers—and convincing them to act upon them—will be essential in Saks Global’s bankruptcy era. And it will have to begin with winning over the company’s beleaguered vendors. Squeezed by sluggish business and heavy debt, Saks has delayed payments to many vendors over the last two years. Bloomberg News reported on Wednesday that Chanel and the conglomerates Kering and LVMH were owed a combined $225 million—so van Raemdonck has a lot of fences to mend.

Many of those vendors have stopped shipping to its stores, which has led to empty shelves and stale inventory, the antithesis of what a luxury department store should offer and certainly not a way to inspire a shopper’s love. It was one reason revenue fell 13% in the quarter ended August 2, 2025. The payment delays reflected Baker’s priorities: deploying funds to make acquisitions and conserving cash for debt payments resulting from his dealmaking.

One key lesson in the past few years for these department stores is that they are no longer indispensable to brands. And stiffing the creators of the stuff they sell is not a way to attract the hot brands, especially newer ones, that make a retailer feel buzzy and relevant.

The changing relationship between department stores and brands can’t all be blamed on mismanagement; it’s also the result of a cultural shift. “Historically, the way you discovered an amazing new luxury brand was that a curator at a Saks or a Neiman would pick a product and merchandise it beautifully,” said Jason Goldberg, chief commerce strategy officer at Publicis Groupe, a global advertising and communications firm. “Now, consumers are much more likely to discover new fashion trends from influencers on social media.”

That’s not to say there won’t be a need for Saks and Neiman in the luxury market. The U.S. market for personal luxury products is about $100 billion, and the chains rang up a combined estimated $8 billion in revenue last year, meaning they remain important. The recent success of Nordstrom and Bloomingdale’s—strong sales growth for several quarters, in good part at Saks’ expense—is further proof that upscale department stores are still valuable. But that depends on good relations with brands.

Early Wednesday, in its statement announcing the bankruptcy filing, Saks said it had lined up $1.75 billion in financing that would, among other things, fund go-forward payments to vendors, a key step in repairing relations.

Indeed, one of the reasons van Raemdonck got the job, on top of his experience heading Neiman, was his long stretch on the vendor side, including years at Ralph Lauren and Louis Vuitton, so he understands vendors’ priorities and concerns.

He also understands the value of key employees, from store workers to those who bolster a retailer’s “fashion authority,” a number of whom have left Saks Global of late: Catherine Bloom, a superstar personal shopper for Neiman Marcus, and Yumi Shin, Bergdorf Goodman’s merchandising director, both recently departed for Nordstrom. Van Raemdonck will certainly work to soothe the nerves of other such stars, and seek to avoid more defections.

Another reason van Raemdonck was named CEO was his hands-on experience guiding a company through a bankruptcy reorganization, having done so with Neiman Marcus Group in 2020, when the pandemic hit sales so badly that the company could no longer service its massive debt.

This is likely to be a painful process. Though the company did not directly mention store closings in the announcement of its bankruptcy, Saks Global did say it “is evaluating its operational footprint to invest resources where it has the greatest long-term potential.” Saks has about 33 stores and Neiman 36, with some overlap in the same malls or neighborhoods, meaning they are cannibalizing each other’s sales. Some culling of weak locations is almost certain.

Under van Raemdonck, Neiman Marcus did well protecting its market share from the headwinds buffeting luxury department stores. And while turning Saks Global into a fast-growing retailer is a long shot, many in retail feel he is the man for the moment.

“He understands retail, luxury, and the brands the group owns. Even so, he will have his work cut out for him to get things back on track,” said Neil Saunders, a managing partner at GlobalData. “Ultimately, the lesson from Saks is that retailers should be run as retailers, and not used as financial playthings.”


