
Business

Lidl launches holiday meal deal for less than $4 per person

Lidl US is offering its first-ever holiday meal deal that serves 12 people for less than $4 per person. The shopping list includes a ham portion priced at $0.77 per pound, 12 ounces of Hawaiian rolls at $1.79, and 7.25 ounces of mac and cheese for $0.56, as well as many other items such as sweet potatoes and the ingredients for a pumpkin pie. The deal runs through Dec. 24, according to the company’s press release.

In total, the meal costs $42.66 for 12 people, or about $3.56 per person. To qualify for the holiday meal at less than $4 per person, customers must be myLidl members. Other items not part of the holiday meal deal are offered at a discount, including a slightly pricier line featuring premium ham and an assortment of desserts.

“Lidl US is dedicated to making high-quality food accessible to everyone, especially during this time of year,” Lidl US CEO Joel Rampoldt said.

Holiday deals coming at the right time for consumers

The discount deal comes as American shoppers pull back on gift spending for the holidays and voters sour on grocery prices.

In November, President Donald Trump announced he was scrapping tariffs on beef, coffee, and other commodities as Democrats and Republicans alike decried a growing affordability crisis.

Despite inflation slowing since its pandemic spike, food price growth ticked up to 3.1% in September—the latest government data available—slightly outpacing headline inflation at 3% and well above the Fed’s target rate of 2%, according to the Bureau of Labor Statistics.

Still, the economy remains afloat, in large part because it is K-shaped: wealthier Americans who own financial and property assets have fared well through the period of elevated inflation, while Americans with fewer financial means have been hit by sticker shock and rising energy prices. Spending by low-income earners is trending down while assets owned by the wealthy trend up, tracing the two diverging arms of the “K.”

Mark Zandi, chief economist at Moody’s Analytics, estimated in September that the top 10% of earners account for about 49.2% of all U.S. consumer spending, the highest share in data going back to 1989. The top 20% accounted for more than 60% of total spending this year.

When announcing another 25-basis-point cut last week, Fed Chair Jerome Powell sounded uneasy about the state of the K-shaped economy.

“As to how sustainable it is, I don’t know,” Powell said.




Business

IBM, AWS veteran says 90% of your employees are stuck in first gear with AI, just asking it to ‘write their mean email in a slightly more polite way’


Employers are shelling out millions on artificial intelligence (AI) tools to boost productivity, but workers are still getting stuck using a tiny fraction of the tech’s potential, according to a presentation from a top executive in the space who advises Fortune 500 companies on strategy and tech adoption.

Allie K. Miller, the CEO of Open Machine, addressed the Fortune Brainstorm AI conference last week in San Francisco. Speaking from decades of experience at companies including IBM and Amazon Web Services (AWS), she argued that AI actually has four different, increasingly useful interaction modes. Miller, who helped launch the first multimodal AI team at IBM, said that AI can be a microtasker, companion, delegate, or a teammate, depending on the desired outcome. 

The problem, Miller said, is that most users never get beyond the first mode, using AI as a “microtasker,” basically a glorified search engine, returning results for simple queries.

Her central critique focused on the rudimentary way most employees interact with large language models (LLMs). While traditional software (“Software 1.0”) required exact inputs to get exact outputs, AI allows for reasoning and adaptation. Treating the latter like the former, she argued, amounts to wasting your annual ChatGPT, Gemini, or other subscription.

“Ninety percent of your employees are stuck in this mode. And so many employees think that they are an AI super user when all they are doing is asking AI to write their mean email in a slightly more polite way,” Miller said.

This roadblock is holding companies back from true productivity gains, added Miller. 

“Your annual subscriptions are made worthless because people are stuck in this mode,” she said, implicitly encouraging organizations to rethink their AI investment budgets.

Miller’s ideas are backed by data. According to a November study from software company Cornerstone OnDemand, a “shadow AI economy” is thriving beneath the surface of corporate America: 80% of employees are using AI at work, yet fewer than half have received proper AI training.

To unlock the actual value of enterprise AI, Miller’s presentation outlined a shift toward three more advanced modes: “Companion,” “Delegate,” and the most critical evolution, “AI as a Teammate.”

Used in this interaction mode, the tech serves not as a reactive answer provider but as a collaborative partner that can sit in on meetings, field questions, and take actions. Engineers at OpenAI are already doing this by incorporating the company’s software engineering agent, Codex, into Slack and treating it essentially as a coworker, she added.

While a “Delegate” might handle a 40-minute task like managing an inbox, the “Teammate” mode represents a fundamental shift in infrastructure. In this mode, AI is not transactional but ambient, “lifting up a system or a group and not the individual.” Miller predicted a near-future inversion of the current workflow: “We will no longer be prompting AI … AI will be prompting us because it will be in our systems and helping our team as a whole.”

But even for non-AI companies, incorporating the technology this way makes it foundational to the tasks employees complete daily, turning it into a productivity booster rather than a stand-alone curiosity for trivia questions.

“The big difference for AI as a teammate is that AI is lifting up a system or a group and not the individual,” she added.

To bridge the gap between rewriting emails and deploying autonomous systems, Miller introduced the concept of “Minimum Viable Autonomy” (MVA), a spin on the product-design principle of the minimum viable product, the simplest version of a product that can be shipped and tested. This approach encourages leaders to stop treating AI like a chatbot requiring “perfect 18-page prompts” and start treating it as goal-oriented software.

“We are no longer giving step-by-step perfect instructions … we are going to provide goals and boundaries and rules and AI systems are going to work from the goal backwards,” Miller explained.

To operationalize this safely, Miller suggested implementing “agent protocols”—strict guidelines that group tasks into three categories: “always do,” “please ask first,” and “never do.” She recommended a risk-distribution portfolio for these agents: 70% on low-risk tasks, 20% on complex cross-department tasks, and 10% on strategic tasks that fundamentally change organizational structure.
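One way to picture such an agent protocol is as a permission table the agent consults before acting. The sketch below is purely illustrative: the three tiers and the 70/20/10 weighting come from the talk, but the task names and code structure are hypothetical, not anything Miller presented.

```python
# Illustrative sketch of an "agent protocol": tasks are grouped into
# permission tiers, and the agent portfolio is weighted by risk.
# Task names below are invented for the example.

AGENT_PROTOCOL = {
    "always_do": {"summarize_meeting_notes", "draft_status_update"},
    "ask_first": {"send_external_email", "modify_shared_calendar"},
    "never_do": {"delete_records", "approve_payments"},
}

RISK_PORTFOLIO = {           # the suggested 70/20/10 weighting
    "low_risk": 0.70,        # routine, reversible tasks
    "cross_department": 0.20,  # complex, multi-team tasks
    "strategic": 0.10,       # tasks that change org structure
}

def check_permission(task: str) -> str:
    """Return the permission tier for a proposed agent task."""
    for tier, tasks in AGENT_PROTOCOL.items():
        if task in tasks:
            return tier
    return "ask_first"  # default unknown tasks to human review

print(check_permission("send_external_email"))  # ask_first
print(check_permission("delete_records"))       # never_do
```

Defaulting unknown tasks to “please ask first” is the conservative design choice here: the agent can only act autonomously on tasks someone explicitly whitelisted.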

The warning for the next decade

The presentation concluded with aggressive predictions for the immediate future. Miller forecast that within months, AI will be capable of working autonomously for more than eight hours uninterrupted. Furthermore, as costs drop, companies will move from single queries to running hundreds of thousands of simulations for every market launch.

However, these advancements come with a caveat for legacy-minded leadership. Miller closed with a reminder that evaluating whether AI is “good or not” is the new essential product requirement.

“AI is not just a tool,” Miller concluded, “and the organizations who continue to treat it like one are going to wonder over the next decade what happened.”

This story was originally featured on Fortune.com




Business

Google researchers figure out how to get AI agents to work better




Welcome to Eye on AI. In this edition…President Trump takes aim at state AI regulations with a new executive order…OpenAI unveils a new image generator to catch up with Google’s Nano Banana…Google DeepMind trains a more capable agent for virtual worlds…and an AI safety report card doesn’t provide much reassurance.

Hello. 2025 was supposed to be the year of AI agents. But as the year draws to a close, it is clear such prognostications from tech vendors were overly optimistic. Yes, some companies have started to use AI agents. But most are not yet doing so, especially not in company-wide deployments.

A McKinsey “State of AI” survey from last month found that a majority of businesses had yet to begin using AI agents, while 40% said they were experimenting. Fewer than a quarter said they had deployed AI agents at scale in at least one use case, and when the consulting firm asked whether respondents were using AI in specific functions, such as marketing and sales or human resources, the results were even worse. No more than 10% of survey respondents said they had AI agents “fully scaled” or were “in the process of scaling” in any of these areas. The function with the most usage of scaled agents was IT (where agents are often used to automatically resolve service tickets or install software for employees), and even here only 2% reported having agents “fully scaled,” with an additional 8% saying they were “scaling.”

A big part of the problem is that designing workflows for AI agents that will enable them to produce reliable results turns out to be difficult. Even the most capable of today’s AI models sit on a strange boundary—capable of doing certain tasks in a workflow as well as humans, but unable to do others. Complex tasks that involve gathering data from multiple sources and using software tools over many steps represent a particular challenge. The longer the workflow, the more risk that an error in one of the early steps in a process will compound, resulting in a failed outcome. Plus, the most capable AI models can be expensive to use at scale, especially if the workflow involves the agent having to do a lot of planning and reasoning.
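The compounding-error point above can be made concrete with a little arithmetic. If an agent completes each step of a workflow with probability p, and we simplify by assuming the steps fail independently, the whole n-step workflow succeeds with probability roughly p raised to the n:

```python
# Toy illustration of compounding error in long agent workflows
# (assumes independent per-step success, a simplification).
def workflow_success(per_step: float, steps: int) -> float:
    """Probability an n-step workflow completes with no step failing."""
    return per_step ** steps

for steps in (5, 10, 20):
    print(f"{steps} steps: {workflow_success(0.95, steps):.2f}")
# 5 steps: 0.77
# 10 steps: 0.60
# 20 steps: 0.36
```

Even an agent that gets each individual step right 95% of the time fails nearly two-thirds of the time on a 20-step workflow, which is why the article calls long workflows a particular challenge.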

Many firms have sought to solve these problems by designing “multi-agent workflows,” where different agents are spun up, with each assigned just one discrete step in the workflow, including sometimes using one agent to check the work of another agent. This can improve performance, but it too can wind up being expensive—sometimes too expensive to make the workflow worth automating.

Are two AI agents always better than one?

Now a team at Google has conducted research that aims to give businesses a good rubric for deciding when it is better to use a single agent, as opposed to building a multi-agent workflow, and what type of multi-agent workflows might be best for a particular task.

The researchers conducted 180 controlled experiments using AI models from Google, OpenAI, and Anthropic, testing them against four agentic AI benchmarks that covered a diverse set of goals: retrieving information from multiple websites; planning in a Minecraft game environment; planning and tool use for common business tasks such as answering emails, scheduling meetings, and using project management software; and a finance-agent benchmark. The finance test requires agents to retrieve information from SEC filings and perform basic analytics, such as comparing actual results with management’s forecasts from the prior quarter, tracking how revenue from a specific product segment has changed over time, or estimating how much cash a company might have free for M&A activity.

In the past year, the conventional wisdom has been that multi-agent workflows produce more reliable results. (I’ve previously written about this view, which has been backed up by the experience of some companies, such as Prosus, here in Eye on AI.) But the Google researchers found instead that whether the conventional wisdom held was highly contingent on exactly what the task was.

Single agents do better at sequential steps, worse at parallel ones

If the task was sequential, which was the case for many of the Minecraft benchmark tasks, then it turned out that so long as a single AI agent could perform the task accurately at least 45% of the time (which is a pretty low bar, in my opinion), then it was better to deploy just one agent. Using multiple agents, in any configuration, reduced overall performance by huge amounts, ranging between 39% and 70%. The reason, according to the researchers, is that if a company had a limited token budget for completing the entire task, then the demands of multiple agents trying to figure out how to use different tools would quickly overwhelm the budget.

But if a task involved steps that could be performed in parallel, as was true for many of the financial analysis tasks, then multi-agent systems conferred big advantages. What’s more, the researchers found that exactly how the agents are configured to work with one another makes a big difference, too. For the financial-analysis tasks, a centralized multi-agent system—where a single coordinator agent directs and oversees the activity of multiple sub-agents and all communication flows to and from the coordinator—produced the best result. This system performed 80% better than a single agent. Meanwhile, an independent multi-agent system, in which there is no coordinator and each agent is simply assigned a narrow role that it completes in parallel, was only 57% better than a single agent.
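The two topologies can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not the researchers' harness: the `sub_agent` function stands in for what would really be an LLM call, and all names here are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def sub_agent(subtask: str) -> str:
    """Stand-in for an LLM-backed sub-agent working one narrow subtask."""
    return f"result[{subtask}]"

def centralized_run(task: str, subtasks: list[str]) -> str:
    """Coordinator pattern: one agent decomposes the task, fans the
    subtasks out in parallel, then synthesizes every sub-result into
    a single answer it has checked end to end."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(sub_agent, subtasks))
    # The coordinator sees all sub-results and reconciles them.
    return f"{task}: " + "; ".join(results)

def independent_run(subtasks: list[str]) -> list[str]:
    """Independent pattern: agents work their narrow roles in parallel,
    with no coordinator to reconcile or cross-check the outputs."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(sub_agent, subtasks))
```

The coordinator variant adds a synthesis step, and its token cost, on top of the parallel work; per the Google results, that overhead pays off on parallelizable tasks like financial analysis but drags performance down on sequential ones.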

Research like this should help companies figure out the best ways to configure AI agents and enable the technology to finally begin to deliver on last year’s promises. For those selling AI agent technology, late is better than never. For the people working in the businesses using AI agents, we’ll have to see what impact these agents have on the labor market. That’s a story we’ll be watching closely as we head into 2026.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

FORTUNE ON AI

A grassroots NIMBY revolt is turning voters in Republican strongholds against the AI data-center boom —by Eva Roytburg

Accenture exec gets real on transformation: ‘The data and AI strategy is not a separate strategy, it is the business strategy’ —by Nick Lichtenberg

AWS CEO says replacing young employees with AI is ‘one of the dumbest ideas’—and bad for business: ‘At some point the whole thing explodes on itself’ —by Sasha Rogelberg

What happens to old AI chips? They’re still put to good use and don’t depreciate that fast, analyst says —by Jason Ma

AI IN THE NEWS

President Trump signs executive order to stop state-level AI regulation. President Trump signed an executive order giving the U.S. Attorney General broad power to challenge and potentially overturn state laws that regulate artificial intelligence, arguing they hinder U.S. “global AI dominance.” The order also allows federal agencies to withhold funding from states that keep such laws. Trump said he wanted to replace what he called a confusing patchwork of state rules with a single federal framework—but the order did not contain any new federal requirements for those building AI models. Tech companies welcomed the move, but the executive order drew bipartisan criticism and is expected to face legal challenges from states and consumer groups who argue that only Congress can pre-empt state laws. Read more here from the New York Times.

Oracle stock hammered on reports of data center delays, huge lease obligations. Oracle denied a Bloomberg report that it had delayed completion of data centers being built for OpenAI, saying all projects remain on track to meet contractual commitments despite labor and materials shortages. The report rattled investors already worried about Oracle’s debt-heavy push into AI infrastructure under its $300 billion OpenAI deal, and investors pummeled Oracle’s stock price. You can read more on Oracle’s denial from Reuters here. Oracle was also shaken by reports that it has $248 billion in rental payments for data centers that will commence between now and 2028. That was covered by Bloomberg here.

OpenAI launches new image generation model. The company debuted a new image generation AI model that it says offers more fine-grained editing control and generates images four times faster than its previous image creators. The move is being widely viewed as an effort by OpenAI to show that it has not lost ground to competitors, in particular Google, whose Nano Banana Pro image generation model has been the talk of the internet since it launched in late November. You can read more from Fortune’s Sharon Goldman here.

OpenAI hires Shopify executive in push to make ChatGPT an ‘operating system.’ The AI company hired Glen Coates, who had been head of “core product” at Shopify, to be its new head of app platform, working under ChatGPT product head Nick Turley. “We’re going to find out what happens if you architect an OS ground-up with a genius at its core that can use its apps just like you can,” Coates wrote in a LinkedIn post announcing the move.

EYE ON AI RESEARCH

A Google DeepMind agent that can make complex plans in a virtual world. The AI lab debuted an updated version of its SIMA agent, called SIMA 2, that can navigate complex, 3D digital worlds, including those from different video games. Unlike earlier systems that only followed simple commands, SIMA 2 can understand broader goals, hold short conversations, and figure out multi-step plans on its own. In tests, it performed far better than its predecessor and came close to human players on many tasks, even in games it had never seen before. Notably, SIMA 2 can also teach itself new skills by setting its own challenges and learning from trial and error. The paper shows progress towards AI that can act, adapt, and learn in environments rather than just analyze text or images. The approach, which is based on reinforcement learning—a technique where an agent learns by trial and error to accomplish a goal—should help power more capable virtual assistants and, eventually, real-world robots. You can read the paper here.

AI CALENDAR

Jan. 6: Fortune Brainstorm Tech CES Dinner. Apply to attend here.

Jan. 19-23: World Economic Forum, Davos, Switzerland.

Feb. 10-11: AI Action Summit, New Delhi, India.

BRAIN FOOD

Is it safe? A few weeks ago, the Future of Life Institute (FLI) released its latest AI Safety Index, a report that grades leading AI labs on a range of safety criteria. A clear gap has emerged between three of the leading AI labs and pretty much everyone else. OpenAI, Google, and Anthropic all received grades in the “C” range. Anthropic and OpenAI both scored a C+, with Anthropic narrowly beating OpenAI on its total safety score. Google DeepMind’s solid C was an improvement from the C- it scored when FLI last graded the field on its safety efforts back in July. But the rest of the pack is doing a pretty poor job. xAI, Meta, and DeepSeek all received Ds, while Alibaba, which makes the popular open source AI model Qwen, got a D-. (DeepSeek’s grade was actually a step up from the F it received in the summer.)

Despite this somewhat dismal picture, FLI CEO Max Tegmark—ever an optimist—told me he actually sees some good news in the results. Not only did all the labs pull up their raw scores at least somewhat, but more AI companies also agreed to submit data to FLI in order to be graded. Tegmark sees this as evidence that the AI Safety Index is starting to have its intended effect of creating “a race to the top” on AI safety. But Tegmark also allows that all three of the top-marked AI labs saw their scores for “current harms” from AI—such as the negative impacts their models can have on mental health—slip since they were assessed in the summer. And when it comes to potential “existential risks” to humanity, none of the labs gets a grade above D. Somehow that doesn’t cheer me.

FORTUNE AIQ: THE YEAR IN AI—AND WHAT’S AHEAD

Businesses took big steps forward on the AI journey in 2025, from hiring Chief AI Officers to experimenting with AI agents. The lessons learned, both good and bad, combined with the technology’s latest innovations will make 2026 another decisive year. Explore all of Fortune AIQ, and read the latest playbook below:

The 3 trends that dominated companies’ AI rollouts in 2025.

2025 was the year of agentic AI. How did we do?

AI coding tools exploded in 2025. The first security exploits show what could go wrong.

The big AI New Year’s resolution for businesses in 2026: ROI.

Businesses face a confusing patchwork of AI policy and rules. Is clarity on the horizon?




Business

Ford CEO Jim Farley said Trump would halve the EV market by ending subsidies. Now he’s writing down $19.5 billion amid a ‘customer-driven’ shift




Several months ago, Ford CEO Jim Farley said ending the nearly two-decade-long EV tax credit would halve America’s electric vehicle market. Now, his company is facing its own reality check.

Ford said this week it would cease production of the original electric F-150 Lightning, which was once touted as a breakthrough for the industry, and shift some of its existing workforce to producing a hybrid version of the pickup with a gas-powered generator, called an EREV, or extended-range electric vehicle. The automaker said it would take a $19.5 billion charge in 2026 as a result of this “customer-driven shift.”

With that in mind, it’s worth reviewing what Farley said at the Ford Pro Accelerate summit in Detroit in September. EVs will remain a “vibrant industry” going forward, he said, but also “smaller, way smaller than we thought.” The end of the $7,500 consumer incentive would be a game-changer, Farley added, before predicting that EV sales in the U.S. could plummet to 5% from a previous 10%–12%.

Speaking to CNBC on Monday about Ford’s electric pivot, Farley claimed the EV market had, in fact, already shrunk to around 5% of the U.S. vehicle market. The automaker’s EV lineup was simply out of sync with consumer demand, he said.

“More importantly, the very high end EVs, the 50, 60, 70, $80,000 vehicles, they just weren’t selling,” Farley told CNBC.

Farley had established Ford’s Model E division in 2022 to innovate on electric vehicles and operate as a startup within the more-than-100-year-old automaker. At the same time, Farley told CNBC that he knew when he established Model E that it would be “brutal business-wise.” That may have been an understatement. In under three years, the Model E division has lost $13 billion, more than double Ford’s net income for 2024.

As part of its pivot, Farley said the company is listening to consumers.

“We’re following customers to where the market is, not where people thought it was going to be, but to where it is today,” he said. 

This means prioritizing hybrid and semi-gas-powered EREVs over pure-play EVs. These categories are what customers are still interested in, Farley said. 

To be sure, the company says its Model E division will still become profitable, but not until 2029, three years after the 2026 date it had previously targeted. By 2030, the company is also predicting that hybrids, semi-gas-powered EREVs, and pure-play EVs will make up half of Ford’s global sales, a stark increase from about 17% now. And most of that, Farley told CNBC, will be “hybrid and EREV.”

This story was originally featured on Fortune.com





Copyright © Miami Select.