
Business

Trump ally Vernon Jones launches bid for Georgia’s top elections post

Vernon Jones, a former Democratic state representative who switched parties in support of President Donald Trump, announced Monday he’s running to become Georgia’s top election official.

Jones, who has called himself the “Black Donald Trump,” ran for Congress in 2022 with Trump’s endorsement, bolstering the president’s false claims that Georgia’s 2020 election was stolen from him.

“Trust in our elections has been shaken,” Jones said in a video announcing his campaign for secretary of state. He added, “Our elections must be secure. Our ballots must be protected.”

Current Secretary of State Brad Raffensperger, a Republican, is running for governor in the 2026 election. One of Raffensperger’s former top officials, Republican Gabriel Sterling, is also running to replace him. Both made names for themselves defending Georgia’s presidential election results in 2020 after Trump called Raffensperger and asked him to “find” votes to overturn Democratic President Joe Biden’s win in the state.

Jones dropped out of the 2022 governor’s race, then lost that year’s Republican congressional primary to U.S. Rep. Mike Collins, who is now vying for Trump’s endorsement to try to unseat Democratic U.S. Sen. Jon Ossoff.

Before his loss to Collins, Jones served several terms in Georgia’s state House, becoming a Republican in January 2021 as his final term came to an end. He became a lauded voice in Republican circles as an African American who endorsed Trump’s reelection campaign.

The secretary of state oversees state elections and corporate filings, professional licenses and other business activities.

If elected, Jones said he would push for the use of paper ballots instead of Georgia’s electronic system, limit mail-in voting and toughen voter ID laws. He would also try to “cut red tape” for small businesses.

Along with Sterling, Republican state Rep. Tim Fleming and Republican Kelvin King are also running. Jones and King both appeal to Trump supporters who question the security of elections. King’s wife, Janelle King, is a member of the State Election Board, which has seen some of its key actions overturned by the state Supreme Court.

Fleming heads a committee studying Georgia’s election system and is another vocal proponent of hand-marked paper ballots, a key demand from activists skeptical of the state’s voting machines.

Little-known candidate Adrian Consonery Jr. and former Fulton County State Court Judge Penny Brown Reynolds, who had a brief reality TV stint, are running as Democrats.

___

Kramon is a corps member for The Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.


Business

As millions of Gen Zers face unemployment, McDonald’s CEO dishes out some tough love career advice for navigating the market: ‘You’ve got to make things happen for yourself’

McDonald’s CEO Chris Kempczinski, 57, is offering some blunt advice for aspiring young professionals: whether the market is hot or cold, no one is going to give you a handout. Your career is yours to build, and the onus is on you to make it happen.

“Remember, nobody cares about your career as much as you do,” Kempczinski said in a recent Instagram video. “You’ve got to own it, you’ve got to make things happen for yourself.”

At a time when many young workers are grasping at their networks for a leg up, the risks of falling behind are real: millions of young people are now classified as NEET—not in employment, education, or training. Against that backdrop, Kempczinski warned there’s no guarantee anyone will always have your back—or ensure you reach your career goals. 

Kempczinski knows firsthand that careers rarely unfold as planned. He once dreamed of becoming a professional soccer player, not a CEO. When it became clear early on that his athletic ability fell short of that level, he took his future into his own hands, turning lessons learned from washing dishes at 16 at First Watch into a three-decade-long career across companies like Procter & Gamble and PepsiCo before he was tapped to lead McDonald’s in 2019.

Keeping an open mind could be a career changer

Instead of expecting stability, one of the biggest paths to long-term success is embracing the chaos with curiosity—and a willingness to say yes when opportunities arise, according to Kempczinski.

“To be a yes person is way better than to be a no person,” he told LinkedIn CEO Ryan Roslansky. “So as those career twists and turns happen, the more that you’re seen as someone who’s willing to say yes and to go do something, it just means you’re gonna get that next call.”

For L’Oréal Chief Human Resources Officer Stephanie Kramer, saying yes to things—even unglamorous, “junior”-looking ones, like grabbing coffee—was pivotal to her success.

“At the beginning of my career, I often credit it with the ability to say yes to the very, very little things,” Kramer recently told Fortune. “Who’s going to make the copies and going to get the coffee? Me. Who is going to be there early to set up the meeting? Me. Who is going to go watch which door consumers go in to determine what the best bay or window is for Saks Fifth Avenue that we want to have? Me.”

And the benefits of keeping an open mind early on may be more relevant now than ever, as opportunities have become slimmer for recent graduates. 

In the U.K., more than 1.2 million applications were submitted for just under 17,000 open graduate roles in 2023 and 2024, according to the Institute of Student Employers. And back stateside, lawmakers have warned that joblessness among recent graduates could hit 25% in the next two to three years as AI reshapes entry-level work.

Fortune reached out to Kempczinski for further comment.

The endless pursuit of knowledge—no matter what life throws at you

The emphasis on staying curious—even when plans change—is a theme echoed by other top executives.

Bank of America CEO Brian Moynihan has long credited asking questions and continuously learning as central to both the bank’s success and his own decade-plus tenure at the helm of a Fortune 500 company.

“You lose your curiosity, and you are on your way out of this company,” Moynihan told Fortune in 2017.

He echoed that message just last week, saying his top leadership advice remains simple: “You have to keep learning, you have to be curious, you have to read a lot,” he told The Master Investor podcast.

That mindset has also shaped the unconventional career path of Life360 CEO Lauren Antonoff. 

She once planned to become a civil rights lawyer, but an unexpected curiosity sparked by her first MacBook in college pulled her toward technology. She ultimately climbed the corporate ladder in tech—even without finishing her degree.

“I’m a big believer in finding your way in the world,” Antonoff recently told Fortune. “That’s not just about getting a job; if you don’t have a job, start something. If you don’t have a job, go volunteer someplace. In my experience, being active and working on problems that you’re interested in—one thing leads to another.”

This idea that careers aren’t built by waiting for someone to tell you what to do is exactly the message Kempczinski wanted to send to Gen Z. Staying curious and being willing to step through doors before you know exactly where they lead is often the key to long-term success.






Business

IBM, AWS veteran says 90% of your employees are stuck in first gear with AI, just asking it to ‘write their mean email in a slightly more polite way’

Employers are shelling out millions on artificial intelligence (AI) tools to boost productivity, but workers are still getting stuck using a tiny fraction of the tech’s potential, according to a presentation from a top executive in the space who advises Fortune 500 companies on strategy and tech adoption.

Allie K. Miller, the CEO of Open Machine, addressed the Fortune Brainstorm AI conference last week in San Francisco. Speaking from decades of experience at companies including IBM and Amazon Web Services (AWS), she argued that AI actually has four different, increasingly useful interaction modes. Miller, who helped launch the first multimodal AI team at IBM, said that AI can be a microtasker, companion, delegate, or a teammate, depending on the desired outcome. 

The problem, Miller said, is that most users never get beyond the first mode, using AI as a “microtasker,” basically a glorified search engine, returning results for simple queries.

Her central critique focused on the rudimentary way that most employees interact with Large Language Models (LLMs). While traditional software (“Software 1.0”) required exact inputs to get exact outputs, AI allows for reasoning and adaptation. Mistaking the former for the latter adds up to a waste of your annual ChatGPT, Gemini, or other subscription, she argued.

“Ninety percent of your employees are stuck in this mode. And so many employees think that they are an AI super user when all they are doing is asking AI to write their mean email in a slightly more polite way,” Miller said.

This roadblock is holding companies back from true productivity gains, added Miller. 

“Your annual subscriptions are made worthless because people are stuck in this mode,” she said, implicitly encouraging organizations to rethink their AI investment budgets.

Miller’s ideas are backed with data. According to a November study from software company Cornerstone OnDemand, there is an increasingly split “shadow AI economy” thriving beneath the surface of corporate America. The study found that 80% of employees are using AI at work, yet fewer than half had received proper AI training. 

To unlock the actual value of enterprise AI, Miller’s presentation outlined a shift toward three more advanced modes: “Companion,” “Delegate,” and the most critical evolution, “AI as a Teammate.”

By using AI through this interaction mode, the tech serves not as a reactive answer provider but as a collaborative partner that can sit in on meetings, field questions, and take action. Engineers at OpenAI are already doing this by incorporating the company’s software engineering agent Codex into Slack and treating it essentially as a coworker, she added.

While a “Delegate” might handle a 40-minute task like managing an inbox, the “Teammate” mode represents a fundamental shift in infrastructure. In this mode, AI is not transactional but ambient, “lifting up a system or a group and not the individual.” Miller predicted a near-future inversion of the current workflow: “We will no longer be prompting AI … AI will be prompting us because it will be in our systems and helping our team as a whole.”

But even for non-AI companies, incorporating the technology in this way makes it foundational to the tasks employees complete daily, turning it into a productivity booster rather than a stand-alone curiosity for trivia questions.

“The big difference for AI as a teammate is that AI is lifting up a system or a group and not the individual,” she added.

To bridge the gap between rewriting emails and deploying autonomous systems, Miller introduced the concept of “Minimum Viable Autonomy” (MVA), a spin on the product-design principle of the minimum viable product, the simplest version of a product that can be shipped and tested. This approach encourages leaders to stop treating AI like a chatbot requiring “perfect 18-page prompts” and start treating it as goal-oriented software.

“We are no longer giving step-by-step perfect instructions … we are going to provide goals and boundaries and rules and AI systems are going to work from the goal backwards,” Miller explained.

To operationalize this safely, Miller suggested implementing “agent protocols”—strict guidelines that group tasks into categories: “always do,” “please ask first,” and “never do.” She recommended a risk distribution portfolio for these agents: 70% on low-risk tasks, 20% on complex cross-department tasks, and 10% on strategic tasks that fundamentally change organizational structure.
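Miller’s protocol idea amounts to a permission check in front of every agent action. The category names below follow her talk, but the specific task names, the `authorize` helper, and the way the 70/20/10 portfolio is represented are illustrative assumptions, not anything she presented:

```python
# A hypothetical "agent protocol": tasks grouped into "always do",
# "please ask first", and "never do", plus the 70/20/10 risk portfolio.
# Task names here are made up for illustration.

AGENT_PROTOCOL = {
    "always_do": {"summarize_inbox", "draft_meeting_notes"},
    "ask_first": {"send_external_email", "update_crm_record"},
    "never_do": {"approve_payments", "delete_customer_data"},
}

RISK_PORTFOLIO = {
    "low_risk": 0.70,          # routine, easily reversible tasks
    "cross_department": 0.20,  # complex tasks spanning teams
    "strategic": 0.10,         # tasks that change organizational structure
}

def authorize(task: str) -> str:
    """Return what an agent may do with a task under the protocol."""
    if task in AGENT_PROTOCOL["never_do"]:
        return "blocked"
    if task in AGENT_PROTOCOL["always_do"]:
        return "autonomous"
    # Unknown tasks default to the safe path, same as "ask_first".
    return "needs_human_approval"

print(authorize("summarize_inbox"))   # autonomous
print(authorize("approve_payments"))  # blocked
```

The point of the default branch is that anything not explicitly whitelisted falls back to human approval, which is the conservative reading of “please ask first.”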

The warning for the next decade

The presentation concluded with aggressive predictions for the immediate future. Miller forecast that within months, AI will be capable of working autonomously for over eight hours uninterrupted. Furthermore, as costs drop, companies will move from single queries to running hundreds of thousands of simulations for every market launch.

However, these advancements come with a caveat for legacy-minded leadership. Miller closed with a reminder that evaluating whether AI is “good or not” is the new essential product requirement.

“AI is not just a tool,” Miller concluded, “and the organizations who continue to treat it like one are going to wonder over the next decade what happened.”

This story was originally featured on Fortune.com




Business

Google researchers figure out how to get AI agents to work better

Welcome to Eye on AI. In this edition…President Trump takes aim at state AI regulations with a new executive order…OpenAI unveils a new image generator to catch up with Google’s Nano Banana…Google DeepMind trains a more capable agent for virtual worlds…and an AI safety report card doesn’t provide much reassurance.

Hello. 2025 was supposed to be the year of AI agents. But as the year draws to a close, it is clear such prognostications from tech vendors were overly optimistic. Yes, some companies have started to use AI agents. But most are not yet doing so, especially not in company-wide deployments.

A McKinsey “State of AI” survey from last month found that a majority of businesses had yet to begin using AI agents, while 40% said they were experimenting. Less than a quarter said they had deployed AI agents at scale in at least one use case; and when the consulting firm asked people about whether they were using AI in specific functions, such as marketing and sales or human resources, the results were even worse. No more than 10% of survey respondents said they had AI agents “fully scaled” or were “in the process of scaling” in any of these areas. The one function with the most usage of scaled agents was IT (where agents are often used to automatically resolve service tickets or install software for employees), and even here only 2% reported having agents “fully scaled,” with an additional 8% saying they were “scaling.”

A big part of the problem is that designing workflows for AI agents that will enable them to produce reliable results turns out to be difficult. Even the most capable of today’s AI models sit on a strange boundary—capable of doing certain tasks in a workflow as well as humans, but unable to do others. Complex tasks that involve gathering data from multiple sources and using software tools over many steps represent a particular challenge. The longer the workflow, the more risk that an error in one of the early steps in a process will compound, resulting in a failed outcome. Plus, the most capable AI models can be expensive to use at scale, especially if the workflow involves the agent having to do a lot of planning and reasoning.
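The compounding-error point can be made concrete with an idealized model: if each step in a workflow succeeds independently with probability p, an n-step workflow succeeds with probability p^n. Real agent failures are not independent, so this is only a rough sketch of the trend, not a claim from the research:

```python
# Idealized illustration of compounding error in long agent workflows:
# if every step succeeds independently with probability p_step, an
# n-step workflow succeeds with probability p_step ** n_steps.

def workflow_success(p_step: float, n_steps: int) -> float:
    """End-to-end success rate under the independence assumption."""
    return p_step ** n_steps

# Even a 95%-reliable step erodes quickly over a long workflow.
for n in (1, 5, 10, 20):
    print(f"{n:>2} steps at 95% per step: {workflow_success(0.95, n):.1%}")
```

At 20 steps the end-to-end success rate drops below 40%, which is why longer workflows are where agents most often fail.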

Many firms have sought to solve these problems by designing “multi-agent workflows,” where different agents are spun up, with each assigned just one discrete step in the workflow, including sometimes using one agent to check the work of another agent. This can improve performance, but it too can wind up being expensive—sometimes too expensive to make the workflow worth automating.

Are two AI agents always better than one?

Now a team at Google has conducted research that aims to give businesses a good rubric for deciding when it is better to use a single agent, as opposed to building a multi-agent workflow, and what type of multi-agent workflows might be best for a particular task.

The researchers conducted 180 controlled experiments using AI models from Google, OpenAI, and Anthropic, testing them against four agentic AI benchmarks that covered a diverse set of goals: retrieving information from multiple websites; planning in a Minecraft game environment; planning and tool use to accomplish common business tasks such as answering emails, scheduling meetings, and using project management software; and a finance agent benchmark. That finance test requires agents to retrieve information from SEC filings and perform basic analytics, such as comparing actual results to management’s forecasts from the prior quarter, figuring out how revenue from a specific product segment has changed over time, or estimating how much cash a company might have free for M&A activity.

In the past year, the conventional wisdom has been that multi-agent workflows produce more reliable results. (I’ve previously written about this view, which has been backed up by the experience of some companies, such as Prosus, here in Eye on AI.) But the Google researchers found instead that whether the conventional wisdom held was highly contingent on exactly what the task was.

Single agents do better at sequential steps, worse at parallel ones

If the task was sequential, which was the case for many of the Minecraft benchmark tasks, then it turned out that so long as a single AI agent could perform the task accurately at least 45% of the time (which is a pretty low bar, in my opinion), then it was better to deploy just one agent. Using multiple agents, in any configuration, reduced overall performance by huge amounts, ranging between 39% and 70%. The reason, according to the researchers, is that if a company had a limited token budget for completing the entire task, then the demands of multiple agents trying to figure out how to use different tools would quickly overwhelm the budget.

But if a task involved steps that could be performed in parallel, as was true for many of the financial analysis tasks, then multi-agent systems conveyed big advantages. What’s more, the researchers found that exactly how the agents are configured to work with one another makes a big difference, too. For the financial-analysis tasks, a centralized multi-agent system—where a single coordinator agent directs and oversees the activity of multiple sub-agents and all communication flows to and from the coordinator—produced the best result. This system performed 80% better than a single agent. Meanwhile, an independent multi-agent system, in which there is no coordinator and each agent is simply assigned a narrow role that it completes in parallel, was only 57% better than a single agent.
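The two topologies the researchers compare can be sketched in a few lines. The stub coordinator and worker agents below stand in for real LLM calls (the subtask strings are invented); only the communication pattern, everything routed through a coordinator versus independent agents with fixed roles, reflects the setups described in the study:

```python
# Toy sketch of the two multi-agent topologies. The coordinator and
# workers are stubs; in practice each call would be an LLM invocation.

def coordinator(msg: str) -> str:
    # A real coordinator would plan with an LLM; this stub hard-codes
    # a two-subtask decomposition for a financial-analysis example.
    if msg.startswith("decompose:"):
        return "revenue:pull Q3 segment revenue;cash:estimate free cash"
    return "summary(" + msg[len("merge: "):] + ")"

workers = {
    "revenue": lambda sub: f"revenue-result[{sub.strip()}]",
    "cash": lambda sub: f"cash-result[{sub.strip()}]",
}

def centralized(task: str) -> str:
    """Centralized topology: all communication flows through the
    coordinator, which decomposes the task, routes each subtask to a
    sub-agent, and merges the results."""
    subtasks = coordinator(f"decompose: {task}").split(";")
    results = []
    for entry in subtasks:
        name, sub = entry.split(":", 1)
        results.append(workers[name](sub))
    return coordinator("merge: " + " | ".join(results))

def independent(task: str) -> list:
    """Independent topology: no coordinator; each agent simply works
    its fixed, narrow role (in parallel in practice)."""
    return [worker(task) for worker in workers.values()]

print(centralized("analyze the 10-Q"))
```

The design difference the study measures is visible here: in the centralized version the coordinator spends tokens on planning and merging, which pays off for parallelizable work; in the independent version that overhead disappears, but so does any cross-agent oversight.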

Research like this should help companies figure out the best ways to configure AI agents and enable the technology to finally begin to deliver on last year’s promises. For those selling AI agent technology, late is better than never. For the people working in the businesses using AI agents, we’ll have to see what impact these agents have on the labor market. That’s a story we’ll be watching closely as we head into 2026.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

FORTUNE ON AI

A grassroots NIMBY revolt is turning voters in Republican strongholds against the AI data-center boom —by Eva Roytburg

Accenture exec gets real on transformation: ‘The data and AI strategy is not a separate strategy, it is the business strategy’ —by Nick Lichtenberg

AWS CEO says replacing young employees with AI is ‘one of the dumbest ideas’—and bad for business: ‘At some point the whole thing explodes on itself’ —by Sasha Rogelberg

What happens to old AI chips? They’re still put to good use and don’t depreciate that fast, analyst says —by Jason Ma

AI IN THE NEWS

President Trump signs executive order to stop state-level AI regulation. President Trump signed an executive order giving the U.S. Attorney General broad power to challenge and potentially overturn state laws that regulate artificial intelligence, arguing they hinder U.S. “global AI dominance.” The order also allows federal agencies to withhold funding from states that keep such laws. Trump said he wanted to replace what he called a confusing patchwork of state rules with a single federal framework—but the order did not contain any new federal requirements for those building AI models. Tech companies welcomed the move, but the executive order drew bipartisan criticism and is expected to face legal challenges from states and consumer groups who argue that only Congress can pre-empt state laws. Read more here from the New York Times.

Oracle stock hammered on reports of data center delays, huge lease obligations. Oracle denied a Bloomberg report that it had delayed completion of data centers being built for OpenAI, saying all projects remain on track to meet contractual commitments despite labor and materials shortages. The report rattled investors already worried about Oracle’s debt-heavy push into AI infrastructure under its $300 billion OpenAI deal, and investors pummeled Oracle’s stock price. You can read more on Oracle’s denial from Reuters here. Oracle was also shaken by reports that it has $248 billion in rental payments for data centers that will commence between now and 2028. That was covered by Bloomberg here.

OpenAI launches new image generation model. The company debuted a new image generation AI model that it says offers more fine-grained editing control and generates images four times faster than its previous image creators. The move is being widely viewed as an effort by OpenAI to show that it has not lost ground to competitors, in particular Google, whose Nano Banana Pro image generation model has been the talk of the internet since it launched in late November. You can read more from Fortune’s Sharon Goldman here.

OpenAI hires Shopify executive in push to make ChatGPT an ‘operating system.’ The AI company hired Glen Coates, who had been head of “core product” at Shopify, to be its new head of app platform, working under ChatGPT product head Nick Turley. “We’re going to find out what happens if you architect an OS ground-up with a genius at its core that use its apps just like you can,” Coates wrote in a LinkedIn post announcing the move.

EYE ON AI RESEARCH

A Google DeepMind agent that can make complex plans in a virtual world. The AI lab debuted an updated version of its SIMA agent, called SIMA 2, that can navigate complex, 3D digital worlds, including those from different video games. Unlike earlier systems that only followed simple commands, SIMA 2 can understand broader goals, hold short conversations, and figure out multi-step plans on its own. In tests, it performed far better than its predecessor and came close to human players on many tasks, even in games it had never seen before. Notably, SIMA 2 can also teach itself new skills by setting its own challenges and learning from trial and error. The paper shows progress towards AI that can act, adapt, and learn in environments rather than just analyze text or images. The approach, which is based on reinforcement learning—a technique where an agent learns by trial and error to accomplish a goal—should help power more capable virtual assistants and, eventually, real-world robots. You can read the paper here.

AI CALENDAR

Jan. 6: Fortune Brainstorm Tech CES Dinner. Apply to attend here.

Jan. 19-23: World Economic Forum, Davos, Switzerland.

Feb. 10-11: AI Action Summit, New Delhi, India.

BRAIN FOOD

Is it safe? A few weeks ago, the Future of Life Institute (FLI) released its latest AI Safety Index, a report that grades leading AI labs on a range of safety criteria. A clear gap has emerged between three of the leading AI labs and pretty much everyone else. OpenAI, Google, and Anthropic all received grades in the “C” range. Anthropic and OpenAI both scored a C+, with Anthropic narrowly beating OpenAI on its total safety score. Google DeepMind’s solid C was an improvement from the C- it scored when FLI last graded the field on its safety efforts back in July. But the rest of the pack is doing a pretty poor job. xAI, Meta, and DeepSeek all received Ds, while Alibaba, which makes the popular open source AI model Qwen, got a D-. (DeepSeek’s grade was actually a step up from the F it received in the summer.)

Despite this somewhat dismal picture, FLI CEO Max Tegmark—ever an optimist—told me he actually sees some good news in the results. Not only did all the labs pull up their raw scores by at least some degree, more AI companies agreed to submit data to FLI in order to be graded. Tegmark sees this as evidence that the AI Safety Index is starting to have its intended effect of creating “a race to the top” on AI safety. But Tegmark also allows that all three of the top-marked AI labs saw their scores for “current harms” from AI—such as the negative impacts their models can have on mental health—slip since they were assessed in the summer. And when it comes to potential “existential risks” to humanity, none of the labs gets a grade above D. Somehow that doesn’t cheer me.

FORTUNE AIQ: THE YEAR IN AI—AND WHAT’S AHEAD

Businesses took big steps forward on the AI journey in 2025, from hiring Chief AI Officers to experimenting with AI agents. The lessons learned, both good and bad, combined with the technology’s latest innovations will make 2026 another decisive year. Explore all of Fortune AIQ, and read the latest playbook below:

The 3 trends that dominated companies’ AI rollouts in 2025.

2025 was the year of agentic AI. How did we do?

AI coding tools exploded in 2025. The first security exploits show what could go wrong.

The big AI New Year’s resolution for businesses in 2026: ROI.

Businesses face a confusing patchwork of AI policy and rules. Is clarity on the horizon?





Copyright © Miami Select.