
Why we should all pay attention to how lawyers, auditors, and accountants are using AI

Hello and welcome to Eye on AI. In today’s edition…the U.S. Senate rejects moratorium on state-level AI laws…Meta unveils its new AI organization…Microsoft says AI can out-diagnose doctors…and Anthropic shows why you shouldn’t let an AI agent run your business just yet.

AI is rapidly changing work for many of those in professional services—lawyers, accountants, auditors, compliance officers, consultants, and tax advisors. In many ways, the experiences of these professionals, and of the businesses they work for, are a harbinger of what’s likely to happen to other kinds of knowledge workers in the near future.

Because of this, it was interesting to hear the discussion yesterday at a conference on the “Future of Professionals” at Oxford University’s Saïd Business School. The conference was sponsored by Thomson Reuters, in part to coincide with the publication of a report it commissioned on trends in professionals’ use of AI.

That report, based on a global survey of 2,275 professionals in February and March, found that professional services firms seem to be finding a return on their AI investment at a higher rate than in other sectors. Slightly more than half—53%—of the respondents said their firm had found at least one AI use case that was earning a return, which is about twice what other, broader surveys have tended to find.

Not surprisingly, Thomson Reuters found that the firms most likely to see gains from the technology were those where AI usage was part of a well-defined strategy and where governance structures around AI implementation were in place. Interestingly, among firms where AI adoption was less structured, 64% of those surveyed still reported ROI from at least one use case, which may reflect how powerful and time-saving these tools can be even when used by individuals to improve their own workflows.

The biggest factors holding back AI use cases, the respondents said, included concerns about inaccuracy (with 50% of those surveyed noting this was a problem) and data security (42%). For more on how law firms are using AI, check out this feature from my Fortune colleague Jeff John Roberts.

Mind the gaps

Here are a few tidbits from the conference worth highlighting:

Mari Sako, the Oxford professor of management studies who helped organize the conference, talked about the three gaps that professionals needed to watch out for in trying to manage AI implementation: One was the responsibility gap between model developers, application builders, and end users of AI models. Who bears responsibility for the model’s accuracy and possible harms?

A second was the principles-to-practice gap. Businesses enact high-minded “Responsible AI” principles, but then the teams building or deploying AI products struggle to operationalize them. One reason this happens is the first gap: teams building AI applications may not have visibility into the data used to train a model they are deploying, or detailed information about how it may perform. This can make it hard to apply AI principles about transparency and mitigating bias, among other things.

Finally, she said, there is a goals gap. Is everyone in the business aligned about why AI is being used in the first place? Is it for human augmentation or automation? Is it operational efficiency or revenue growth? Is the goal to be more accurate than a human, or simply to come close to human performance at a lower cost? What role should environmental sustainability play in these decisions? All good questions.

Not a substitute for human judgment

Ian Freeman, a partner at KPMG UK, talked about his firm’s increasing use of AI tools to help auditors. In the past, auditors were forced to rely on sampling transactions, trying to apply more scrutiny to those that presented a bigger business risk. Now, with AI, it is possible to run a screen on every single transaction. Even so, the riskiest transactions should still get the most scrutiny, and AI can help identify those. Freeman said AI could also help more junior auditors understand the rationale for probing certain transactions. And he said AI models could help with a lot of routine financial analysis.

But he said KPMG had a policy of not deploying AI in situations that called for human judgment. Auditing is full of such cases, such as deciding on materiality thresholds, making a call about whether a client has submitted enough evidence to justify a particular accounting treatment, or deciding on appropriate warranty reserves for a new product. That sounds good, but I also wonder about the ability of AI models to act as tutors or digital mentors to junior auditors, helping them to develop their professional judgment. Surely that seems like it might be a good use case for AI too.

A senior partner from a large law firm (parts of the conference were conducted under the Chatham House Rule, so I can’t name them) noted that many corporate legal departments are embracing AI faster than law firms—something the Thomson Reuters survey also showed—and that this disparity was putting pressure on the firms. Corporate counsel are demanding that external lawyers be more transparent about their AI usage—and critically, putting pressure on legal bills on the theory that many legal tasks can now be done in far fewer billable hours.

Changing career paths and the need for AI expertise

AI may also change how professional services firms think about career paths within their business, and even who leads these firms, several lawyers at the conference said. AI expertise is increasingly important to how these firms operate. Yet it is difficult to attract the talent these businesses need if “non-qualified” technical experts know they will always be treated as second-class compared to the client-facing lawyers, and will be ineligible for promotion to the highest ranks of the firm’s management. (The term “non-qualified” simply denotes an employee who has not been admitted to the bar, but its pejorative connotations are hard to escape.)

Michael Buenger, executive vice president and chief operating officer at the National Center for State Courts in the U.S., said that if large law firms had trouble attracting and retaining AI expertise, the situation was far worse for governments. And he pointed out that judges and juries were increasingly being asked to rule on evidence, particularly video evidence, but also other kinds of documentary evidence, that might be AI manipulated, but without access to independent expertise to help them make calls about what has been altered by AI and how. If not addressed, he said, this could seriously undermine faith in the courts to deliver justice.

There were lots more insights from the conference, but that’s all we have space for today. Here’s more AI news.

Note: The essay above was written and edited by humans. The news items below are curated by the newsletter author. Short summaries of the relevant stories were created using AI. These summaries were then edited and fact-checked by the author, who also wrote the blurb headlines. This entire newsletter was then further edited by additional humans.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Want to know more about how to use AI to transform your business? Interested in what AI will mean for the fate of companies, and countries? Then join me at the Ritz-Carlton, Millenia in Singapore on July 22 and 23 for Fortune Brainstorm AI Singapore. This year’s theme is The Age of Intelligence. We will be joined by leading executives from DBS Bank, Walmart, OpenAI, Arm, Qualcomm, Standard Chartered, Temasek, and our founding partner Accenture, plus many others, along with key government ministers from Singapore and the region, top academics, investors and analysts. We will dive deep into the latest on AI agents, examine the data center buildout in Asia, explore how to create AI systems that produce business value, and talk about how to ensure AI is deployed responsibly and safely. You can apply to attend here and, as loyal Eye on AI readers, I’m able to offer complimentary tickets to the event. Just use the discount code BAI100JeremyK when you check out.

AI IN THE NEWS

Senate strips 10-year moratorium on state AI laws from Trump tax bill. The U.S. Senate voted 99-1 to remove the controversial measure from President Donald Trump’s landmark “Big Beautiful Bill.” The restrictions had been supported by Silicon Valley tech companies and venture capitalists as well as their allies in the Trump administration. Bipartisan opposition to the moratorium—led by Sen. Marsha Blackburn—centered on preserving state-level protections like Tennessee’s Elvis Act, which protects citizens from unauthorized use of their voice or likeness, including in AI-generated content. Critics warned that in the absence of federal AI regulation, the ban on state-level laws would leave U.S. citizens with no protection from AI harms at all. But tech companies argue that the increasing patchwork of state-level AI regulation is unworkable, hampering AI progress. Read more from Bloomberg News here.

Meta announced new AI leadership team and key hires from rival AI labs. Meta CEO Mark Zuckerberg sent a memo to employees formally announcing the creation of Meta Superintelligence Labs, a new organization uniting the company’s foundational AI model, product, and Fundamental AI Research (FAIR) teams under a single umbrella. Scale AI founder and CEO Alexandr Wang—who is joining Meta as part of a $14.3 billion investment into Scale—will have the title “chief AI officer” and will co-lead the new Superintelligence unit along with former GitHub CEO Nat Friedman. Zuckerberg also announced the hiring of 11 prominent AI researchers from OpenAI, Google DeepMind, and Anthropic. You can read more about Meta’s AI talent raid from Wired here.

Cloudflare begins blocking AI web-crawlers by default. Internet content delivery provider Cloudflare announced it has begun blocking AI companies’ web crawlers from accessing website content by default. Owners of the websites can choose to unblock specific crawlers—such as those Google uses to build its search index—or even opt for a “pay per crawl” option that will allow them to monetize the scraping of their content. With around 16% of global internet traffic passing through Cloudflare, the change could significantly impact AI development. (Full disclosure: Fortune is one of the initial participants in the Cloudflare crawler initiative.) Read more from CNBC here.

EYE ON AI RESEARCH

Even better than House? Microsoft has unveiled an AI system for medical diagnoses that it claims can diagnose complex cases four times more accurately than individual human doctors (under certain conditions—more on that in a sec). The “Microsoft AI Diagnostic Orchestrator” (MAI-DxO—gotta love those AI acronyms) consists of five AI “agents” that each have a distinct role to play in scouring the medical literature, hypothesizing what the patient’s condition might be, ordering tests to eliminate possibilities, and even trying to optimize these tests to derive the most useful information at the least cost. These five “AI doctors” then engage in a process Microsoft is dubbing “chain of debate,” where they collaborate and critique one another, ultimately arriving at a diagnosis.
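Microsoft hasn’t published MAI-DxO’s implementation, so details are speculative, but the “chain of debate” pattern described above can be sketched in a few lines of Python. The agent roles paraphrase the article; the orchestration loop and the stubbed `respond` method are hypothetical stand-ins for real language-model calls:

```python
# A minimal sketch of a "chain of debate" orchestrator: several role-specialized
# agents take turns contributing to a shared transcript, so each can critique
# what came before. The model call is stubbed; a real system would query an LLM.

from dataclasses import dataclass

@dataclass
class Agent:
    role: str  # e.g. "hypothesis generator", "test selector", "cost auditor"

    def respond(self, case: str, transcript: list[str]) -> str:
        # Stub: a real agent would send the case plus the running transcript
        # to a language model and return its contribution.
        return f"[{self.role}] assessment of: {case}"

def chain_of_debate(case: str, agents: list[Agent], rounds: int = 2) -> list[str]:
    """Run each agent in turn for a fixed number of rounds; every agent
    sees the full transcript so far, enabling mutual critique."""
    transcript: list[str] = []
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent.respond(case, transcript))
    return transcript

agents = [Agent("hypothesis generator"), Agent("test selector"), Agent("cost auditor")]
transcript = chain_of_debate("fever and joint pain", agents)
print(len(transcript))  # 3 agents x 2 rounds = 6 contributions
```

The key design choice is that debate is just iterated turn-taking over a shared transcript; the "critique" emerges from each agent conditioning on the others' prior output.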

In trials involving 304 real-world cases from the New England Journal of Medicine, MAI-DxO achieved an 85.5% success rate, compared to about 20% for human doctors. Microsoft tried powering the system with different AI models from OpenAI, Google, Meta, Anthropic, and DeepSeek, but found it worked best when using OpenAI’s o3 model (Microsoft is a major investor in OpenAI, sells OpenAI’s models through its cloud service, and depends on OpenAI for many of its own AI offerings). As for the poor performance of the human docs, it is important to note that in the test they were not allowed to consult either medical textbooks or colleagues.

Nonetheless, Microsoft AI CEO Mustafa Suleyman said the system could transform healthcare—although the company also said MAI-DxO is just a research project and is not yet being turned into a product. You can read more from the Financial Times here.

FORTUNE ON AI

Mark Zuckerberg overhauled Meta’s entire AI org in a risky, multi-billion dollar bet on ‘superintelligence’ —by Sharon Goldman

Longtime Bessemer investor Mary D’Onofrio, who backed Anthropic and Canva, leaves for Crosslink Capital —by Allie Garfinkle

Ford CEO says new technologies like AI are leaving many workers behind, and companies need a plan —by Jessica Mathews

Commentary: When your AI assistant writes your performance review: A glimpse into the future of work —by David Ferrucci

AI CALENDAR

July 8-11: AI for Good Global Summit, Geneva

July 13-19: International Conference on Machine Learning (ICML), Vancouver

July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.

July 26-28: World Artificial Intelligence Conference (WAIC), Shanghai. 

Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.

Oct. 6-10: World AI Week, Amsterdam

Dec. 2-7: NeurIPS, San Diego

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

BRAIN FOOD

AI tries to run a vending machine business. Hilarity ensues, Part Deux. A month ago in the research section of this newsletter, I wrote about research from Andon Labs about what happens when you try to have various AI models run a simulated vending machine business. Now, Anthropic teamed up with Andon Labs to test one of its latest models, Claude 3.7 Sonnet, to see how it did running a real-life vending machine in Anthropic’s San Francisco office. The answer, as it turns out, is not well at all. As Anthropic writes in its blog on the experiment, “If Anthropic were deciding today to expand into the in-office vending market, we would not hire [Claude 3.7 Sonnet].”

The model made a lot of mistakes—like telling customers to send payment to a Venmo account that didn’t exist (it had hallucinated the account)—and also made a lot of poor business decisions, like offering far too many discounts (including an Anthropic employee discount in a location where 99% of the customers were Anthropic employees), failing to seize a good arbitrage opportunity, and failing to increase prices in response to high demand.

The entire Anthropic blog makes for fun reading. And the experiment makes it clear that AI agents probably are nowhere near ready for a lot of complex, multi-step tasks over long time periods.




Mark Zuckerberg renamed Facebook for the metaverse. 4 years and $70B in losses later, he’s moving on

In 2021, Mark Zuckerberg recast Facebook as Meta and declared the metaverse — a digital realm where people would work, socialize, and spend much of their lives — the company’s next great frontier. He framed it as the “successor to the mobile internet” and said Meta would be “metaverse-first.”

The hype wasn’t all him. Grayscale, the investment firm specializing in crypto, called the metaverse a “trillion-dollar revenue opportunity.” Barbados even opened an embassy in Decentraland, one of the worlds in the metaverse.

Four years later, that bet has become one of the most expensive misadventures in tech. Meta’s Reality Labs division has racked up more than $70 billion in losses since 2021, according to Bloomberg, burning through cash on blocky virtual environments, glitchy avatars, expensive headsets, and a user base of approximately 38 people as of 2022.

For many people, the problem is that the value proposition is unclear; the metaverse simply doesn’t yet deliver a must-have reason to ditch their phone or laptop. Despite years of investment, VR remains burdened by serious structural limitations, and for most users there’s simply not enough compelling content beyond niche gaming.

A 30% budget cut 

Zuckerberg is now preparing to slash Reality Labs’ budget by as much as 30%, Bloomberg said. The cuts—which could translate to $4 billion to $6 billion in reduced spend—would hit everything from the Horizon Worlds virtual platform to the Quest hardware unit. Layoffs could come as early as January, though final decisions haven’t been made, according to Bloomberg. 

The move follows a strategy meeting last month at Zuckerberg’s Hawaii compound, where he reviewed Meta’s 2026 budget and asked executives to find 10% cuts across the board, the report said. Reality Labs was told to go deeper. Competition in the broader VR market simply never took off the way Meta expected, one person said. The result: a division long viewed as a money sink is finally being reined in.

Wall Street cheered. Meta’s stock jumped more than 4% Thursday on the news, adding roughly $69 billion in market value.

“Smart move, just late,” Craig Huber of Huber Research told Reuters. Investors have been complaining for years that the metaverse effort was an expensive distraction, one that drained resources without producing meaningful revenue.

Metaverse out, AI in

Meta didn’t immediately respond to Fortune’s request for comment, but it insists it isn’t killing the metaverse outright. A spokesperson told the South China Morning Post that the company is “shifting some investment from Metaverse toward AI glasses and wearables,” pointing to momentum behind its Ray-Ban smart glasses, which Zuckerberg says have tripled in sales over the past year.

But there’s no avoiding the reality: AI is the new obsession, and the new money pit.

Meta expects to spend around $72 billion on AI this year, nearly matching everything it has lost on the metaverse since 2021. That includes massive outlays for data centers, model development, and new hardware. Investors are much more excited about AI burn than metaverse burn, but even they want clarity on how much Meta will ultimately be spending — and for how long.

Across tech, companies are evaluating anything that isn’t directly tied to AI. Apple is revamping its leadership structure, partially around AI concerns. Microsoft is rethinking the “economics of AI.” Amazon, Google, and Microsoft are pouring billions into cloud infrastructure to keep up with demand. Signs point to money-losing initiatives without a clear AI angle being on the chopping block, with Meta as a dramatic example.

On the company’s most recent earnings call, executives didn’t use the word “metaverse” once.




Robert F. Kennedy Jr. turns to AI to make America healthy again

HHS billed the plan as a “first step” focused largely on making its work more efficient and coordinating AI adoption across divisions. But the 20-page document also teased some grander plans to promote AI innovation, including in the analysis of patient health data and in drug development.

“For too long, our Department has been bogged down by bureaucracy and busy-work,” Deputy HHS Secretary Jim O’Neill wrote in an introduction to the strategy. “It is time to tear down these barriers to progress and unite in our use of technology to Make America Healthy Again.”

The new strategy signals how leaders across the Trump administration have embraced AI innovation, encouraging employees across the federal workforce to use chatbots and AI assistants for their daily tasks. As generative AI technology made significant leaps under President Joe Biden’s administration, he issued an executive order to establish guardrails for its use. But when President Donald Trump came into office, he repealed that order, and his administration has sought to remove barriers to the use of AI across the federal government.

Experts said the administration’s willingness to modernize government operations presents both opportunities and risks. Some said that AI innovation within HHS demanded rigorous standards because it was dealing with sensitive data, and questioned whether those would be met under the leadership of Health Secretary Robert F. Kennedy Jr. Some in Kennedy’s own “Make America Healthy Again” movement have also voiced concerns about tech companies having access to people’s personal information.

Strategy encourages AI use across the department

HHS’s new plan calls for embracing a “try-first” culture to help staff become more productive and capable through the use of AI. Earlier this year, HHS made the popular AI chatbot ChatGPT available to every employee in the department.

The document identifies five key pillars for its AI strategy moving forward: creating a governance structure that manages risk; designing a suite of AI resources for use across the department; empowering employees to use AI tools; funding programs to set standards for the use of AI in research and development; and incorporating AI in public health and patient care.

It says HHS divisions are already working on promoting the use of AI “to deliver personalized, context-aware health guidance to patients by securely accessing and interpreting their medical records in real time.” Some in Kennedy’s Make America Healthy Again movement have expressed concerns about the use of AI tools to analyze health data and say they aren’t comfortable with the U.S. health department working with big tech companies to access people’s personal information.

HHS previously faced criticism for pushing legal boundaries in its sharing of sensitive data when it handed over Medicaid recipients’ personal health data to Immigration and Customs Enforcement officials.

Experts question how the department will ensure sensitive medical data is protected

Oren Etzioni, an artificial intelligence expert who founded a nonprofit to fight political deepfakes, said HHS’s enthusiasm for using AI in health care was worth celebrating but warned that speed shouldn’t come at the expense of safety.

“The HHS strategy lays out ambitious goals — centralized data infrastructure, rapid deployment of AI tools, and an AI-enabled workforce — but ambition brings risk when dealing with the most sensitive data Americans have: their health information,” he said.

Etzioni said the strategy’s call for “gold standard science,” risk assessments and transparency in AI development appear to be positive signs. But he said he doubted whether HHS could meet those standards under the leadership of Kennedy, who he said has often flouted rigor and scientific principles.

Darrell West, senior fellow in the Brookings Institution’s Center for Technology Innovation, noted the document promises to strengthen risk management but doesn’t include detailed information about how that will be done.

“There are a lot of unanswered questions about how sensitive medical information will be handled and the way data will be shared,” he said. “There are clear safeguards in place for individual records, but not as many protections for aggregated information being analyzed by AI tools. I would like to understand how officials plan to balance the use of medical information to improve operations with privacy protections that safeguard people’s personal information.”

Still, West said, if done carefully, “this could become a transformative example of a modernized agency that performs at a much higher level than before.”

The strategy says HHS had 271 active or planned AI implementations in the 2024 financial year, a number it projects will increase by 70% in 2025.




Construction workers are earning up to 30% more in the data center boom

Big Tech’s AI arms race is fueling a massive investment surge in data centers with construction worker labor valued at a premium. 

Despite some concerns about an AI bubble, data center hyperscalers like Google, Amazon, and Meta continue to invest heavily in AI infrastructure. In effect, construction workers’ salaries are being inflated to satisfy a seemingly insatiable demand for AI, experts tell Fortune.

In 2026 alone, upwards of $100 billion could be invested by tech companies into the data center buildout in the U.S., Raul Martynek, the CEO of DataBank, a company that contracts with tech giants to construct data centers, told Fortune.

In November, Bank of America estimated global hyperscale spending will rise 67% in 2025 and another 31% in 2026, totaling a massive $611 billion investment for the AI buildout in just two years.

Given the high demand, construction workers are experiencing a pay bump for data center projects.

Construction projects generally operate on tight margins, with clients being very cost-conscious, Fraser Patterson, CEO of Skillit, an AI-powered hiring platform for construction workers, told Fortune.

But some of the top 50 contractors by size in the country have seen their revenue double in a 12-month period based on data center construction, which is allowing them to pay their workers more, according to Patterson.

“Because of the huge demand and the nature of this construction work, which is fueling the arms race of AI… the budgets are not as tight,” he said. “I would say they’re a little more frothy.”

On Skillit, the average salary for construction projects that aren’t building data centers is $62,000, or $29.80 an hour, Patterson said. The workers that use the platform span 40 different trades, from heavy equipment operators to electricians, with an average of eight years of experience.

But when it comes to data centers, the same workers make an average salary of $81,800, or $39.33 per hour, Patterson said, an increase of just under 32% on average.
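The "just under 32%" figure follows directly from the two pairs of averages Patterson cites; a quick check confirms the salary and hourly numbers imply nearly the same premium:

```python
# Verify the data-center pay premium implied by Skillit's averages.
base_salary, dc_salary = 62_000, 81_800    # annual salary, non-data-center vs. data center
base_hourly, dc_hourly = 29.80, 39.33      # hourly rate, non-data-center vs. data center

salary_bump = (dc_salary - base_salary) / base_salary
hourly_bump = (dc_hourly - base_hourly) / base_hourly

print(f"{salary_bump:.1%}")  # 31.9%
print(f"{hourly_bump:.1%}")  # 32.0%
```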

Some construction workers are even hitting the six-figure mark after their salaries rose for data center projects, according to The Wall Street Journal. And the data center boom doesn’t show any signs it’s slowing down anytime soon.

Tech companies like Google, Amazon, and Microsoft operate 522 data centers and are developing 411 more, according to The Wall Street Journal, citing data from Synergy Research Group. 

Patterson said construction workers are being paid more to work on building data centers in part due to condensed project timelines, which require complex coordination of machinery and skilled labor.

Projects that would usually take a couple of years to finish are being completed—in some instances—as quickly as six months, he said.

It is unclear how long the data center boom might last, but Patterson said it has in part convinced a growing number of Gen Z workers and recent college grads to choose construction trades as their career path.

“AI is creating a lot of job anxiety around knowledge workers,” Patterson said. “Construction work is, by definition, very hard to automate.”

“I think you’re starting to see a change in the labor market,” he added.


