The sudden shift in the industry’s landscape got me thinking about a classic tool for understanding any industry. Harvard Business School professor Michael Porter created the “Five Forces” framework in 1979, and it still stands as a brilliant way to grasp a given industry’s big picture. Note that it’s a way to characterize an industry, not an individual company.

So, for example, the first force, “threat of new entrants,” means, “Is this an industry in which new entrant companies could easily compete, or not?” If the answer is, “This force is weak,” it would mean there is little threat of new entrants coming into that industry, which would be good news for incumbents. We asked expert analysts Charlie Dai of Forrester and Arun Chandrasekaran of Gartner and our own Fortune AI experts for context about how each force might affect Google Gemini and OpenAI.

Force One: Threat of new entrants. Chandrasekaran sees the industry becoming “a three-horse race” with OpenAI, Google, and Anthropic; he can’t see how a new company could “be on a par with these three.” Dai sees formidable barriers to new entrants in “compute cost, talent scarcity, and regulatory complexity.” Conclusion: This force is weak, which bodes well for the incumbents. Google may be better positioned than OpenAI given how much more of the AI value chain it controls.

Force Two: Bargaining power of suppliers. Dai says suppliers of chips hold strong power because only a few companies, especially Nvidia, AMD, and Huawei, design the best chips and can’t supply them fast enough. The picture here is similar to the vast amounts of AI cloud capacity that AI providers must buy or build. Chandrasekaran notes that the major LLM companies train their models by crawling the internet and scooping up data—but some data providers are now demanding money. This force is strong. Google may be better protected by its control of its own chips, its own cloud, and nearly all its needed infrastructure.

Force Three: Bargaining power of buyers. It’s tempting to think that buyers aren’t super-strong in bargaining because over time they’ll get effectively locked into a provider’s system. “If [OpenAI’s] ChatGPT is integrated into your workflow and processes, extricating out of an application like ChatGPT is not really easy,” Chandrasekaran says. But buyers are increasingly using multiple models and finding they can be compatible. This force is moderate to strong. Google has stronger structural lock-in, but OpenAI has more brand affinity from consumers.

Force Four: Threat of substitutes. “Open-source alternatives like DeepSeek and Qwen will play a key role” in the industry, Dai says. In addition, Chandrasekaran says, “we are starting to see smaller language models challenging the larger models in very specific domains.”  This force is medium and getting stronger. Google and OpenAI are about equally able to confront it.

Force Five: Rivalry among existing firms. Our experts agree: This force is strong and getting much, much stronger. OpenAI and Google are in a virtual tie, though OpenAI has fewer defensive moats and must innovate quickly to retain its lead.

Bottom line: In what may be the most profoundly important industry yet seen, OpenAI has a fragile lead but faces an imposing foe that may benefit more as the Five Forces act on the sector. In five years, will one be the clear winner? Or will a Chinese competitor show that we grievously underestimated the “threat of new entrants”? Going through your industry’s Five Forces framework can be a demanding exercise, but it’s worthwhile for leaders in any industry. When done right, it will spark debates, insights—and possibly even a code red.—Geoff Colvin

Contact CEO Daily via Diane Brady at diane.brady@fortune.com

Top news

No refunds if Supreme Court strikes down tariffs, Hassett says

In an interview on CBS News’ Face the Nation on Sunday, National Economic Council Director Kevin Hassett predicted that the justices will rule in the White House’s favor, in part because refunding the companies that paid the tariffs would be “very complicated.” Lower courts have ruled that the so-called reciprocal tariffs invoked under the International Emergency Economic Powers Act are illegal, though the Supreme Court will have the final say. “And I also think that if they didn’t find with us, that it’s going to be pretty unlikely that they’re going to call for widespread refunds, because it would be an administrative problem to get those refunds out to there,” Hassett said.

Possible successor to GM’s Barra is old foe of Musk

Sterling Anderson, 42, joined GM in June as its global product chief. He previously worked at Tesla but fell out with Elon Musk and was sued by Musk after he left, the WSJ reports. The robotics expert is a possible successor to CEO Mary Barra, 64, the paper says.

Justice Department published, deleted, and then published again some of the Epstein files

The Justice Department released a portion of the Epstein files on Friday and into Saturday, and some came with heavy redactions. At least 16 files then vanished from the DOJ’s Epstein document webpage a day after they were posted on Friday. Among them was file 468, an image showing a drawer filled with photographs, including one with President Trump alongside Jeffrey Epstein, Melania Trump, and Epstein associate Ghislaine Maxwell. Another photograph in the drawer showed Trump surrounded by women. Deputy Attorney General Todd Blanche told NBC’s Meet the Press on Sunday there were concerns that the photos inadvertently revealed the faces of victims, so the photos were retracted before being republished. “It has nothing to do with President Donald Trump,” he said.

Contempt charges drafted for Bondi

On Sunday, Rep. Thomas Massie (R-Ky.) and Rep. Ro Khanna (D-Calif.) told Face the Nation that they are drafting “inherent contempt” charges against Attorney General Pam Bondi for every day that the entirety of the files aren’t released.

Apollo preparing for ‘when something bad happens’

Apollo Global—$908 billion in assets under management—is moving into cash, cutting its leverage, and derisking from certain parts of the debt market in preparation for “when something bad happens,” according to CEO Marc Rowan. He wants the company to be prepared to invest when the market goes through any upcoming turmoil, he said in private meetings at a Goldman Sachs conference, according to the FT.

Economists say any Fed Chair will clash with Trump

National Economic Council Director Kevin Hassett is the favorite on prediction market Kalshi to replace Jerome Powell as Fed Chair, but economists last week argued that any Fed chair will have trouble lowering rates as much as President Trump would like. Meanwhile, Hassett said over the weekend that he believes the Supreme Court will find Trump’s tariffs legal, but tariff refund checks probably won’t come even if they don’t.

AI not destroying finance jobs—yet

Experts told Fortune that AI isn’t destroying finance jobs—at least, for now. Although AI in theory can perform hours of junior-level analyst tasks in just seconds, experts agree that AI-related layoffs have been insignificant so far. “If there’s a large company that might say, ‘Well, we’re not planning to hire as much because of AI,’ or maybe ‘We’re letting people go because of AI,’ I think there’s a little bit of smoke and mirrors there,” Robert Seamans, director of New York University Stern’s Center for the Future of Management, tells Fortune.

SpaceX explosion endangered three jets

The January 16 explosion of a SpaceX rocket over the Caribbean rained debris over a vast area of airspace for 50 minutes, the WSJ reports, endangering three passenger jets carrying 450 people.

The markets

S&P 500 futures are up 0.33% this morning. The last session closed up 0.88%. STOXX Europe 600 was down 0.17% in early trading. The U.K.’s FTSE 100 was down 0.39% in early trading. Japan’s Nikkei 225 was up 1.81%. China’s CSI 300 was up 0.95%. The South Korea KOSPI was up 2.12%. India’s NIFTY 50 was up 0.79%. Bitcoin was at $89K.

Around the watercooler

Shield AI took its drones from the ‘Batcave’ to the battlefield. Now the $5.6 billion defense-tech startup’s new CEO says it’s at an inflection point by Jessica Mathews

Sam Altman says he’s ‘0%’ excited to be CEO of a public company as OpenAI drops hints about an IPO: ‘In some ways I think it’d be really annoying’ by Sasha Rogelberg

‘They’ll lose their humanity’: Dartmouth professor says he’s surprised just how scared his Gen Z students are of AI by Nick Lichtenberg

Bill Gates identifies the biggest burden being passed on to his children after seeing his daughter harassed online by Eleanor Pringle

CEO Daily is compiled and edited by Joey Abrams, Claire Zillman and Lee Clifford.




It’s starting to look like we’ll never come up with a good way to tell what was written by AI and what was written by humans


People and institutions are grappling with the consequences of AI-written text. Teachers want to know whether students’ work reflects their own understanding; consumers want to know whether an advertisement was written by a human or a machine.

Writing rules to govern the use of AI-generated content is relatively easy. Enforcing them depends on something much harder: reliably detecting whether a piece of text was generated by artificial intelligence.

Some studies have investigated whether humans can detect AI-generated text. For example, people who themselves use AI writing tools heavily have been shown to accurately detect AI-written text. A panel of human evaluators can even outperform automated tools in a controlled setting. However, such expertise is not widespread, and individual judgment can be inconsistent. Institutions that need consistency at a large scale therefore turn to automated AI text detectors.

The problem of AI text detection

The basic workflow behind AI text detection is easy to describe. Start with a piece of text whose origin you want to determine. Then apply a detection tool, often an AI system itself, that analyzes the text and produces a score, usually expressed as a probability, indicating how likely the text is to have been AI-generated. Use the score to inform downstream decisions, such as whether to impose a penalty for violating a rule.
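The last step, turning a detector's probability score into a downstream decision, can be sketched as follows. The thresholds and action labels here are illustrative assumptions, not any real institution's policy:

```python
def decide(score, flag_at=0.9, clear_at=0.3):
    """Map a detector's probability score to a downstream action.

    Hypothetical thresholds: real policies depend on the stakes and on
    the detector's known false-positive rate.
    """
    if score >= flag_at:
        return "flag for review"  # strong signal, but still not proof
    if score <= clear_at:
        return "no action"
    return "inconclusive"  # mid-range scores should not trigger penalties

print(decide(0.95))  # flag for review
print(decide(0.50))  # inconclusive
```

Even a high score is evidence, not proof, which is why the sketch routes it to human review rather than directly to a penalty.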

This simple description, however, hides a great deal of complexity. It glosses over a number of background assumptions that need to be made explicit. Do you know which AI tools might have plausibly been used to generate the text? What kind of access do you have to these tools? Can you run them yourself, or inspect their inner workings? How much text do you have? Do you have a single text or a collection of writings gathered over time? What AI detection tools can and cannot tell you depends critically on the answers to questions like these.

There is one additional detail that is especially important: Did the AI system that generated the text deliberately embed markers to make later detection easier?

These indicators are known as watermarks. Watermarked text looks like ordinary text, but the markers are embedded in subtle ways that do not reveal themselves to casual inspection. Someone with the right key can later check for the presence of these markers and verify that the text came from a watermarked AI-generated source. This approach, however, relies on cooperation from AI vendors and is not always available.
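As a toy illustration of the idea (every name and number below is invented): a keyed hash splits the vocabulary into a "green" half, a watermarking generator prefers green words, and a verifier holding the key checks whether the green fraction is implausibly high. Production schemes operate on model tokens and use formal statistical tests rather than a fixed threshold:

```python
import hashlib

KEY = b"vendor-secret"  # hypothetical key shared by the AI vendor and verifier

def is_green(word):
    """A keyed hash deterministically assigns each word to the 'green' set."""
    digest = hashlib.sha256(KEY + word.lower().encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all words come out green

def green_fraction(text):
    words = text.split()
    return sum(is_green(w) for w in words) / len(words)

def verify_watermark(text, threshold=0.75):
    """Unwatermarked text lands near 50% green by chance; a much higher
    fraction suggests a generator that was steering toward green words."""
    return green_fraction(text) >= threshold

# Simulate a watermark-aware generator by keeping only green candidates:
vocab = ("alpha beta gamma delta epsilon zeta eta theta iota kappa "
         "lambda mu nu xi omicron pi rho sigma tau upsilon").split()
watermarked = " ".join(w for w in vocab if is_green(w))
print(verify_watermark(watermarked))  # True: every kept word is green
```

Without `KEY`, the green set looks like an arbitrary half of the vocabulary, which is why casual inspection reveals nothing.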

How AI text detection tools work

One obvious approach is to use AI itself to detect AI-written text. The idea is straightforward. Start by collecting a large corpus (a collection of writing) of examples labeled as human-written or AI-generated, then train a model to distinguish between the two. In effect, AI text detection is treated as a standard classification problem, similar in spirit to spam filtering. Once trained, the detector examines new text and predicts whether it more closely resembles the AI-generated examples or the human-written ones it has seen before.
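A minimal sketch of this learned-detector approach, with a toy bag-of-words Naive Bayes standing in for the large neural classifiers real detectors use; the tiny labeled "corpus" below is invented for illustration:

```python
import math
from collections import Counter

def train(labeled_texts):
    """Count word frequencies per class ('ai' or 'human')."""
    counts = {"ai": Counter(), "human": Counter()}
    for text, label in labeled_texts:
        counts[label].update(text.lower().split())
    return counts

def score_ai(text, counts):
    """P(ai | text) under naive Bayes with add-one smoothing."""
    vocab = set(counts["ai"]) | set(counts["human"])
    logp = {}
    for label in ("ai", "human"):
        total = sum(counts[label].values())
        logp[label] = sum(
            math.log((counts[label][w] + 1) / (total + len(vocab)))
            for w in text.lower().split()
        )
    m = max(logp.values())  # log-sum-exp for numerical stability
    return math.exp(logp["ai"] - m) / sum(math.exp(v - m) for v in logp.values())

corpus = [  # invented examples; a real corpus needs many thousands
    ("delve into the rich tapestry of innovation", "ai"),
    ("furthermore it is important to note the landscape", "ai"),
    ("ugh my train was late again this morning", "human"),
    ("we grabbed tacos and argued about the game", "human"),
]
model = train(corpus)
print(score_ai("it is important to delve into the landscape", model))  # near 1
print(score_ai("my train was late again", model))                      # near 0
```

The sketch also shows the approach's core weakness: the score reflects only how closely new text resembles the training examples, which is exactly why stale training data degrades real detectors.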

The learned-detector approach can work even if you know little about which AI tools might have generated the text. The main requirement is that the training corpus be diverse enough to include outputs from a wide range of AI systems.

But if you do have access to the AI tools you are concerned about, a different approach becomes possible. This second strategy does not rely on collecting large labeled datasets or training a separate detector. Instead, it looks for statistical signals in the text, often in relation to how specific AI models generate language, to assess whether the text is likely to be AI-generated. For example, some methods examine the probability that an AI model assigns to a piece of text. If the model assigns an unusually high probability to the exact sequence of words, this can be a signal that the text was, in fact, generated by that model.
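A sketch of that statistical signal, with a hand-built bigram table (invented numbers) standing in for a real model's next-word distribution; an actual detector would query the model itself for per-token log-probabilities:

```python
import math

# Toy stand-in for a model's next-word probabilities (invented values).
BIGRAM_P = {
    ("the", "cat"): 0.4, ("cat", "sat"): 0.5, ("sat", "on"): 0.6,
    ("on", "the"): 0.5, ("the", "mat"): 0.3,
}
FLOOR = 1e-4  # probability assigned to word pairs the model finds surprising

def avg_logprob(text):
    """Mean log-probability the (toy) model assigns to the word sequence."""
    words = text.lower().split()
    return sum(math.log(BIGRAM_P.get(p, FLOOR))
               for p in zip(words, words[1:])) / (len(words) - 1)

def looks_model_generated(text, threshold=-2.0):
    """Unusually high average log-probability hints the model produced it."""
    return avg_logprob(text) > threshold

print(looks_model_generated("the cat sat on the mat"))                 # True
print(looks_model_generated("colorless green ideas sleep furiously"))  # False
```

Note the dependency this creates: the test only works if you can obtain probabilities from the specific model you suspect, which is exactly the access problem discussed for proprietary systems.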

Finally, in the case of text that is generated by an AI system that embeds a watermark, the problem shifts from detection to verification. Using a secret key provided by the AI vendor, a verification tool can assess whether the text is consistent with having been generated by a watermarked system. This approach relies on information that is not available from the text alone, rather than on inferences drawn from the text itself.

Video: AI engineer Tom Dekan demonstrates how easily commercial AI text detectors can be defeated.

Limitations of detection tools

Each family of tools comes with its own limitations, making it difficult to declare a clear winner. Learning-based detectors, for example, are sensitive to how closely new text resembles the data they were trained on. Their accuracy drops when the text differs substantially from the training corpus, which can quickly become outdated as new AI models are released. Continually curating fresh data and retraining detectors is costly, and detectors inevitably lag behind the systems they are meant to identify.

Statistical tests face a different set of constraints. Many rely on assumptions about how specific AI models generate text, or on access to those models’ probability distributions. When models are proprietary, frequently updated or simply unknown, these assumptions break down. As a result, methods that work well in controlled settings can become unreliable or inapplicable in the real world.

Watermarking shifts the problem from detection to verification, but it introduces its own dependencies. It relies on cooperation from AI vendors and applies only to text generated with watermarking enabled.

More broadly, AI text detection is part of an escalating arms race. Detection tools must be publicly available to be useful, but that same transparency enables evasion. As AI text generators grow more capable and evasion techniques more sophisticated, detectors are unlikely to gain a lasting upper hand.

Hard reality

The problem of AI text detection is simple to state but hard to solve reliably. Institutions with rules governing the use of AI-written text cannot rely on detection tools alone for enforcement.

As society adapts to generative AI, we are likely to refine norms around acceptable use of AI-generated text and improve detection techniques. But ultimately, we’ll have to learn to live with the fact that such tools will never be perfect.

Ambuj Tewari, Professor of Statistics, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Your mortgage likely cost $11,500 to originate—and reams of paperwork. How Salesforce Agentforce is helping improve the process


The Fed recently lowered interest rates for the third consecutive time, and for the second time in two months. The move signaled easing financial conditions that are likely to trigger a surge in demand for mortgages across the country — particularly in regions where there have already been signs of a housing rebound.

But the higher volume will also undoubtedly present a challenge to financial institutions, if they are bound by legacy technology. Too much of the mortgage technology still used by many banks and other lending institutions isn’t designed to keep up with increased demand. Nor are these outmoded systems able to improve profit margins for lenders. A recent Freddie Mac study indicated that as recently as this summer, mortgages still regularly cost, on average, more than $11,500 for a lender to originate. 

And so, the mortgage market is ripe for innovation. Salesforce supports banks and lenders by helping them bring together customer data including borrower profiles, loan details, and interactions, with AI built in to help teams work more efficiently and better support borrowers.

In conversations with our mortgage customers and industry leaders, we’re seeing growing interest in AI agents — autonomous systems that can take action on tasks. This agentic approach will empower lenders to rethink the entire mortgage process, turning the loan lifecycle from a slow, paper-intensive gauntlet into a streamlined digital journey. Embracing AI agents can also redefine the entire value chain, from property valuation and listing to lending and long-term asset management.

As someone who served as an executive in the Federal Housing Administration within the U.S. Department of Housing and Urban Development (HUD) during the aftermath of the 2008 financial crisis, I now often wonder if aspects of that mortgage-based calamity could have been mitigated if the industry had access to agentic AI in the functional areas of quality control and risk and fraud management back then.

Today, agentic AI offers a level of visibility that simply didn’t exist back then—providing the real-time insights that allow lenders to better support borrowers and ensure they are in the best possible financial position from the start.

Agentic applications

There are many banking and lending benefits to agentic AI.

Let’s start with one of the most basic — automation. Much of lending consists of rote tasks, which account for a significant portion of the mortgage process, including the collection and assimilation of data such as bank statements, pay stubs, and property details. Agentic AI can automate this work, drastically reducing the time it takes to process and underwrite a loan. This efficiency drives down the cost of originating a loan, a critical metric for any lender.

Another benefit comes in proactive risk management. Agentic AI excels in this area by providing automated underwriting and sophisticated risk modeling to catch potential issues early in the lending process. By analyzing vast amounts of borrower data and property values in real time, AI systems can spot patterns, flag anomalies (such as undisclosed payments on a bank statement), and make informed lending decisions faster than traditional and manual methods. This technological capability not only protects the lending institution but also imbues a sense of urgency that helps keep things moving. 

The impact of AI, of course, extends beyond the lending back office and into the heart of the property transaction itself, transforming how assets are valued, marketed, and managed. The traditional slow and often subjective property appraisal process is being revolutionized by AI-driven automated valuation models (AVMs). These use machine learning to analyze thousands of data points in seconds, drawing from MLS records, tax rolls, deeds, and unstructured data such as property photos and listing descriptions. 

For real estate professionals, AI-powered systems can generate high-quality and engaging listing descriptions, optimizing them for search visibility and providing personalized property recommendations to buyers by analyzing buyer preferences and behavior.

There’s a customer service aspect to AI, as well. Many inbound customer inquiries come through lenders’ websites. Yet, if the responses depend entirely on overworked human customer service agents, many of these leads go unanswered. By managing and rerouting these inquiries with agentic AI, organizations can ensure that no potential customer is ignored. 

Customers for life

The real business opportunity with agentic AI in the lending industry comes in the area of intelligent indexing, or what some might call the “contextual cross-sell/upsell.” This begins with the mortgage application and incorporates other data into a golden record of customer experience. 

Consider all the disparate data that a full-service financial institution holds about a single customer. A cloud-based AI platform that aggregates all this information and makes it accessible to AI agents can digest data and proactively recommend products or opportunities to expand that customer’s relationship with the lender.

In some cases, this might mean steering a customer toward another mortgage product such as a home equity line of credit. In others, it might mean suggesting to that customer an entirely different financial endeavor such as a 529 account if a young family wants to start saving for their children’s college tuition, or a life insurance product to ensure a family is protected in times of crisis.

This proactive service transforms loan officers from paperwork processors into financial-service concierges — professionals who are focused on strategic relationship-building and turning mortgage applicants into customers for life.

Rising to the Challenge

Of course, the agentic AI era is not without potential pitfalls – particularly in a regulated industry like housing.

The first challenge: Overcoming the spectre of bias. The use of AI in lending decisions, AVMs, and tenant screening must be subject to rigorous guardrails to prevent discrimination and the perpetuation of historical biases embedded in training data. 

Lenders must be able to explain how AI models arrived at a decision, a key regulatory piece known as explainability. This concept dictates that AI serves primarily in an assistive capacity, ensuring that a human remains in the loop for critical decisions like final underwriting, where judgment and empathy are irreplaceable.

If mortgage lending companies implement agentic AI across the organization — to become truly agentic enterprises — the industry could yield some of the most effective AI use cases in the marketplace today. Housing and its related financial activities are ripe to become an agentic industry — an efficient, integrated, and predictive ecosystem where the intelligent use of data creates certainty for borrowers and a competitive advantage for businesses.

Agentic AI technology – in conjunction with skilled humans in the loop – provides a transformative opportunity. Forward-thinking lending institutions will be brave enough to seize it.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.




Intuit CFO: Stablecoins are the new ‘digital dollar’ rail


Good morning. Intuit is entering a multi-year strategic partnership with Circle Internet Group to integrate Circle’s USDC stablecoin and infrastructure across the Intuit platform.

“Our partnership with Circle is a strategic step toward building a world-class financial platform designed for an always-on, global economy,” Intuit CFO Sandeep Aujla told me about the partnership announced on Dec. 18. “By integrating stablecoins like USDC as a new ‘digital dollar’ rail for Intuit, we will help customers move money more seamlessly by extending our platform with a 24/7, programmable method that settles transactions near-instantly and at materially lower cost.”​

Intuit, a fintech company and maker of TurboTax, Credit Karma, and QuickBooks, already orchestrates across bank, card, and real-time payment methods, Aujla explained. Stablecoins add a modern, software-native rail that allows the company to move money with the same speed and intelligence as the rest of its platform, he said. When identity, wallets, and workflows come together, Intuit’s platform advantages compound, he added.​

“Intuit’s massive scale and industry leadership make it an ideal platform to extend the speed, power, and efficiency of USDC for everyday financial transactions,” Jeremy Allaire, co-founder, chairman, and CEO of Circle, said in a statement.​

Stablecoins, such as Circle’s USDC, are digital assets designed to maintain a stable value, typically pegged to and backed by the U.S. dollar or equivalent assets. In the U.S., the GENIUS Act has clarified how stablecoins are regulated.

Circle CFO Jeremy Fox-Geen recently told me that regulatory certainty is “a major unlock” for large companies considering digital assets for corporate treasuries. Circle made its public debut on the New York Stock Exchange on June 5, marking the largest two-day post-IPO surge since 1980, Fortune reported.

For Intuit, the long-term opportunity lies in the network effects, Aujla noted. He commented: “Approximately 100 million consumers and businesses use Intuit to get paid, pay others, and manage cash flow. We can embed smarter automation, richer insights, and new financial capabilities directly into their daily workflows. Intuit is moving with the speed of a startup and the discipline of an enterprise to define the next generation of money movement.”​

Sheryl Estrada
sheryl.estrada@fortune.com

This story was originally featured on Fortune.com


