OpenAI ChatGPT and Anthropic Claude chatbot usage studies may signal job losses ahead
Published 3 months ago by Jace Porter
Hello and welcome to Eye on AI…In this edition: OpenAI and Anthropic detail chatbot usage trends…AI companies promise big investments in the U.K….and the FTC probes chatbots’ impact on kids.
Yesterday saw the release of dueling studies from OpenAI and Anthropic about the usage of their respective AI chatbots, ChatGPT and Claude. The studies provide a good snapshot of who is using AI chatbots and what they are using them for. But the two reports were also a study in contrasts, with OpenAI clearly emerging as primarily a consumer product, while Claude’s use cases were more professionally oriented.
The ChatGPT study confirmed the huge reach OpenAI has, with 700 million active weekly users, or almost 10% of the global population, exchanging some 18 billion messages with the chatbot every week. And the majority of those messages—70%—were classified by the study’s authors as “non-work” queries. Of these, about 80% of the messages fell into three big categories: practical guidance, writing help, and seeking information. Within practical guidance, teaching or tutoring queries accounted for more than a third of messages. How many of these were students using ChatGPT to “help” with homework or class assignments was unclear—but ChatGPT has a young user base, with nearly half of all messages coming from those under the age of 26.
Educated professionals more likely to be using ChatGPT for work
When ChatGPT was used for work, it was most likely to be used by highly educated users working in high-paid professions. While this is perhaps not surprising, it is a bit depressing.
There is a vision of our AI future, one which I outline in my book, Mastering AI, in which the technology becomes a leveling force. With the help of AI copilots and decision-support systems, people with fewer qualifications or experience could take on some of the work currently performed by more skilled and experienced professionals. They might not earn as much as those more qualified individuals, but they could still earn a good middle-class income. To some extent, this already happens in law, with paralegals, and in medicine, with nurse practitioners. But this model could be extended to other professions, for instance accounting and finance—democratizing access to professional advice and helping shore up the middle class.
There’s another vision of our AI future, however, where the technology only makes economic inequality worse, with the most educated and credentialed using AI to become even more productive, while everyone else falls farther behind. I fear that, as this ChatGPT data suggests, that’s the way things may be heading.
While there’s been a lot of discussion lately of the benefits and dangers of using chatbots for companionship, or even romance, OpenAI’s research showed messages classified as being about relationships constituted just 2.4% of messages, personal reflection 1.9%, and role-playing and games 0.4%.
Interestingly, given how fiercely all the leading AI companies—including OpenAI—compete with one another on coding benchmarks and tout the coding performance of their models, coding was a relatively small use case for ChatGPT, constituting just 4.2% of the messages the researchers analyzed. (One big caveat here is that the research only looked at the consumer versions of ChatGPT—its free, premium, and pro tiers—but not usage of the OpenAI API or enterprise ChatGPT subscriptions, which is how many business users may access ChatGPT for professional use cases.)
Meanwhile, coding constituted 39% of Claude.ai’s usage. Software development tasks also dominated the use of Anthropic’s API.
Automation rather than augmentation dominates work usage
Read together, both studies also hinted at an intriguing contrast in how people were using chatbots in work contexts, compared to more personal ones.
ChatGPT messages classified as non-work related were more about what the researchers called “asking”—which involved seeking information or advice—as opposed to “doing” prompts, where the chatbot was asked to complete a task for the user. But in work-related messages, “doing” prompts were more common, constituting 56% of message traffic.
For Anthropic, where work-related messages seemed more dominant to begin with, there was a clear trend of users asking the chatbot to complete tasks for them; in fact, the majority of Anthropic’s API usage (some 77%) was classified as automation requests. Anthropic’s research also indicated that many of the tasks most popular with business users of Claude were also those that were most expensive to run, suggesting that companies are probably finding—despite some survey and anecdotal evidence to the contrary—that automating tasks with AI is indeed worth the money.
The studies also indicate that in business contexts people increasingly want AI models to automate tasks for them, not necessarily offer decision support or expert advice. This could have significant implications for economies as a whole: If companies mostly use the technology to automate tasks, the negative effect of AI on jobs is likely to be far greater.
There were lots of other interesting tidbits in the two studies. For instance, whereas previous usage data had shown a significant gender gap, with men far more likely than women to be using ChatGPT, the new study shows that gap has now disappeared. Anthropic’s research shows interesting geographic divergence in Claude usage too—usage is concentrated on the coasts, which is to be expected, but there are also hotspots in Utah and Nevada.
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
FORTUNE ON AI
China says Nvidia violated antitrust laws as it ratchets up pressure ahead of U.S. trade talks—by Jeremy Kahn
AI chatbots are harming young people. Regulators are scrambling to keep up.—by Beatrice Nolan
OpenAI’s deal with Microsoft could pave the way for a potential IPO—by Beatrice Nolan
EYE ON AI NEWS
Alphabet announces $6.8 billion investment in U.K.-based AI initiatives, as other tech companies also announce U.K. investments alongside Trump’s state visit. Google’s parent company announced a £5 billion ($6.8 billion) investment in the U.K. over the next two years, funding AI infrastructure, a new $1 billion AI data center that is set to open this week, and more funding for research at Google DeepMind, its advanced AI lab that continues to be headquartered in London. The BBC reports that the investments were unveiled ahead of President Trump’s state visit to Britain. Many other big U.S. tech companies are expected to make similar investments over the next few days. For instance, Nvidia, OpenAI, and U.K. data center provider Nscale also announced a multibillion-dollar data center project this week. More on that here from Bloomberg. Meanwhile, Salesforce said it was increasing a previously announced package of investments in the U.K., much of it around AI, from $4 billion to $6 billion.
FTC launches inquiry into AI chatbot effects on children amid safety concerns. The U.S. Federal Trade Commission has started an inquiry into how AI chatbots affect children, sending detailed questionnaires to six major companies including OpenAI, Alphabet, Meta, Snap, xAI, and Character.AI. Regulators are seeking information on issues such as sexually themed responses, safeguards for minors, monetization practices, and how companies disclose risks to parents. The move follows rising concerns over children’s exposure to inappropriate or harmful content from chatbots, lawsuits and congressional scrutiny, and comes as firms like OpenAI have pledged new parental controls. Read more here from the New York Times.
Salesforce backtracks, reinstates team that helped customers adopt AI agents. The team, called Well-Architected, had displeased Salesforce CEO Marc Benioff by suggesting to customers that deploying AI agents successfully would take extensive planning and significant work, a position that contradicted Benioff’s own pitch to customers that, with Salesforce, deploying AI agents was a cinch. Now, according to a story in The Information, the software company has had to reconstitute the team, which provided advisory and consulting help to companies implementing Agentforce. The company is finding Agentforce adoption is lagging its expectations—with fewer than 5% of its 150,000 clients currently paying for the AI agent product, the publication reported—amid complaints that the product is too expensive, too difficult to implement, and too prone to accuracy issues and errors. Having invested heavily in the pivot to Agentforce, Benioff is now under pressure from investors to deliver.
Humanoid robotics startup Figure AI valued at $39 billion in new funding deal. Figure AI, a startup developing humanoid robots, has raised over $1 billion in a new funding round that values the company at $39 billion, making it one of the world’s most valuable startups, Bloomberg reports. The round was led by Parkway Venture Capital with participation from major backers including Nvidia, Salesforce, Brookfield, Intel, and Qualcomm, alongside earlier supporters like Microsoft, OpenAI, and Jeff Bezos. Founded in 2022, Figure aims to build general-purpose humanoid robots, though Fortune’s Jason del Rey questioned whether the company was exaggerating the extent to which its robots were being deployed with BMW.
EYE ON AI RESEARCH
Can AI replace my job? Journalists are certainly worried about what AI is doing to the profession. After some initial fears that AI would directly replace journalists, the concern has largely shifted to worries that AI will further undermine the business models that fund good journalism (see Brain Food below). But recently a group of AI researchers in Japan and Taiwan created a benchmark called NEWSAGENT to see how well LLMs can do at actually taking source material and composing accurate news stories. It turned out that the models could, in many cases, do an OK job.
But the most interesting thing about the research is how the scientists, none of whom were journalists, characterized the results. They found that Alibaba’s open-weight model, Qwen-3 32B, did best stylistically, but that GPT-4o did better on metrics like objectivity and factual accuracy. And they write that human-written stories did not consistently outperform those drafted by the AI models in overall win rates, but that the human-written stories “emphasize factual accuracy.” The human-written stories were also often judged to be more objective than the AI-written ones.
The problem here is that in the real world, factual accuracy is the bedrock of journalism, and objectivity would be a close second. If the models fall down on accuracy, they should lose in every case to the human-written stories, even if evaluators preferred the AI-written ones stylistically.
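To make that concrete, here is a minimal sketch of the scoring rule I’m arguing for: treat factual accuracy as a hard gate, and let stylistic preference break ties only between stories that pass it. This is illustrative Python only; the function names, scores, and the 0.95 floor are my assumptions, not anything taken from the NEWSAGENT paper.

```python
from dataclasses import dataclass

@dataclass
class StoryScores:
    accuracy: float  # share of checked facts that are correct (0.0-1.0)
    style: float     # evaluator's stylistic preference (0.0-1.0)

def pick_winner(human: StoryScores, model: StoryScores,
                accuracy_floor: float = 0.95) -> str:
    """Accuracy-gated comparison: a story below the accuracy floor
    loses outright, no matter how well it reads."""
    human_ok = human.accuracy >= accuracy_floor
    model_ok = model.accuracy >= accuracy_floor
    if human_ok and not model_ok:
        return "human"
    if model_ok and not human_ok:
        return "model"
    # Both pass (or both fail) the gate: fall back to style preference.
    return "human" if human.style >= model.style else "model"

# A fluent but factually shaky AI draft still loses under this rule.
print(pick_winner(StoryScores(accuracy=0.98, style=0.6),
                  StoryScores(accuracy=0.80, style=0.9)))  # -> human
```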
This is why computer scientists should not be left to create benchmarks for real-world professional tasks without deferring to expert advice from people working in those professions. Otherwise you get distorted views of what AI models can and can’t do. You can read the NEWSAGENT research here on arxiv.org.
AI CALENDAR
Oct. 6-10: World AI Week, Amsterdam
Oct. 21-22: TedAI San Francisco
Nov. 10-13: Web Summit, Lisbon
Nov. 26-27: World AI Congress, London
Dec. 2-7: NeurIPS, San Diego
Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.
BRAIN FOOD
Is Google the most malevolent AI actor? A lot of publishing execs are starting to say so. At Fortune Brainstorm Tech in Deer Valley, Utah, last week, Neil Vogel, the CEO of magazine publisher People Inc., said that Google was “the worst” when it came to using publishers’ content without permission to train AI models. The problem, Vogel said, is that Google used the same web crawlers to index sites for Google Search as it did to scrape content to feed its Gemini AI models. While other AI vendors have increasingly been cutting multimillion-dollar annual licensing deals to pay for publishers’ content, Google has refused to do so. And publishers can’t block Google’s bots without losing the search traffic on which they currently depend for revenue.
You can read more on Vogel’s comments here.
A sweeping Reuters investigation has put a price tag on Meta’s tolerance for ad fraud: billions of dollars a year. For Rob Leathern, a former Meta executive who led the company’s business integrity operations until 2019, the findings expose a stark tension between revenue growth and consumer harm.
The report, published Monday, found that Meta generated roughly $18 billion in advertising revenue from China in 2024, around 10% of its global revenue, even as internal documents showed that nearly one-fifth of that (about $3 billion) came from ads tied to scams, illegal gambling, pornography, and other prohibited activity. Meta internally labeled China its top “scam exporting nation,” accounting for 25% of all scam and banned-product ads globally, according to the report.
Meta’s core social media platforms (Facebook, Instagram, WhatsApp) are blocked in China, but the company still earns billions from Chinese advertisers targeting global users.
The investigation, Leathern told Fortune, illuminates several issues with both Meta and the broader Chinese ad market. “It appears that a variety of business partners that Meta has are not conducting themselves in an ethical way and/or there are employees of those companies that are not doing what they’re supposed to be doing,” he said. “It’s quite telling that Meta took down its entire partner directory, which obviously means that they must be reviewing their partners, and there’s a lot of them.”
“Scams are spiking across the internet, driven by persistent criminals and sophisticated, organized crime syndicates constantly evolving their schemes to evade detection. We are focused on rooting them out by using advanced technical measures and new tools, disrupting criminal scam networks, working with industry partners and law enforcement, and raising awareness on our platforms about scam activity. And when we determine that bad actors have violated our rules prohibiting fraud and scams, we take action,” a Meta spokesperson told Fortune in a statement.
Meta communications chief Andy Stone, however, pushed back on the investigation, posting on Threads, “Once again, Reuters is misconstruing and misrepresenting the facts.” He argued that CEO Mark Zuckerberg’s “integrity strategy pivot”—which included instructing the China ads-enforcement team to reportedly “pause” its work—was to improve teams’ goals and “instruct them to redouble efforts to fight frauds and scams globally, not just from specific markets.”
Stone also claimed that these teams have “doubled their fraud and scam reduction goal and over the last 15 months, user reports of scam ads have declined by well over 50%.”
The revelations published by Reuters echo—but far exceed—the AI-driven deepfake scheme earlier this year involving Goldman Sachs, in which scammers used AI-generated videos of investment strategist Abby Joseph Cohen to lure retail investors into fraudulent WhatsApp groups via Instagram ads.
Reuters’ reporting suggests Meta’s China-linked scam problem is not an edge case or a blind spot, but an allegedly known and lucrative segment of its advertising business.
According to internal estimates cited by Reuters, Meta served as many as 15 billion “high-risk” fraudulent ads per day, generating roughly $7 billion annually. The company required a 95% confidence threshold before banning fraudulent advertisers; those falling below it were often allowed to continue operating, sometimes at higher fees. Meta also established a 0.15% revenue “guardrail” (about $135 million) as the maximum revenue it was willing to forgo to crack down on suspicious ads, even as it earned $3.5 billion every six months from ads deemed to carry “higher legal risk.”
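For illustration, here is roughly what the enforcement logic Reuters describes would look like as code: ban an advertiser only when a fraud classifier clears the 95% confidence bar, and let everything below it keep running, sometimes at penalty pricing. This is a minimal hypothetical sketch; the names are invented, the 0.50 lower bound is made up for the example, and nothing here reflects Meta’s actual systems.

```python
BAN_CONFIDENCE = 0.95  # reported threshold before banning an advertiser

def enforcement_action(fraud_confidence: float) -> str:
    """Map a fraud classifier's confidence to the outcomes Reuters reports."""
    if fraud_confidence >= BAN_CONFIDENCE:
        return "ban"
    if fraud_confidence >= 0.50:  # invented lower bound, for illustration
        # Sub-threshold advertisers reportedly kept running, sometimes at higher fees.
        return "allow_at_higher_ad_rates"
    return "allow"

for confidence in (0.99, 0.80, 0.20):
    print(confidence, "->", enforcement_action(confidence))
```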
Internal decision-making was explicit. When enforcement staff proposed shutting down fraudulent accounts, internal documents reviewed by Reuters showed they sought assurance that growth teams would not object “given the revenue impact.” Asked whether Meta would penalize high-spending Chinese partners running scams, the answer was reportedly “No,” citing “high revenue impact.” Internal assessments reportedly noted that revenue from risky ads would “almost certainly exceed the cost of any regulatory settlement,” effectively treating fines as a cost of doing business.
In late 2024, Meta reinstated 4,000 second-tier Chinese ad agencies that had previously been suspended, unlocking $240 million in annualized revenue—roughly half of it tied to ads violating Meta’s own safety policies, according to the investigation. More than 75% of harmful ad spending, Reuters found, came from accounts benefiting from Meta’s partner protections. The company also disbanded its China-focused anti-scam team.
An external audit commissioned by Meta from the Propellerfish Group reached a blunt conclusion when investigating fraud and scams on the platform: Meta’s “own behavior and policies” were promoting systemic corruption in China’s advertising ecosystem. Reuters reported that the company largely ignored the findings and expanded operations anyway.
Leathern, who reviewed the reporting and internal figures referenced in the report, told Fortune the scale of the problem was difficult to defend. “I was disappointed that the violation rates for the China-specific advertisers were as high as they were in the last year,” he said. “It’s disappointing, because there are ways to make it lower.”
His critique goes to the heart of the failure. Platforms, he said, should hold intermediary agencies accountable for the quality of advertisers they bring in. “If you’re measuring violation rates coming from certain partners, and those rates are above a threshold every quarter or every year, you can just fire your worst-performing customers,” he said.
“I think it’s important for us to have some sense of transparency into how policies are being enforced and what companies are doing in terms of reducing scams on their platforms,” Leathern added.
Over the last 18 months, Meta has removed or rejected more than 46 million advertisements placed via so-called resellers, or large Chinese ad firms. And more than 99% of ad accounts associated with resellers found to be violating the company’s fraud policies were proactively detected and disabled.
Aside from a need for transparency, Leathern warned that prioritizing short-term revenue over trust ultimately threatens the business itself. “If people don’t trust advertisers, advertising, it reduces the effectiveness of that channel for all advertisers,” he said. “There’s a lot of risk to their business, directly and indirectly, from not doing a good enough job on stopping scams.”
The human cost is already visible. Reuters documented victims across North America and Asia, including U.S. and Canadian investors who lost life savings to fake stock and crypto ads, Taiwanese consumers misled into buying counterfeit health products, and a Canadian Air Force recruiter whose Facebook account was hijacked to run crypto scams. Meta’s own internal safety staff estimated the company’s platforms were “involved” in roughly one-third of all successful U.S. scams, linked to more than $50 billion in consumer losses.
The problem is intensifying as generative AI lowers the barrier for scammers. “You can create something that looks plausible far more easily than ever before,” Leathern said. “The speed and adaptability of criminals and their use of AI tools just makes the environment far more tricky.”
Yet Leathern said platforms like Meta have not been sufficiently transparent about how aggressively they are using those same tools to fight abuse. “We just don’t have a ton of insight into what they’re doing to reduce scams and fraud coming in through ads,” he said.
For Leathern, the investigation should be a turning point. “I hope they see this as an opportunity to improve things for people,” he said.
Morgan Stanley strategist Michael Wilson says lackluster job numbers could actually be good news
Published 60 minutes ago on December 15, 2025, by Jace Porter
Ahead of the highly anticipated November jobs data to be released this week, even lackluster numbers may be greeted with relief by Wall Street.
A moderately cooling labor market could increase the likelihood of more rate cuts by the Federal Reserve—a tantalizing prospect for many investors eying future earnings growth—fueling bullish behaviors in the stock market, according to Morgan Stanley analysts.
“We are now firmly back in a good is bad/bad is good regime,” Michael Wilson, chief U.S. equity strategist and chief investment officer for Morgan Stanley, wrote in a note to investors on Monday.
Fed Chair Jerome Powell’s divisive cut last week, the Fed’s third in as many meetings, was based on consistent data showing a softening job market, including unemployment rising three months in a row through September and the private sector shedding 32,000 jobs last month, per ADP’s November report.
According to Powell, the quarter-point cut was defensive, a way to prevent the labor market from tumbling. He added that while inflation sits at about 2.8%, higher than the Fed’s preferred 2%, he expects it to peak early next year, barring additional tariffs.
He added that monthly jobs data may have been overcounted by about 60,000 as a result of data collection errors, and that payroll gains may actually be stagnant or even negative.
“I think a world where job creation is negative…we need to watch that very carefully,” Powell said at the press conference directly following the announcement of the rate cut.
Wilson suggested that Powell’s emphasis on the jobs data, as well as his de-emphasis on tariff-caused inflation, makes the labor market a crucial factor in monetary policy going into 2026.
Because of the government shutdown, the Labor Department’s jobs report, due Tuesday, will contain data from both October and November. It is expected to show a modest 50,000 payroll gain in November, with the unemployment rate ticking up from 4.4% to about 4.5%, consistent with the trend of a labor market that is slowing, but not suddenly bottoming out.
‘Rolling recovery’ versus plain bad news
The Morgan Stanley strategist has previously argued that weak payroll numbers are actually a sign of a “rolling recovery,” with the economy in the early stages of an upswing slowly making its way through each sector. It follows three years of a “rolling recession” that Wilson said had kept the economy weaker than what employment and GDP figures suggested.
In Wilson’s eyes, because jobs data is a lagging metric, the trough of the labor cycle was actually back in the spring, coinciding with mass DOGE firings and “Liberation Day” tariffs. For a more accurate representation of the health of the economy, Wilson argued, look instead at the markets. The S&P 500, for example, is up nearly 13% over the last six months.
However, with Powell basing his policy decisions on data such as jobs, Wilson noted, the Fed could still see more room to cut, even as Morgan Stanley sees a labor market that is not in jeopardy.
“In real time, the data has not been weak enough to justify cutting more,” Wilson told CNBC last week prior to the Fed meeting. “But when they actually look at the revisions now…it’s very clear that we had a significant labor cycle, and we’ve come out of it, which is very good.”
But just as economists weren’t in consensus over the FOMC’s most recent rate cut, the prospect of more meager jobs numbers is not universally welcomed.
Claudia Sahm, chief economist at New Century Advisors and a former Fed economist, agreed that jobs data is a lagging economic indicator, but warned it could indicate a recession is underway, not that we’re already in the clear. Particularly concerning to her was that the lagging labor data could portend worse jobs news, as layoffs have yet to surge following shrinking job openings.
She told Fortune ahead of the Fed’s decision last week that additional rate cuts would not be welcome news, but rather a sign the Fed had acted too late in trying to correct a battered labor market.
“If the Powell Fed ends up doing a lot more cuts, then we probably don’t have a good economy,” she said. “Be careful what you wish for.”
Actor Joseph Gordon-Levitt wonders why AI companies don’t have to ‘follow any laws’
Published 2 hours ago on December 15, 2025, by Jace Porter
In a sharp critique of the current artificial intelligence landscape, the actor turned filmmaker and, increasingly, AI activist Joseph Gordon-Levitt challenged the tech industry’s resistance to regulation, posing a provocative rhetorical question to illustrate the dangers of unchecked development: “Are you in favor of erotic content for eight-year-olds?”
Speaking at the Fortune Brainstorm AI conference this week with editorial director Andrew Nusca, Gordon-Levitt used “The Artist and the Algorithm” session to pose another, deeper question: “Why should the companies building this technology not have to follow any laws? It doesn’t make any sense.”
In a wide-ranging conversation covering specific failures of self-regulation, including instances in which “AI companions” on major platforms reportedly veered into inappropriate territory for children, Gordon-Levitt argued that relying on internal company policies rather than external law is insufficient, noting that such features were approved by corporate ethicists.
Gordon-Levitt’s criticisms were aimed, in part, at Meta, following the actor’s appearance in a New York Times Opinion video series airing similar claims. Meta spokesperson Andy Stone pushed back hard on X.com at the time, noting Gordon-Levitt’s wife was formerly on the board of Meta rival OpenAI.
Gordon-Levitt argued without government “guardrails,” ethical dilemmas become competitive disadvantages. He explained that if a company attempts to “prioritize the public good” and take the “high road,” they risk being “beat by a competitor who’s taking the low road.” Consequently, he said he believes business incentives alone will inevitably drive companies toward “dark outcomes” unless there is an interplay between the private sector and public law.
‘Synthetic intimacy’ and children
Beyond the lack of regulation, Gordon-Levitt expressed deep concern regarding the psychological impact of AI on children. He compared the algorithms used in AI toys to “slot machines,” saying they use psychological techniques designed to be addictive.
Drawing on conversations with NYU psychologist Jonathan Haidt, Gordon-Levitt warned against “synthetic intimacy.” He argued that while human interaction helps develop neural pathways in young brains, AI chatbots provide a “fake” interaction designed to serve ads rather than foster development.
“To me it’s pretty obvious that you’re going down a very bad path if you’re subjecting them to this synthetic intimacy that these companies are selling,” he said.
Haidt, whose New York Times bestseller The Anxious Generation was recommended by Gordon-Levitt onstage, recently appeared at a Dartmouth-United Nations Development Program symposium on mental health among young people, where he used the metaphor of tree roots for neurons. Explaining that tree-root growth is shaped by the surrounding environment, he brought up a picture of a tree growing around a Civil War–era tombstone. With Gen Z and technology, specifically the smartphone, he said: “Their brains have been growing around their phones very much in the way that this tree grew around this tombstone.” He also discussed the physical manifestations of this adaptation, with children “growing hunched around their phone,” as screen addiction is literally “warping eyeballs,” leading to a global rise in myopia, or shortsightedness.
The ‘arms race’ narrative
When addressing why regulations have been slow to materialize, Gordon-Levitt pointed to a powerful narrative employed by tech companies: the geopolitical race against China. He described this framing as “storytelling” and “handwaving” designed to bypass safety checks. Companies often compare the development of AI to the Manhattan Project, arguing that slowing down for safety means losing a war for dominance. Indeed, the Trump administration’s “Genesis Mission” to encourage AI innovation was unveiled with similar fanfare just weeks ago, in late November.
However, this stance met with pushback from the audience. Stephen Messer of Collectiv[i] argued that Gordon-Levitt’s arguments were falling apart quickly in a “room full of AI people.” As an example, he said privacy rules had previously decimated the U.S. facial recognition industry, allowing China to take a dominant lead within just six months. Gordon-Levitt acknowledged the complexity, admitting that “anti-regulation arguments often cherrypick” bad laws to argue against all laws. He maintained that while the U.S. shouldn’t cede ground, “we have to find a good middle ground” rather than having no rules at all.
Gordon-Levitt also criticized the economic model of generative AI, accusing companies of building models on “stolen content and data” while claiming “fair use” to avoid paying creators. He warned that a system in which “100% of the economic upside” goes to tech companies and “0%” goes to the humans who created the training data is unsustainable.
Despite his criticisms, Gordon-Levitt clarified that he is not a tech pessimist. He said he would absolutely use AI tools if they were “set up ethically” and creators were compensated. However, he concluded that without establishing the principle that a person’s digital work belongs to them, the industry is heading down a “pretty dystopian road.”