
Australia will start banning kids from social media this week




Starting this Wednesday, many Australian teens will find it nearly impossible to access social media. That’s because, as of Dec. 10, social media platforms like TikTok and Instagram must bar those under the age of 16 or face significant fines. Australian Prime Minister Anthony Albanese called the pending ban “one of the biggest social and cultural changes our nation has faced” in a statement.

Much is riding on this ban—and not just in Australia. Other countries in the region are watching Canberra’s ban closely. Malaysia, for example, said that it also plans to bar under-16s from accessing social media platforms starting next year. 

Other countries are considering less drastic ways to control teenagers’ social media use. On Nov. 30, Singapore said it would ban the use of smartphones on secondary school campuses. 

Yet governments in Australia and Malaysia argue a full social media ban is necessary to protect youth from online harms such as cyberbullying, sexual exploitation and financial scams.

Tech companies have had varied responses to the social media ban. 

Some, like Meta, have complied, beginning to remove Australian under-16s from Instagram, Threads and Facebook on Dec. 4, a week before the national ban kicks in. The social media giant reaffirmed its commitment to adhere to Australian law, but called for app stores to be held accountable for age verification instead.

“The government should require app stores to verify age and obtain parental approval whenever teens under 16 download apps, eliminating the need for teens to verify their age multiple times across different apps,” a Meta spokesperson said.

Others, like YouTube, sought to be excluded from the ban, with parent company Google even threatening to sue the Australian federal government in July 2025—to no avail.

However, experts told Fortune that these bans may, in fact, be harmful, denying young people the place to develop their own identities and the space to learn healthy digital habits.

“A healthy part of the development process and grappling with the human condition is the process of finding oneself. Consuming cultural material, connecting with others, and finding your community and identity is part of that human experience,” says Andrew Yee, an assistant professor at the Nanyang Technological University (NTU)’s Wee Kim Wee School of Communication and Information.

Social media “allows young people to derive information, gain affirmation and build community,” says Sun Sun Lim, a professor in communications and technology at the Singapore Management University (SMU), who also calls bans “a very rough tool.”

Yee, from NTU, also points out that young people can turn to platforms like YouTube to learn about hobbies that may not be available in their local communities. 

Forcing kids to go “cold turkey” off social media could also make for a difficult transition to the digital world once they are of age, argues Chew Han Ei, a senior research fellow at the Lee Kuan Yew School of Public Policy in the National University of Singapore (NUS).

“The sensible way is to slowly scaffold [social media use], since it’s not that healthy social media usage can be cultivated immediately,” Chew says.

Enforcement

Australia plans to enforce its social media ban by imposing a fine of 49.5 million Australian dollars (US$32.9 million) on social media companies that fail to take steps to bar those under 16 from having accounts on their platforms.

Malaysia has yet to explain how it might enforce its own social media ban, but communications minister Fahmi Fadzil suggested that social media platforms could verify users through government-issued documents like passports. 

Young people, though, may soon figure out how to maintain their access to social media. “Youths are savvy, and I am sure they will find ways to circumvent these,” says Yee of NTU. He adds that young people may migrate to platforms that aren’t traditionally defined as social media, such as gaming sites like Roblox. Other platforms, like YouTube, also don’t require accounts, limiting the efficacy of these bans, he adds.

Forcing social media platforms to collect huge amounts of personal data and government-issued identity documents could also lead to data privacy issues. “It’s very intimate personally identifiable information that’s being collected to verify age—from passports to digital IDs,” Chew, from NUS, says. “Somewhere along the line, a breach will happen.”

Moving towards healthy social media use

Ironically, some experts argue that a ban may absolve social media platforms of responsibility towards their younger users. 

“Social media bans impose an unfair burden on parents to closely supervise their children’s media use,” says Lim of SMU. “As for the tech platform, they can reduce child safety safeguards that make their platforms safer, since now the assumption is that young people are banned from them, and should not have been venturing [onto them] and opening themselves up to risks.”

And rather than allow digital harms to proliferate, social media platforms should be held responsible for ensuring they “contribute to intentional and purposeful use,” argues Yee.

This could mean regulating companies’ use of user interface features like auto-play and infinite scroll, or ensuring algorithmic recommendations are not pushing harmful content to users.

“Platforms profit—lucratively, if I may add—from people’s use, so they have a responsibility to ensure that the product is safe and beneficial for its users,” Yee explains. 

Finally, conversations on safe social media use should center the voices of young people, Yee adds.

“I think we need to come to a consensus as to what a safe and rights-respecting online space is,” he says. “This must include young people’s voices, as policy design should be done in consultation with the people the policy is affecting.”




Why Jerome Powell’s latest rate cut still won’t help you get a lower mortgage rate




For the third meeting in a row, the Federal Reserve cut interest rates—a “hawkish cut” meant to support a softening labor market while signaling a pause ahead. The 0.25-percentage-point move brought the federal funds target range to 3.5% to 3.75%—but economists and housing experts warn it won’t affect mortgage rates the way potential homebuyers were hoping.

Chen Zhao, head of economics research at Redfin, wrote in a Wednesday post that the Fed’s December interest rate cut won’t move mortgage rates “because markets have already priced it in.” 

The Federal Reserve controls the federal funds rate, the rate banks charge each other for overnight loans, which is more closely tied to credit cards, personal loans, and home-equity lines. A standard 30-year mortgage, on the other hand, is a long-term loan, and its pricing is tied more closely to yields on longer-term instruments like the 10-year Treasury and mortgage-backed securities.

“Since this rate cut was no surprise, the markets have taken it in stride,” Melissa Cohn, a 43-year mortgage industry veteran and regional vice president of William Raveis Mortgage, told Fortune. She said incoming economic data, not the cut itself, will be the real turning point: “The future of bond yields and mortgage rates will be determined as new data on jobs and inflation get released.”

The current mortgage rate is 6.3%, according to Mortgage News Daily—much higher than the sub-3% rates pandemic-era homebuyers remember, though still a far cry from the 8% peak of October 2023.
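The gap between those two eras is easy to quantify with the standard fixed-rate amortization formula. The sketch below is illustrative only—the $400,000 loan amount is an assumption, not a figure from the article:

```python
# Monthly payment on a fixed-rate mortgage:
#   M = P * r * (1 + r)^n / ((1 + r)^n - 1)
# where P is the principal, r the monthly rate, and n the number of payments.

def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    n = years * 12
    if annual_rate == 0:
        return principal / n  # zero-interest edge case: repay principal only
    r = annual_rate / 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

loan = 400_000  # hypothetical loan amount
for rate in (0.063, 0.03):  # today's ~6.3% vs. a pandemic-era sub-3% rate
    print(f"{rate:.1%}: ${monthly_payment(loan, rate):,.0f}/month")
```

On those assumptions, the payment comes to about $2,476 a month at 6.3% versus roughly $1,686 at 3%—a difference of nearly $800 a month before taxes and insurance.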

“The committee’s projections and Chair Jerome Powell’s remarks indicate that this will be the last interest cut for a while,” Zhao wrote. “Given the underlying economic fundamentals of 3% inflation coupled with a weakening—but not recessionary—labor market, the Fed is likely to hold steady in the near future.

“Mortgage rates are unlikely to fall or rise by much,” she continued.

How mortgage rates affect housing affordability

Mortgage rates are just one piece of the housing affordability puzzle. They may feel like the major roadblock to buying a home—especially for buyers with fresh memories of the pandemic housing boom—but they are only one factor.

To put it in perspective, Zillow reported earlier this year that not even a 0% mortgage rate would make buying a house affordable in several major U.S. cities.

Let that sink in. 

Even without any interest accrued on a loan, homebuying in those markets would still be out of reach for the typical American. Much of the affordability crisis has to do with home prices, which are more than 50% higher than in 2020—locking new homebuyers out of the market and discouraging current homeowners from selling.

The mortgage rate drop required to make an average home affordable for the typical buyer—to about 4.43%—is “unrealistic,” according to Zillow economic analyst Anushna Prakash.

“It’s unlikely rates will drop to the mid-[4% range] anytime soon,” Arlington, Va.–based real estate agent Philippa Main told Fortune. “And even if they did, housing prices are still at historic highs.” With 11 years of experience, Main is also a licensed mortgage loan officer.
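To see how those thresholds interact, here is a rough affordability check that reuses the monthly_payment helper from the sketch above, applying the common rule of thumb that housing payments should stay under about 30% of gross income. The income and price figures are illustrative assumptions, not Zillow’s data:

```python
# Rough affordability check: the payment must stay under ~30% of gross
# monthly income. Reuses monthly_payment() defined in the earlier sketch.
# All figures are illustrative assumptions, not Zillow's or the article's.

def affordable(price: float, annual_rate: float, annual_income: float,
               down: float = 0.20, max_share: float = 0.30) -> bool:
    payment = monthly_payment(price * (1 - down), annual_rate)
    return payment <= (annual_income / 12) * max_share

income = 80_000  # hypothetical gross household income
for label, price in (("typical market", 420_000), ("high-cost metro", 950_000)):
    for rate in (0.063, 0.0443, 0.0):
        print(f"{label} at {rate:.2%}: affordable={affordable(price, rate, income)}")
```

On these assumptions, dropping to 4.43% does make the typical-market home affordable, but in the high-cost metro even a 0% loan fails the test—the same dynamic behind Zillow’s finding.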

To be sure, some economists see some light at the end of the tunnel for homebuyers plagued by high mortgage rates and home prices.

“For prospective buyers who have been waiting on the sidelines, the housing market is finally starting to listen,” wrote First American chief economist Mark Fleming in an Aug. 29 blog post. First American’s analysis takes into account inflation, and Fleming said: “The price of a house today is not directly comparable to the price of that same house 30 years ago.”




OpenAI debuts GPT-5.2 in effort to silence concerns it is falling behind its rivals




OpenAI, under increasing competitive pressure from Google and Anthropic, has debuted a new AI model, GPT-5.2, that it says beats all existing models by a substantial margin across a wide range of tasks.

The new model, which is being released less than a month after OpenAI debuted its predecessor, GPT-5.1, performed particularly well on a benchmark of complicated professional tasks across a range of “knowledge work”—from law to accounting to finance—as well as on evaluations involving coding and mathematical reasoning, according to data OpenAI released.

Fidji Simo, the former Instacart CEO who now serves as OpenAI’s CEO of applications, told reporters that the model should not be seen as a direct response to Google’s Gemini 3 Pro AI model, which was released last month. That release prompted OpenAI CEO Sam Altman to issue a “code red,” delaying the rollout of several initiatives in order to focus more staff and computing resources on improving its core product, ChatGPT.

“I would say that [the Code Red] helps with the release of this model, but that’s not the reason it is coming out this week in particular, it has been in the works for a while,” she said.

She said the company had been building GPT-5.2 “for many months.” “We don’t turn around these models in just a week. It’s the result of a lot of work,” she said. The model had been known internally by the code name “Garlic,” according to a story in The Information. The day before the model’s release, Altman teased its imminent rollout by posting to social media a video clip of him cooking a dish with a large amount of garlic.

OpenAI executives said the model had been in the hands of “Alpha customers,” who helped test its performance, for “several weeks”—a timeline that would mean the model was completed before Altman’s “code red” declaration.

These testers included legal AI startup Harvey, note-taking app Notion, and file-management software company Box, as well as Shopify and Zoom.

OpenAI said these customers found GPT-5.2 demonstrated a “state of the art” ability to use other software tools to complete tasks, and that it excelled at writing and debugging code.

Coding has become one of the most competitive use cases for AI model deployment within companies. Although OpenAI had an early lead in the space, Anthropic’s Claude model has proved especially popular among enterprises, exceeding OpenAI’s market share according to some figures. OpenAI is no doubt hoping GPT-5.2 will convince customers to turn back to its models for coding.

Simo said the “Code Red” was helping OpenAI focus on improving ChatGPT. “Code Red is really a signal to the company that we want to marshal resources in one particular area, and that’s a way to really define priorities and define things that can be deprioritized,” she said. “So we have had an increase in resources focused on ChatGPT in general.”

The company also said its new model is better than the company’s earlier ones at providing “safe completions”—which it defines as providing users with helpful answers while not saying things that might contribute to or worsen mental health crises.

“On the safety side, as you saw through the benchmarks, we are improving on pretty much every dimension of safety, whether that’s self harm, whether that’s different types of mental health, whether that’s emotional reliance,” Simo said. “We’re very proud of the work that we’re doing here. It is a top priority for us, and we only release models when we’re confident that the safety protocols have been followed, and we feel proud of our work.”

The release of the new model came on the same day a new lawsuit was filed against the company alleging that ChatGPT’s interactions with a psychologically troubled user had contributed to a murder-suicide in Connecticut. The company also faces several other lawsuits alleging ChatGPT contributed to people’s suicides. The company called the Connecticut murder-suicide “incredibly heartbreaking” and said it is continuing to improve “ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support.” 

GPT-5.2 showed a large jump in performance across several benchmark tests of interest to enterprise customers. It met or exceeded human expert performance on a wide range of difficult professional tasks, as measured by OpenAI’s GDPval benchmark, 70.9% of the time. That compares to just 38.8% of the time for GPT-5, a model that OpenAI released in August; 59.6% for Anthropic’s Claude Opus 4.5; and 53.3% for Google’s Gemini 3 Pro.

On the software development benchmark SWE-Bench Pro, GPT-5.2 scored 55.6%, almost 5 percentage points better than its predecessor, GPT-5.1, and more than 12 percentage points better than Gemini 3 Pro.

OpenAI’s Aidan Clark, vice president of research (training), declined to answer questions about exactly what training methods had been used to upgrade GPT-5.2’s performance, although he said that the company had made improvements across the board, including in pretraining, the initial step in creating an AI model.

When Google released its Gemini 3 Pro model last month, its researchers also said the company had made improvements in pretraining as well as post-training. This surprised some in the field who believed that AI companies had largely exhausted the ability to wring substantial improvements out of the pretraining stage of model building, and it was speculated that OpenAI may have been caught off guard by Google’s progress in this area.




OpenAI and Disney just ended the ‘war’ between AI and Hollywood with their $1 billion Sora deal




Disney’s $1 billion investment in OpenAI, announced Thursday morning—and its decision to let more than 200 Disney, Pixar, Marvel, and Star Wars characters appear inside the Sora video generator—is more than a licensing deal. According to copyright and AI law expert Matthew Sag, who teaches at Emory University’s law school, the deal marks a strategic realignment that could reshape how Hollywood protects its IP in the face of AI-generated content that threatens to leech off its legally protected magic.

“AI companies are either in a position where they need to aggressively filter user prompts and model outputs to make sure that they don’t accidentally show Darth Vader, or strike deals with the rights holders to get permission to make videos and images of Darth Vader,” Sag told Fortune. “The licensing strategy is much more of a win-win.” 
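To make the “filter” half of that trade-off concrete, here is a minimal sketch of prompt screening against a blocklist of protected character names. The list and the regex matching are purely illustrative—production systems rely on trained classifiers over both prompts and generated frames, not substring checks:

```python
import re

# Hypothetical blocklist of protected characters; a real deployment would
# cover thousands of names, aliases, and misspellings.
PROTECTED = {"darth vader", "elsa", "snoopy", "homer simpson"}

def screen_prompt(prompt: str) -> bool:
    """Return True if generation may proceed, False if it must be blocked."""
    text = prompt.lower()
    return not any(re.search(rf"\b{re.escape(name)}\b", text) for name in PROTECTED)

print(screen_prompt("a space knight in black armor"))  # True: allowed through
print(screen_prompt("Darth Vader baking cookies"))     # False: blocked as unlicensed
```

A licensing deal like Disney’s effectively moves those names off the blocklist for the licensed platform: the same prompt that filtering must block becomes permitted, revenue-generating output.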

The three-year agreement gives OpenAI the right to ingest hundreds of Disney-owned characters into Sora and ChatGPT Image. Disney will also receive equity warrants and become a major OpenAI customer, while deploying ChatGPT internally.

Sag said the deal itself will be a kind of “revenue-sharing.”

“OpenAI hasn’t figured out the revenue model,” Sag said. “So I think making this just an investment deal, in some ways, simplifies it. For Disney … [OpenAI] will figure out a way to make this profitable at some point, and [Disney will] get a cut of that.”

Why this deal matters: the ‘Snoopy problem’

For more than a year, the biggest legal threat to large-scale generative AI has centered on what Sag calls the “Snoopy problem”: It is extremely difficult to train powerful generative models without some degree of memorization, and copyrightable characters are uniquely vulnerable because copyright protects them in the abstract.

Sag was careful to outline a key distinction. AI companies aren’t licensing the right to train on copyrighted works; they’re licensing the right to create outputs that would otherwise be infringing.

That’s because the case for AI companies training their models on unlicensed content is “very strong,” Sag said. Two recent court rulings involving Anthropic and Meta have strengthened those arguments.  

The real stumbling block, Sag said, has always been outputs, not training. If a model can accidentally produce a frame that looks too much like Darth Vader, Homer Simpson, Snoopy, or Elsa, the fair use defense begins to fray.

“If you do get too much memorization, if that memorization finds its way into outputs, then your fair-use case begins to just crumble,” Sag said.

While it’s impossible to license enough text to train an LLM (“that would take a billion” deals, Sag said), it is possible to build image or video models entirely from licensed data if you have the right partners. This is why deals like Disney’s are crucial: They turn previously illegal outputs into legal ones, irrespective of whether the training process itself qualifies as fair use.

“The limiting principle is going to be essentially about whether—in their everyday operation—these models reproduce substantial portions of works from their training data,” Sag said.

The deal, Sag says, is also a hedge against Hollywood’s lawsuits. The announcement is “very bad” for Midjourney, which Disney is suing for copyright infringement, because it holds up OpenAI’s licensing deal as the “responsible” benchmark for AI firms.

This is also a signal about the future of AI data

Beyond copyright risk, the deal exposes another trend: the drying up of high-quality, unlicensed data on the public internet.

In a blog post, Sag wrote: “The low-hanging fruit of the public internet has been picked. To get better, companies like OpenAI are going to need access to data that no one else has. Google has YouTube; OpenAI now has the Magic Kingdom.”

This is the core of what he calls the “data scarcity thesis.” OpenAI’s next leap in model quality may require exclusive content partnerships, as opposed to more scraping. 

“By entangling itself with the world’s premier IP holder, OpenAI makes itself indispensable to the very industry that threatened to sue it out of existence,” Sag wrote. 

AI and Hollywood have spent three years locked in a cold war over training data, likeness rights and infringement. With Disney’s $1 billion investment, that era appears to be ending.

“This is the template for the future,” Sag wrote. “We are moving away from total war between AI and content, toward a negotiated partition of the world.”


