Business

Former Russian tycoon says Instagram post cost him $9 billion: His bank was sold for 3% of its value

Former Russian banking tycoon Oleg Tinkov says a single Instagram post condemning the war in Ukraine cost him nearly $9 billion, after he was forced to sell his stake in his bank for a fraction of its real value. He described the episode as a “hostage” situation that shows how dissenting billionaires are brought to heel in Vladimir Putin’s Russia.

Tinkov, the founder of Tinkoff Bank, was once celebrated as one of Russia’s wealthiest bankers. That status changed dramatically in April 2022, when he used Instagram to denounce the war as “insane” and to criticize Russia’s military as poorly prepared and riddled with corruption. As CNBC reported at the time, Tinkov claimed 90% of Russians opposed the war, and the remaining 10% were “morons.” He urged an immediate and “face-saving” end to the war.

Tinkov told the BBC recently that within a day of that post, senior executives at his bank received a call from officials linked to the Kremlin, delivering a stark ultimatum. Either Tinkov’s stake would be sold and his name scrubbed from the brand, or the bank—then one of Russia’s largest lenders—would be nationalized.

A forced fire sale

Tinkov said that what followed was not a negotiation but coercion under threat. He claimed he was told to accept whatever price was offered for his roughly 35% stake in TCS Group, the owner of Tinkoff Bank, or risk losing everything. “I couldn’t negotiate the price. I was like a hostage,” he told The New York Times. He ultimately sold the stake in April 2022, shortly after his Instagram post.

Within a week of this conversation, Tinkov said, a firm linked to metals magnate Vladimir Potanin, one of Russia’s richest men and a key supplier of nickel used in military hardware, stepped in to buy the stake. Tinkov told the BBC that the deal valued his holding at just about 3% of its true market worth, wiping out almost $9 billion of the wealth he had built over decades in business.

Exile and erasure

After the sale, Tinkov left Russia, eventually renouncing his Russian citizenship and becoming one of the few high-profile businessmen to publicly break with the Kremlin over the war. He alleged that the campaign against him extended beyond the balance sheet, including pressure to remove his name from the bank brand and efforts to erase his role in building the institution that once carried it.

In his telling, the episode shows how quickly loyalty is enforced when oligarchs step out of line. Public criticism of the invasion, even from a figure whose bank helped power Russia’s consumer boom, was treated as a direct challenge to the state in wartime. There are numerous examples from the recent past, including the erstwhile oil tycoon Mikhail Khodorkovsky, formerly Russia’s richest man, who spent 10 years in jail after launching a pro-democracy organization in 2001. Like Tinkov, he has since become an exile, residing in London.

For his part, Tinkov has taken a few years to retrench and is newly visible in 2025, recently emerging as a backer of Plata, a Mexican fintech led by former Tinkoff Bank executives.

But the former oligarch’s experience sits within a wider pattern described by analysts who say the Kremlin now relies on a mix of fear and opportunity to keep Russia’s wealthy elite compliant. Sanctions, war-time controls and the threat of asset seizures have made fortunes inside Russia highly contingent on political loyalty, while the departure of Western firms has opened up bargain acquisitions for trusted allies.

The war in Ukraine, meanwhile, has rumbled on, with President Trump holding meetings and calls with both Putin and Ukrainian President Volodymyr Zelensky. After the 2025 Christmas holiday, Trump met with Zelensky at his Mar-a-Lago resort in Florida while fielding phone calls with Putin, claiming a peace deal is “closer than ever,” more than three years after Tinkov made his fateful Instagram post.





Gen Z can skip college, and still earn big: Here are the top 15 highest-paying jobs that don’t require a degree

Gen Z has been taking a harder look at the American Dream of pursuing a four-year degree as tuition costs have skyrocketed and AI has begun taking over white-collar jobs. Luckily, they have an out: there are many careers that don’t require a bachelor’s degree and can still pay six-figure salaries.

The top high-wage job that doesn’t require a four-year degree and shows strong job growth may be unexpected: elevator and escalator installer and repairer. The role has a median annual salary of $106,580 and requires only a high school diploma, a completed apprenticeship, and a state certification, according to a new report from Resume Genius analyzing U.S. Bureau of Labor Statistics data.

The study placed transportation, storage, and distribution managers in second; by pursuing an entry-level role in logistics, new hires can put themselves on track to earn $102,010 per year. Flight attendants, chefs, athletes, and criminal investigators also made the list. 

Young people have been told that going to college is necessary for success, but Gen Zers wanting to skip costly degrees don’t have to sacrifice their careers. The report shows they have a litany of choices, from six-figure blue-collar jobs to cushy office roles.

Resume Genius career expert Eva Chan told CNBC that “there’s no one way to get a high-paying job,” adding that all the ranked roles “have some degree of training, some have schooling, but they’re all very attainable without a degree.”

The top 15 high-paying jobs that don’t require four-year degrees

The top 15 highest-paying jobs that pay above the U.S. median wage, have positive projected job growth, and don’t require a four-year degree, according to Resume Genius:

  1. Elevator and escalator installer and repairer (Median annual salary: $106,580)
  2. Transportation, storage, and distribution manager (Median annual salary: $102,010)
  3. Electrical power-line installer and repairer (Median annual salary: $92,560)
  4. Aircraft and avionics equipment mechanic and technician (Median annual salary: $79,140)
  5. Detective and criminal investigator (Median annual salary: $77,270)
  6. Locomotive engineer (Median annual salary: $75,680)
  7. Wholesale and manufacturing sales representative (Median annual salary: $74,100)
  8. Flight attendant (Median annual salary: $67,130)
  9. Property, real estate, and community association manager (Median annual salary: $66,700)
  10. Water transportation worker (Median annual salary: $66,490)
  11. Food service manager (Median annual salary: $65,310)
  12. Heavy vehicle and mobile equipment service technician (Median annual salary: $62,740)
  13. Athlete and sports competitor (Median annual salary: $62,360)
  14. Chef and head cook (Median annual salary: $60,990)
  15. Insurance sales agent (Median annual salary: $60,370)




OpenAI is hiring a head of preparedness, who will earn $555,000

OpenAI is looking for a new employee to help address the growing dangers of AI, and the tech company is willing to spend more than half a million dollars to fill the role.

OpenAI is hiring a “head of preparedness” to address risks associated with the technology, such as threats to user mental health and cybersecurity, CEO Sam Altman wrote in an X post on Saturday. The position will pay $555,000 per year, plus equity, according to the job listing.

“This will be a stressful job and you’ll jump into the deep end pretty much immediately,” Altman said.

OpenAI’s push to hire a safety executive comes amid growing corporate concern about AI’s risks to operations and reputations. A November analysis of annual Securities and Exchange Commission filings by the financial data and analytics company AlphaSense found that in the first 11 months of the year, 418 companies worth at least $1 billion cited AI-related reputational harm among their risk factors. These risks include AI datasets that surface biased information or jeopardize security. Mentions of AI-related reputational harm increased 46% from 2024, according to the analysis.

“Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges,” Altman said in the social media post.

“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” he added.

OpenAI’s previous head of preparedness, Aleksander Madry, was reassigned last year to a role focused on AI reasoning, with AI safety remaining a related part of the job.

OpenAI’s efforts to address AI dangers

Founded in 2015 as a nonprofit with the intention to use AI to improve and benefit humanity, OpenAI has, in the eyes of some of its former leaders, struggled to prioritize its commitment to safe technology development. The company’s former vice president of research, Dario Amodei, along with his sister Daniela Amodei and several other researchers, left OpenAI in 2020, in part because of concerns the company was prioritizing commercial success over safety. Amodei founded Anthropic the following year.

OpenAI has faced multiple wrongful death lawsuits this year, alleging ChatGPT encouraged users’ delusions, and claiming conversations with the bot were linked to some users’ suicides. A New York Times investigation published in November found nearly 50 cases of ChatGPT users having mental health crises while in conversation with the bot. 

OpenAI said in August its safety features could “degrade” following long conversations between users and ChatGPT, but the company has made changes to improve how its models interact with users. It created an eight-person council earlier this year to advise the company on guardrails to support users’ wellbeing and has updated ChatGPT to better respond in sensitive conversations and increase access to crisis hotlines. At the beginning of the month, the company announced grants to fund research about the intersection of AI and mental health.

The tech company has also acknowledged that it needs stronger safety measures, saying in a blog post this month that some of its upcoming models could present a “high” cybersecurity risk as AI rapidly advances. The company is taking steps to mitigate those risks, such as training models not to respond to requests that compromise cybersecurity and refining its monitoring systems.

“We have a strong foundation of measuring growing capabilities,” Altman wrote on Saturday. “But we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits.”




YouTube’s cofounder and former tech boss doesn’t want his kids to watch short videos

  • YouTube cofounder Steve Chen is one of the latest tech trailblazers to warn against social media’s impact on kids. Chen warned in a talk that short-form video “equates to shorter attention spans” and said he wouldn’t want his own kids to exclusively consume this type of content. Companies that distribute short-form video (which includes the company he cofounded, YouTube) should add safeguards for younger users, he added.

A YouTube cofounder who helped pave the way for our modern, content-obsessed world is the latest tech whiz to come out against short-form videos because of their effects on kids. 

Steve Chen, who served as YouTube’s chief technology officer before the company was acquired by Google in 2006, railed against the TikTok-ification of online life in a talk earlier this year at Stanford Graduate School of Business.

“I think TikTok is entertainment, but it’s purely entertainment,” Chen said during the talk, which was published on YouTube Friday. “It’s just for that moment. Just shorter-form content equates to shorter attention spans.”

Chen, who has two children with his wife, Jamie Chen, said he wouldn’t want his kids to consume only short-form content and then be unable to watch anything longer than 15 minutes. He said he knows other parents who make their kids watch longer videos without the eye-catching colors and gimmicks that hook especially younger viewers, a strategy he claims works well.

“If they don’t get exposure to the short-form content right away, then they’re still happy with that other type of content that they’re watching,” he said. 

Many companies have had to rush to offer short-form content after the rise of TikTok, he said, but these companies now have to balance their motivations for monetization and attracting users’ attention with content that’s “actually useful.” 

Companies that distribute short-form video, which includes his former company YouTube, could face problems with addictiveness. These companies should add safeguards for kids on short-form content, such as age restrictions for apps and limits on how long some users can spend in them, he said.

Chen joins fellow tech trailblazers Sam Altman of OpenAI and Elon Musk in sounding the alarm about social media’s impact on children. In a podcast interview, Altman specifically called out social media scrolling and the “dopamine hit” of short-form video for “probably messing with kids’ brain development in a super deep way.”

Musk, who owns the social network X (formerly Twitter), said in 2023 that he didn’t place any restrictions on his children’s social media use, but added this “might have been a mistake,” and encouraged parents to take a more active role in their kids’ social media habits.

“I think, probably, I would limit social media a bit more than I have in the past and just take note of what they’re watching, because I think at this point they’re being programmed by some social media algorithms, which you may or may not agree with,” Musk said.

A version of this story was originally published on Fortune.com on July 29, 2025.





Copyright © Miami Select.