Connect with us

Business

The University of Oklahoma fired an instructor after she failed a psychology student who cited the Bible in an essay on gender

Published

on



The University of Oklahoma has fired an instructor who was accused by a student of religious discrimination over a failing grade on a psychology paper in which she cited the Bible and argued that promoting a “belief in multiple genders” was “demonic.”

The university said in a statement posted Monday on X that its investigation found the graduate teaching assistant had been “arbitrary” in giving 20-year-old junior Samantha Fulnecky zero points on the assignment. The university declined to comment beyond its statement, which said the instructor had been removed from teaching.

Through her attorney, the instructor, Mel Curth, denied Tuesday that she had “engaged in any arbitrary behavior regarding the student’s work.” The attorney, Brittany Stewart, said in a statement emailed to The Associated Press that Curth is “considering all of her legal remedies.”

Conservative groups, commentators and others quickly made Fulnecky’s failing grade an online cause, highlighting her argument that she’d been punished for expressing conservative Christian views. Her case became a flashpoint in the ongoing debate over academic freedom on college campuses as President Donald Trump pushes to end diversity, equity and inclusion initiatives, and restrict how campuses discuss race, gender and sexuality.

Fulnecky appealed her grade on the assignment, which was worth 3% of the final grade in the class, and the university said the assignment would not count. It also placed Curth on leave, and Oklahoma’s conservative Republican governor, Kevin Stitt, declared the situation “deeply concerning.”

“The University of Oklahoma believes strongly in both its faculty’s rights to teach with academic freedom and integrity and its students’ right to receive an education that is free from a lecturer’s impermissible evaluative standards,” the university’s statement said. “We are committed to teaching students how to think, not what to think.”

Universities under fire

A law approved this year by Oklahoma’s Republican-dominated Legislature and signed by Stitt prohibits state universities from using public funds to finance DEI programs or positions or mandating DEI training. However, the law says it does not apply to scholarly research or “the academic freedom of any individual faculty member.”

Home telephone listings for Fulnecky in the Springfield, Missouri, area had been disconnected, and her mother — an attorney, podcaster and radio host — did not immediately respond Tuesday to a Facebook message seeking comment about the university’s action.

Fulnecky’s failing grade came in an assignment for a psychology class on lifespan development. Curth directed students to write a 650-word response to an academic study that examined whether conformity with gender norms was associated with popularity or bullying among middle school students.

Fulnecky wrote that she was frustrated by the premise of the assignment because she does not believe that there are more than two genders based on her understanding of the Bible, according to a copy of her essay provided to The Oklahoman.

“Society pushing the lie that there are multiple genders and everyone should be whatever they want to be is demonic and severely harms American youth,” she wrote, adding that it would lead society “farther from God’s original plan for humans.”

In feedback obtained by the newspaper, Curth said the paper did “not answer the questions for the assignment,” contradicted itself, relied on “personal ideology” over evidence and “is at times offensive.”

“Please note that I am not deducting points because you have certain beliefs,” Curth wrote.

This story was originally featured on Fortune.com



Source link

Continue Reading

Business

OpenAI says prompt injections that can trick AI browsers may never be fully ‘solved’

Published

on



OpenAI has said that some attack methods against AI browsers like ChatGPT Atlas are likely here to stay, raising questions about whether AI agents can ever safely operate across the open web. 

The main issue is a type of attack called “prompt injection,” where hackers hide malicious instructions in websites, documents, or emails that can trick the AI agent into doing something harmful. For example, an attacker could embed hidden commands in a webpage—perhaps in text that is invisible to the human eye but looks legitimate to an AI—that override a user’s instructions and tell an agent to share a user’s emails, or drain someone’s bank account.

Following the launch of OpenAI’s ChatGPT Atlas browser in October, several security researchers demonstrated how a few words hidden in a Google Doc or clipboard link could manipulate the AI agent’s behavior. Brave, an open-source browser company that previously disclosed a flaw in Perplexity’s Comet browser, also published research warning that all AI-powered browsers are vulnerable to attacks like indirect prompt injection.

“Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully ‘solved,’” OpenAI wrote in a blog post Monday, adding that “agent mode” in ChatGPT Atlas “expands the security threat surface.”

OpenAI said that the aim was for users to “be able to trust a ChatGPT agent,” with Chief Information Security Officer Dane Stuckey adding that the way the company hopes to get there is by “investing heavily in automated red teaming, reinforcement learning, and rapid response loops to stay ahead of our adversaries.”

“We’re optimistic that a proactive, highly responsive rapid response loop can continue to materially reduce real-world risk over time,” the company said.

Fighting AI with AI

OpenAI’s approach to the problem is to use an AI-powered attacker of its own—essentially a bot trained through reinforcement learning to act like a hacker seeking ways to sneak malicious instructions to AI agents. The bot can test attacks in simulation, observe how the target AI would respond, then refine its approach and try again repeatedly.

“Our [reinforcement learning]-trained attacker can steer an agent into executing sophisticated, long-horizon harmful workflows that unfold over tens (or even hundreds) of steps,” OpenAI wrote. “We also observed novel attack strategies that did not appear in our human red teaming campaign or external reports.”

However, some cybersecurity experts are skeptical that OpenAI’s approach can address the fundamental problem. 

“What concerns me is that we’re trying to retrofit one of the most security-sensitive pieces of consumer software with a technology that’s still probabilistic, opaque, and easy to steer in subtle ways,” Charlie Eriksen, a security researcher at Aikido Security, told Fortune.

“Red-teaming and AI-based vulnerability hunting can catch obvious failures, but they don’t change the underlying dynamic. Until we have much clearer boundaries around what these systems are allowed to do and whose instructions they should listen to, it’s reasonable to be skeptical that the tradeoff makes sense for everyday users right now,” he said. “I think prompt injection will remain a long-term problem … You could even argue that this is a feature, not a bug.”

A cat-and-mouse game

Security researchers also previously told Fortune that while a lot of cybersecurity risks were essentially a continuous cat-and-mouse game, the deep access that AI agents need—such as users’ passwords and permission to take actions on a user’s behalf—posed such a vulnerable threat opportunity it was unclear if their advantages were worth the risk. 

George Chalhoub, assistant professor at UCL Interaction Centre, said that the risk is severe because prompt injection “collapses the boundary between the data and the instructions,” potentially turning an AI agent “from a helpful tool to a potential attack vector against the user” that could extract emails, steal personal data, or access passwords.

“That’s what makes AI browsers fundamentally risky,” Eriksen said. “We’re delegating authority to a system that wasn’t designed with strong isolation or a clear permission model. Traditional browsers treat the web as untrusted by default. Agentic browsers blur that line by allowing content to shape behavior, not just be displayed.”

OpenAI recommends users give agents specific instructions rather than providing broad access with vague directions like “take whatever action is needed.” The browser also has extra security features such as “logged out mode”— which allow a users to use it without sharing passwords— and “Watch mode”—which is a security feature that requires a user to explicitly confirm sensitive actions such as sending messages or making payments.  

“Wide latitude makes it easier for hidden or malicious content to influence the agent, even when safeguards are in place,” OpenAI said in the blogpost.



Source link

Continue Reading

Business

Down Arrow Button Icon

Published

on



Powerball’s $1.7 billion jackpot may create a new ultrarich winner, but financial planners say what happens after the drawing can matter more than the winning numbers. They describe a consistent set of mistakes that can quietly turn a once‑in‑a‑lifetime windfall into a long, public mess.

Rushing big decisions

Many experts warn that acting too quickly—quitting a job, claiming the prize immediately, or committing to big purchases—is one of the most damaging errors. Articles in outlets including CNBC, NerdWallet, and USA Today emphasize slowing down, taking time to process the shock, and making no irreversible decisions until a plan is in place.

A related misstep is choosing between the lump sum and annuity on instinct instead of analysis, even though that decision locks in tax timing, investment options, and how long the money is likely to last. Financial writers note that many winners default to the lump sum without modeling scenarios with professionals and understanding that, after taxes, the headline $1.7 billion quickly shrinks.

Going public and losing privacy

Coverage in CNBC highlights that bragging about your win on social media or talking openly about it can invite lawsuits, scams, and constant money requests. Advisors repeatedly stress “keep it quiet” and, where allowed, explore ways to claim through a trust or remain anonymous to avoid becoming a target.​​

Experts also point out that winners often underestimate the emotional toll of overnight fame, which can strain marriages, friendships, and even personal safety if boundaries are not set early.

Skipping a professional team

A recurring theme across NerdWallet, Business Insider, and other outlets is that trying to DIY a nine‑ or 10‑figure fortune is a costly mistake. Financial planners urge winners to assemble a small, vetted team—typically an attorney, a tax professional, and a fiduciary advisor with experience in sudden wealth—before claiming the prize.

Winners also get into trouble when they rely on friends or relatives who “know about money” instead of credentialed experts, a pattern cited in guidance from Northwestern Mutual and others on working with lottery clients.

Overspending and assuming the money is infinite

Business Insider’s reporting on advisors who work with lottery winners notes that many clients behave as if the balance can’t be depleted, only to burn through wealth with multiple mansions, jets, and speculative investments. Experts describe unchecked lifestyle inflation and “spend, spend, spend” behavior as one of the most common paths to regret, especially for lump‑sum recipients.

Financial outlets also emphasize that winners often fail to set a sustainable withdrawal rate or diversify, ignoring the reality that the money is finite and that even ultra‑large fortunes can erode through taxes, market volatility, and ongoing costs like property taxes and maintenance.

Poor boundaries with family, friends, and causes

Advisors interviewed by Northwestern Mutual and others say another frequent mistake is giving without a plan: ad hoc loans, endless gifts, and open‑ended promises that create resentment when the answer finally becomes “no.” They suggest that winners instead define a clear gifting and philanthropy framework upfront—including who gets what and how much is reserved for charity—to avoid both over‑giving and relationship damage.

Experts further warn that feeling obligated to become a one‑person safety net or charity can derail long‑term goals and quickly consume capital, especially when requests are amplified by public attention.

Neglecting long‑term planning and purpose

Guides from major financial firms emphasize that many winners focus on immediate fantasies—houses, cars, travel—and neglect estate planning, debt strategy, and long‑term investing. Advisors recommend tackling basics like wills, trusts, and tax‑efficient structures early, so the windfall will benefit multiple generations, if desired.

Several profiles of past winners also point to a subtler mistake: not thinking about life after the headlines, which can leave people isolated, directionless, or vulnerable to bad ideas when the novelty fades. For the future holder of the $1.7 billion ticket, experts suggest that pairing technical planning with a clear sense of purpose could be the difference between a brief lucky streak and durable, generational wealth.

For this story, Fortune journalists used generative AI as a research tool. An editor verified the accuracy of the information before publishing. 



Source link

Continue Reading

Business

Advocacy group slams Trump’s plan to garnish wages of student loan borrowers in default

Published

on



The Trump administration said on Tuesday that it will begin garnishing the wages of student loan borrowers who are in default early next year.

The department said it will send notices to approximately 1,000 borrowers the week of January 7, with more notices to come at an increasing scale each month.

Millions of borrowers are considered in default, meaning they are 270 days past due on their payments. The department must give borrowers 30 days notice before their wages can be garnished.

The department said it will begin collection activities, “only after student and parent borrowers have been provided sufficient notice and opportunity to repay their loans.”

In May, the Trump administration ended the pandemic-era pause on student loan payments, beginning to collect on defaulted debt through withholding tax refunds and other federal payments to borrowers.

The move ended a period of leniency for student loan borrowers. Payments restarted in October of 2023, but the Biden administration extended a grace period of one year. Since March 2020, no federal student loans had been referred for collection, including those in default, until the Trump administration’s changes earlier this year.

The Biden administration tried multiple times to give broad forgiveness to student loans, but those efforts were eventually stopped by courts.

Persis Yu, deputy executive director for the Student Borrower Protection Center, criticized the decision to begin garnishing wages, and said the department had failed to sufficiently help borrowers find affordable payment options.

“At a time when families across the country are struggling with stagnant wages and an affordability crisis, this administration’s decision to garnish wages from defaulted student loan borrowers is cruel, unnecessary, and irresponsible,” Yu said in a statement. “As millions of borrowers sit on the precipice of default, this Administration is using its self-inflicted limited resources to seize borrowers’ wages instead of defending borrowers’ right to affordable payments.”


The Associated Press’ education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.



Source link

Continue Reading

Trending

Copyright © Miami Select.