

ChatGPT bans evolve into ‘AI literacy’ as colleges scramble to answer the question: ‘what is cheating?’




The book report is now a thing of the past. Take-home tests and essays are becoming obsolete.

Student use of artificial intelligence has become so prevalent, high school and college educators say, that to assign writing outside of the classroom is like asking students to cheat.

“The cheating is off the charts. It’s the worst I’ve seen in my entire career,” says Casey Cuny, who has taught English for 23 years. Educators are no longer wondering if students will outsource schoolwork to AI chatbots. “Anything you send home, you have to assume is being AI’ed.”

The question now is how schools can adapt, because many of the teaching and assessment tools that have been used for generations are no longer effective. As AI technology rapidly improves and becomes more entwined with daily life, it is transforming how students learn and study and how teachers teach, and it’s creating new confusion over what constitutes academic dishonesty.

“We have to ask ourselves, what is cheating?” says Cuny, a 2024 recipient of California’s Teacher of the Year award. “Because I think the lines are getting blurred.”

Cuny’s students at Valencia High School in Southern California now do most writing in class. He monitors student laptop screens from his desktop, using software that lets him “lock down” their screens or block access to certain sites. He’s also integrating AI into his lessons and teaching students how to use AI as a study aid “to get kids learning with AI instead of cheating with AI.”

In rural Oregon, high school teacher Kelly Gibson has made a similar shift to in-class writing. She is also incorporating more verbal assessments to have students talk through their understanding of assigned reading.

“I used to give a writing prompt and say, ‘In two weeks, I want a five-paragraph essay,’” says Gibson. “These days, I can’t do that. That’s almost begging teenagers to cheat.”

Take, for example, a once typical high school English assignment: Write an essay that explains the relevance of social class in “The Great Gatsby.” Many students say their first instinct is now to ask ChatGPT for help “brainstorming.” Within seconds, ChatGPT yields a list of essay ideas, plus examples and quotes to back them up. The chatbot ends by asking if it can do more: “Would you like help writing any part of the essay? I can help you draft an introduction or outline a paragraph!”

Students are uncertain when AI usage is out of bounds

Students say they often turn to AI with good intentions for things like research, editing or help reading difficult texts. But AI offers unprecedented temptation, and it’s sometimes hard to know where to draw the line.

College sophomore Lily Brown, a psychology major at an East Coast liberal arts school, relies on ChatGPT to help outline essays because she struggles to put the pieces together herself. ChatGPT also helped her through a freshman philosophy class, where assigned reading “felt like a different language” until she read AI summaries of the texts.

“Sometimes I feel bad using ChatGPT to summarize reading, because I wonder, is this cheating? Is helping me form outlines cheating? If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?”

Her class syllabi say things like: “Don’t use AI to write essays and to form thoughts,” she says, but that leaves a lot of gray area. Students say they often shy away from asking teachers for clarity because admitting to any AI use could flag them as cheaters.

Schools tend to leave AI policies to teachers, which often means that rules vary widely within the same school. Some educators, for example, welcome the use of Grammarly.com, an AI-powered writing assistant, to check grammar. Others forbid it, noting the tool also offers to rewrite sentences.

“Whether you can use AI or not depends on each classroom. That can get confusing,” says Valencia 11th grader Jolie Lahey. She credits Cuny with teaching her sophomore English class a variety of AI skills like how to upload study guides to ChatGPT and have the chatbot quiz them, and then explain problems they got wrong.

But this year, her teachers have strict “No AI” policies. “It’s such a helpful tool. And if we’re not allowed to use it, that just doesn’t make sense,” Lahey says. “It feels outdated.”

Schools are introducing guidelines, gradually

Many schools initially banned use of AI after ChatGPT launched in late 2022. But views on the role of artificial intelligence in education have shifted dramatically. The term “AI literacy” has become a buzzword of the back-to-school season, with a focus on how to balance the strengths of AI with its risks and challenges.

Over the summer, several colleges and universities convened their AI task forces to draft more detailed guidelines or provide faculty with new instructions.

The University of California, Berkeley emailed all faculty new AI guidance that instructs them to “include a clear statement on their syllabus about course expectations” around AI use. The guidance offered language for three sample syllabus statements — for courses that require AI, ban AI in and out of class, or allow some AI use.

“In the absence of such a statement, students may be more likely to use these technologies inappropriately,” the email said, stressing that AI is “creating new confusion about what might constitute legitimate methods for completing student work.”

Carnegie Mellon University has seen a huge uptick in academic responsibility violations due to AI, but often students aren’t aware they’ve done anything wrong, says Rebekah Fitzsimmons, chair of the AI faculty advising committee at the university’s Heinz College of Information Systems and Public Policy.

For example, one student who is learning English wrote an assignment in his native language and used DeepL, an AI-powered translation tool, to translate his work to English. But he didn’t realize the platform also altered his language, which was flagged by an AI detector.

Enforcing academic integrity policies has become more complicated, since use of AI is hard to spot and even harder to prove, Fitzsimmons said. Faculty are allowed flexibility when they believe a student has unintentionally crossed a line, but are now more hesitant to point out violations because they don’t want to accuse students unfairly. Students worry that if they are falsely accused, there is no way to prove their innocence.

Over the summer, Fitzsimmons helped draft detailed new guidelines for students and faculty that strive to create more clarity. Faculty have been told a blanket ban on AI “is not a viable policy” unless instructors make changes to the way they teach and assess students. A lot of faculty are doing away with take-home exams. Some have returned to pen and paper tests in class, she said, and others have moved to “flipped classrooms,” where homework is done in class.

Emily DeJeu, who teaches communication courses at Carnegie Mellon’s business school, has eliminated writing assignments as homework and replaced them with in-class quizzes done on laptops in “a lockdown browser” that blocks students from leaving the quiz screen.

“To expect an 18-year-old to exercise great discipline is unreasonable,” DeJeu said. “That’s why it’s up to instructors to put up guardrails.”

___

The Associated Press’ education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.




Databricks CEO Ali Ghodsi says company will be worth $1 trillion by doing these three things




Ali Ghodsi, the CEO and cofounder of data intelligence company Databricks, is betting his privately held startup can be the latest addition to the trillion-dollar valuation club.

In August, Ghodsi told the Wall Street Journal that he believed Databricks, which is reportedly in talks to raise funding at a $134 billion valuation, had “a shot to be a trillion-dollar company.” At Fortune’s Brainstorm AI conference in San Francisco on Tuesday, he explained how it would happen, laying out a “trifecta” of growth areas to ignite the company’s next leg of growth.

The first is entering the transactional database market, the traditional territory of large enterprise players like Oracle, which Ghodsi said has remained largely “the same for 40 years.” Earlier this year, Databricks launched a database offering called Lakebase, which aims to combine the capabilities of traditional transactional databases with modern data lake storage, in an attempt to capture some of this market.

The company is also seeing growth driven by the rise of AI-powered coding. “Over 80% of the databases that are being launched on Databricks are not being launched by humans, but by AI agents,” Ghodsi said. As developers use AI tools for “vibe coding”—rapidly building software with natural language commands—those applications automatically need databases, and Ghodsi said they’re defaulting to Databricks’ platform.

“That’s just a huge growth factor for us. I think if we just did that, we could maybe get all the way to a trillion,” he said.

The second growth area is Agentbricks, Databricks’ platform for building AI agents that work with proprietary enterprise data.

“It’s a commodity now to have AI that has general knowledge,” Ghodsi said, but “it’s very elusive to get AI that really works and understands that proprietary data that’s inside enterprise.” He pointed to the Royal Bank of Canada, which built AI agents for equity research analysts, as an example. Ghodsi said these agents were able to automatically gather earnings calls and company information to assemble research reports, reducing “many days’ worth of work down to minutes.”

And finally, the third piece of Ghodsi’s puzzle involves building applications on top of this infrastructure, with developers using AI tools to quickly build apps that run on Lakehouse and are powered by AI agents. “To get the trifecta is also to have apps on top of this. Now you have apps that are vibe coded with the database, Lakehouse, and with agents,” Ghodsi said. “Those are three new vectors for us.”

Ghodsi did not provide a timeframe for attaining the trillion-dollar goal. Currently, only a handful of companies have achieved the milestone, all as publicly traded companies. In the tech industry, only giants like Apple, Microsoft, Nvidia, Alphabet, Amazon, and Meta have managed to cross the trillion-dollar threshold.

Reaching this level would require Databricks to grow its valuation roughly sevenfold from its current reported level. Part of that journey will likely include the company’s expected IPO, Ghodsi said; Databricks is widely expected to go public sometime in early 2026.

“There are huge advantages and pros and cons. That’s why we’re not super religious about it,” Ghodsi said when asked about a potential IPO. “We will go public at some point. But to us, it’s not a really big deal.”

Could the company IPO next year? Maybe, replied Ghodsi.




New contract shows Palantir working on tech platform for another federal agency that works with ICE




Palantir, the artificial intelligence and data analytics company, has quietly started working on a tech platform for a federal immigration agency that has referred dozens of individuals to U.S. Immigration and Customs Enforcement for potential enforcement since September.

The U.S. Citizenship and Immigration Services agency—which handles services including citizenship applications, family immigration, adoptions, and work permits for non-citizens—started the contract with Palantir at the end of October, and is paying the data analytics company to implement “Phase 0” of a “vetting of wedding-based schemes,” or “VOWS,” platform, according to the federal contract, which was posted to a U.S. government website and reviewed by Fortune.

The contract is small—less than $100,000—and details of what exactly the new platform entails are thin. The contract itself offers little beyond the general description of the platform (“vetting of wedding-based schemes”) and an estimated completion date of Dec. 9. Palantir declined to comment on the contract or the nature of the work, and USCIS did not respond to requests for comment for this story.

But the contract is notable nonetheless, as it marks the beginning of a new relationship between USCIS and Palantir, which has had longstanding contracts with ICE, another agency of the Department of Homeland Security, since at least 2011. The description suggests that the “VOWS” platform may well be focused on marriage fraud, in line with USCIS’ recently stated effort to crack down on fraud in applications for marriage- and family-based petitions, employment authorizations, and parole-related requests.

USCIS has been outspoken about its recent collaboration with ICE. Over nine days in September, USCIS announced that it worked with ICE and the Federal Bureau of Investigation to conduct what it called “Operation Twin Shield” in the Minneapolis-St. Paul area, where immigration officials investigated potential cases of fraud in immigration benefit applications the agency had received. The agency reported that its officers referred 42 cases to ICE over the period. In a statement published to the USCIS website shortly after the operation, USCIS director Joseph Edlow said his agency was “declaring an all-out war on immigration fraud” and that it would “relentlessly pursue everyone involved in undermining the integrity of our immigration system and laws.” 

“Under President Trump, we will leave no stone unturned,” he said.

Earlier this year, USCIS rolled out updates to its policy requirements for marriage-based green cards, including demands for more detailed relationship evidence and stricter interview requirements.

While Palantir has always been a controversial company—and one that tends to lean into that reputation—the new contract with USCIS is likely to draw more public scrutiny. Backlash over Palantir’s contracts with ICE has intensified this year amid the Trump administration’s crackdown on immigration and aggressive ICE tactics for detaining immigrants that have gone viral on social media. Palantir also inked a $30 million contract with ICE earlier this year to pilot a system that will track individuals who have elected to self-deport and help ICE with targeting and enforcement prioritization. And there has been pushback from current and former employees alike over the contracts the company has with ICE and Israel.

In a recent interview at the New York Times DealBook Summit, Palantir CEO Alex Karp was asked on stage about the company’s work with ICE and, later, what he thought, from a moral standpoint, about families getting separated by ICE. “Of course I don’t like that, right? No one likes that. No American. This is the fairest, least bigoted, most open-minded culture in the world,” Karp said. But he said he cared about two issues politically: immigration and “re-establishing the deterrent capacity of America without being a colonialist neocon view. On those two issues, this president has performed.”




CoreWeave CEO: Despite see-sawing stock, IPO was ‘incredibly successful’ amid challenges of tariff timing




CoreWeave has been rocked by dizzying stock swings—its stock currently trades 52% below its post-IPO high—and has become a frequent target of market commentators. But CEO Michael Intrator says the company’s move to the public markets has been “incredibly successful.” He takes the public’s mixed reaction in stride, given the novelty of CoreWeave’s “neocloud” business, which competes with established cloud providers like Amazon Web Services and Google Cloud.

“When you introduce new models, introduce a new way of doing business, disrupt what has been a static environment, it’s going to take some people some time,” Intrator said Tuesday at Fortune’s Brainstorm AI conference in San Francisco. But, he added, more people are beginning to understand CoreWeave’s business model.

“We came out into one of the most challenging environments,” Intrator said of CoreWeave’s March IPO, which occurred very close to President Trump’s “Liberation Day” tariffs in April. “In spite of the incredible headwinds, we’re able to launch a successful IPO.”

CoreWeave, which priced its IPO at $40 per share, has experienced frequent severe up-and-down price swings in the eight months since its public market debut. At its closing price of $90.66 on Tuesday, the stock remains well above its IPO price.

As Fortune reported last month, CoreWeave’s rapid rise has been fueled by an aggressive, debt-heavy strategy to stand up data centers at unprecedented speed for AI customers. And for now, the bet is still paying off. In its third-quarter results released in November, the company said its revenue backlog nearly doubled in a single quarter—to $55.6 billion from $30 billion—reflecting long-term commitments from marquee clients including Meta, OpenAI, and French AI startup Poolside. Both earnings and revenue came in ahead of Wall Street expectations.

But the numbers were not all celebratory. CoreWeave disclosed a further increase in the debt it has taken on to finance its expansion, and it revised its full-year revenue outlook downward—suggesting that, even with historic demand in the pipeline, converting that demand into revenue on schedule remains a challenge.

With media headlines calling CoreWeave a “ticking time bomb” and critics pointing to insider stock sales, accusations of circular financing, and an overreliance on Nvidia, Intrator was asked whether he felt CoreWeave was misunderstood.

“Look, we built a company that is challenging one of the most stable businesses that exist—that cloud business, these three massive players,” he said, referring to AWS, Microsoft Azure, and Google Cloud. “I feel like it’s incumbent on CoreWeave to introduce a new business model on how the cloud is going to be built and run. And that’s what we’re doing.”

He repeatedly framed CoreWeave not as a GPU reseller or traditional data-center operator but as a company purpose-built from scratch to deliver high-performance, parallelized computing for AI workloads. That focus, he said, means designing proprietary software that orchestrates GPUs, building and colocating its own infrastructure, and moving “up the stack” through acquisitions such as Weights & Biases and OpenPipe.

Intrator also defended the company’s debt strategy, saying CoreWeave is effectively inventing a new financing model for AI infrastructure. He pointed to the company’s ability to repurpose power sources, rapidly deploy capacity, and finance large-scale clusters as proof it is solving problems incumbents never had to face.

“When I look back at the history of the company, it took us a year with a company investor like Fidelity before they were like, ‘Oh, I get it,’” he said. “So look, we’ve been public for eight months. I couldn’t be prouder of what the company has accomplished.”


