
Most large police departments ban shooting at moving vehicles, and Biden directed ICE to adopt the same standard in 2022. So why isn’t it policy?



Minneapolis is once again the focus of debates about violence involving law enforcement after an Immigration and Customs Enforcement officer shot and killed Renee Nicole Good, a 37-year-old mother, in her car.

The incident quickly prompted dueling narratives. Trump administration officials defended the shooting as justified, while local officials condemned it.

The shooting will also likely prompt renewed scrutiny of officer training and policy, particularly on the question of shooting at moving vehicles. Law enforcement has trended in recent years toward policies that prohibit such shootings, a shift that has shown promise in saving lives.

Decades ago, the New York City Police Department prohibited its officers from shooting at moving vehicles. That led to a drop in police killings without putting officers in greater danger.

Debates over deadly force are often contentious, but as I note in my research on police ethics and policy, for the most part there is consensus on one point: Policing should reflect a commitment to valuing human life and prioritizing its protection. Many use-of-force policies adopted by police departments endorse that principle.

Yet, as in Minneapolis, controversial law enforcement killings continue to occur. Not all agencies have implemented prohibitions on shooting at vehicles. Even in agencies that have, some policies are weak or ambiguous.

In addition, explicit prohibitions on shooting at vehicles are largely absent from the law, which means that officers responsible for fatal shootings of drivers that appear to violate departmental policies still often escape criminal penalties.

In the case of ICE, which is part of the Department of Homeland Security, its policy on shooting at moving vehicles – unlike that of many police agencies – lacks a clear instruction for officers to get out of the way of moving vehicles where feasible. It’s an omission at odds with generally recognized best practices in policing.

ICE’s policy on shooting at moving vehicles

ICE’s current use-of-force policy prohibits its officers from “discharging firearms at the operator of a moving vehicle” unless it is necessary to stop a grave threat. The policy is explicit that deadly force should not be used “solely to prevent the escape of a fleeing suspect.”

That point is relevant for evaluating the fatal shooting in Minneapolis. Videos show one officer trying to open the door of the vehicle that Good was driving, while another officer appears to be in front of the vehicle as she tries to pull away.

Shooting simply to prevent the driver from getting away would have violated agency policy and would be plainly inconsistent with prioritizing the protection of life.

ICE’s policy lacks clear instruction, however, for its officers to get out of the way of moving vehicles where feasible. In contrast, the Department of Justice’s use-of-force policy makes it explicit that officers should not shoot at a vehicle if they can protect themselves by “moving out of the path of the vehicle.”

Notably, President Joe Biden issued an executive order in 2022 requiring federal law enforcement agencies – like ICE – to adopt use-of-force policies “that are equivalent to, or exceed, the requirements” of the Department of Justice’s policy.

Despite that order, the provision to step out of the way of moving cars never made it into the use-of-force policy that applies to ICE.

The rationale for not shooting at moving vehicles

Prioritizing the protection of life doesn’t rule out deadly force. Sometimes such force is necessary to protect lives from a grave threat, such as an active shooter. But it does rule out using deadly force when less harmful tactics can stop a threat. In such cases, deadly force is unnecessary – a key consideration in law and ethics that can render force unjustified.

That’s the concern with police shooting at moving vehicles. It is often unnecessary because officers have a less harmful option for avoiding the threat a moving vehicle poses: stepping out of the way.

This guidance has the safety of both suspects and police in mind. Obviously, not shooting lowers the risk of harm to the suspect. But in the vast majority of cases it also lowers the risk to the officer, for reasons of simple physics: shooting the driver of a car barreling toward you rarely brings the car to an immediate stop, and the vehicle often continues on its path.

Many police departments have incorporated these insights into their policies. A recent analysis of police department policies in the 100 largest U.S. cities found that close to three-quarters of them have prohibitions against shooting at moving vehicles.

The gap between policy and best practices for protecting life

The shooting in Minneapolis serves as a stark reminder of the stubborn gap that often persists between law and policy on the one hand and best law enforcement practices for protecting life on the other. When steps are taken to close that gap, however, they can have a meaningful impact.

Some of the most compelling examples involve local, state and federal measures that reinforce one another. Consider the “fleeing felon rule,” which used to allow police to shoot a fleeing felony suspect to prevent their escape even when the suspect posed no danger to others.

That rule was at odds with the doctrine of prioritizing the protection of life, leading some departments to revise their use-of-force policies and some states to ban the practice. In 1985, the U.S. Supreme Court ruled in Tennessee v. Garner that it was unconstitutional for police to shoot a fleeing suspect who posed no danger to others.

Banning that questionable tactic led to a notable reduction in killings by police.

This history suggests that clear bans in law and policy on questionable tactics have the potential to save lives, while also strengthening the means for holding officers accountable.

Ben Jones, Assistant Professor of Public Policy and Research Associate in the Rock Ethics Institute, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Trump calls for one-year cap on credit card rates at 10%




President Donald Trump on Friday called for a one-year cap on credit card interest rates at 10%, effective Jan. 20, without specifying details.

“Please be informed that we will no longer let the American Public be ‘ripped off’ by Credit Card Companies that are charging Interest Rates of 20 to 30%, and even more, which festered unimpeded during the Sleepy Joe Biden Administration. AFFORDABILITY!” he wrote on social media.

It’s not clear whether credit card companies will respond to his call, or what actions he might take to force any change.

The post comes as the Trump administration intensifies efforts to demonstrate to voters that the president is addressing concerns about costs and prices that have emerged as a central issue in the November midterm elections.

During the 2024 presidential campaign, Trump pledged to seek limits on the interest credit card companies can charge.

Hours before his message on Friday, Senator Bernie Sanders, a Vermont independent, said on X: “Trump promised to cap credit card interest rates at 10% and stop Wall Street from getting away with murder. Instead, he deregulated big banks charging up to 30% interest on credit cards.”

In a letter last year to Sanders and Senator Josh Hawley, a Missouri Republican, a coalition of banking trade groups painted a dire picture for consumers if the government ever capped credit card interest rates at 10%, as the senators had proposed.

“Many consumers who currently rely on credit cards would be forced to turn elsewhere for short-term financing needs, including pawn shops, auto title lenders or worse — such as loan sharks, unregulated online lenders and the black market,” the group wrote.

The Bank Policy Institute said in a report last year that “while the proposed cap is a well-intentioned effort to reduce the high debt burden some households are facing, it would harm consumers’ access to card credit.” The group also said such a move could force card issuers to reduce cardholder benefits, including lucrative rewards tied to purchases. 

Responding to Trump’s post on Friday, Hawley said on X: “Fantastic idea. Can’t wait to vote for this.”




Asian households still save as much as half their wealth in cash. Fintech platforms like Syfe want to change that




Growing up in India, Dhruv Arora received one key piece of financial advice from his mother: put his money in the bank.

But Arora, now the founder of Singapore-based fintech platform Syfe, quickly realized that following his mother’s advice meant his money “did absolutely nothing.”

“We have quite a heavy culture of saving,” Arora says, citing Asia’s often unstable economic and policy history. But inflation and low interest rates end up eroding the value of household savings. “Over time, the $100 you put in the bank doesn’t become $101, but effectively $98” due to the effects of inflation.

Asian households sometimes keep as much as 50% of their net worth in cash, rather than in investments or assets. In contrast, in developed markets like the U.S. and Europe, that figure is closer to 15%. 

But that conservative attitude in Asia is starting to change. Asians are getting wealthier, which is pushing them to explore a wider range of investment options. Strong stock market performance is also drawing a new wave of retail investors across the Asia-Pacific.

“Asian households are slowly dipping their toes into stock markets,” HSBC economists wrote in a Jan. 9 report, though they noted that “overall equity investment remains quite low.” The bank predicts that a steady shift from low-yield cash to higher-yield investments will mean “more money will continue to rotate into equity markets over the next few years,” reducing reliance on foreign investors.

A slew of fintech apps have emerged in recent years to tap growing interest in investing and wealth management among Asian users. These alternative finance platforms, such as Syfe, Stashaway and Endowus, offer investment options ranging from cash management to managed portfolios and options trading. The challenge, Arora says, is how to “bridge the gap between holding money and growing wealth” and “give more people the confidence to put their savings to work.”

Arora began his career as an investment banker for UBS in Hong Kong in 2008, soon after the Global Financial Crisis. Despite Asia’s relatively quick recovery, Arora noticed that the region’s professionals were building wealth yet didn’t know how to manage it. “These were smart people like doctors, lawyers and consultants, who were doing well professionally, but just did not know what to do with their money,” he says. 

He launched Syfe in 2019, just a few months before another global crisis: the COVID-19 pandemic. Yet the pandemic ended up being an opportunity for fintech platforms like Syfe. “It acted as a catalyst for a shift in investor behavior,” Arora explains, as people suddenly had the time to engage with financial markets.

In the U.S., for example, people stuck at home began to get involved in stock trading through platforms like Robinhood. Fueled by social media, these retail investors traded heavily in so-called meme stocks like GameStop and AMC.

Syfe has since expanded from its home market of Singapore to other Asia-Pacific economies, including Australia and Hong Kong. The platform continues to grow both its user base and its revenue, and the company claims it reached profitability in Q4 2025. It’s now a “self-sustaining organization,” Arora says.

Syfe closed an $80 million Series C funding round last year, and is backed by major investors like NYC-based Valar Ventures and UK-based investment firm Unbound.

The platform’s users generated $2 billion worth of returns while saving $80 million in fees last year, according to the company. 

Currently, Arora wants to deepen Syfe’s presence in its existing markets. Last year, the platform began rolling out bespoke offerings, like private credit for accredited investors looking to diversify their portfolios. Syfe will launch options trading in 2026.

Arora notes that many of Syfe’s users have grown more comfortable over time with taking larger investment risks, moving from parking their money in Syfe-managed portfolios to trading more actively through the platform’s brokerage and income portfolios.

Yet he eventually wants to bring Syfe to new markets in North Asia and the Middle East, which boast sizable populations of what Arora terms the “mass affluent”: people with significant investable assets and higher-than-average incomes who still fall short of the high-net-worth category.

“This demographic has historically been ‘stuck in the middle’: too large for basic retail banking, yet often underserved by traditional private banks,” he explains.

This story was originally featured on Fortune.com




Lawmakers and victims criticize new limits on Grok’s AI image generation as ‘insulting’ and ‘not effective’




Elon Musk’s xAI has restricted its AI chatbot Grok’s image generation capabilities to paying subscribers only, following widespread condemnation over its use to create non-consensual sexualized images of real women and children.

“Image generation and editing are currently limited to paying subscribers,” Grok announced via X on Friday. The restriction means the vast majority of users can no longer access the feature. Paying, verified subscribers with credit card details on file still can, and in theory they can be identified more easily if the feature is misused.

However, experts, regulators, and victims say the new restrictions aren’t a solution to the now-widespread problem.

“The argument that providing user details and payment methods will help identify perpetrators also isn’t convincing, given how easy it is to provide false info and use temporary payment methods,” Henry Ajder, a UK-based deepfakes expert, told Fortune. “The logic here is also reactive: it is supposed to help identify offenders after content has been generated, but it doesn’t represent any alignment or meaningful limitations to the model itself.”

The UK government has called the move “insulting” to victims, in remarks reported by the BBC. The UK’s prime minister’s spokesperson told reporters on Friday that the change “simply turns an AI feature that allows the creation of unlawful images into a premium service.

“It is time for X to grip this issue; if another media company had billboards in town centers showing unlawful images, it would act immediately to take them down or face public backlash,” they said.

A representative for X said they were “looking into” the new restrictions. xAI responded with the automated message: “Legacy Media Lies.”

Over the past week, real women have been targeted at scale, with users manipulating photos to remove clothing, place subjects in bikinis, or position them in sexually explicit scenarios without their consent. Some victims reported feeling violated and disturbed by the trend, with many saying their reports to X went unanswered and the images remained live on the platform.

Researchers said the scale at which Grok was producing and sharing images was unprecedented as, unlike other AI bots, Grok essentially has a built-in distribution system in the X platform. 

One researcher, whose analysis was published by Bloomberg, estimated that X has become the most prolific site for deepfakes over the last week. Genevieve Oh, a social media and deepfake researcher who conducted a 24-hour analysis of images the @Grok account posted to X, found that the chatbot was producing roughly 6,700 sexually suggestive or nudifying images per hour. By comparison, the five other leading websites for sexualized deepfakes averaged 79 new AI undressing images hourly during the same period. Oh’s research also found that sexualized content dominated Grok’s output, accounting for 85% of all images the chatbot generated.

Ashley St. Clair, a conservative commentator and mother of one of Musk’s children, was among those affected. St. Clair told Fortune that users were turning images from her X profile into explicit AI-generated photos of her, including some she said depicted her as a minor. After speaking out against the images and raising concerns about deepfakes of minors, St. Clair also said X took away her verified, paying-subscriber status without notifying her or refunding the $8-per-month fee.

“Restricting it to the paid-only user shows that they’re going to double down on this, placing an undue burden on the victims to report to law enforcement and law enforcement to use their resources to track these people down,” St. Clair said of the recent restrictions. “It’s also a money grab.”

St. Clair told Fortune that many of the accounts targeting her were already verified users: “It’s not effective at all,” she said. “This is just in anticipation of more law enforcement inquiries regarding Grok image generation.”

Regulatory pressure

The move to limit Grok’s capabilities comes amid mounting pressure from regulators worldwide. In the U.K., Prime Minister Keir Starmer has indicated he is open to banning the platform entirely, describing the content as “disgraceful” and “disgusting.” Regulators in India, Malaysia, and France have also launched investigations.

The European Commission on Thursday ordered X to preserve all internal documents and data related to Grok, stepping up its investigation into the platform’s content moderation practices after describing the spread of nonconsensual sexually explicit deepfakes as “illegal,” “appalling,” and “disgusting.”

Experts say the new restrictions may not satisfy regulators’ concerns: “This approach is a blunt instrument that doesn’t address the root of the problem with Grok’s alignment and likely won’t cut it with regulators,” Ajder said. “Limiting functionality to paying users will not stop the generation of this content; a month’s subscription is not a robust solution.”

In the U.S., the situation is also likely to test existing laws, like Section 230 of the Communications Decency Act, which shields online providers from liability for content created by users. U.S. Senators Ron Wyden, Edward J. Markey, and Ben Ray Luján have issued a statement urging Apple and Google to “immediately remove the X and Grok apps from their app stores” following Grok’s alleged use for generating “nonconsensual sexualized images of women and children at scale.” The lawmakers called the images “disturbing and likely illegal,” and said the apps should remain unavailable until Musk addresses the concerns.

The Council on American-Islamic Relations (CAIR) has also called for Grok to be blocked from generating “sexually explicit images of children and women, including prominent Muslim women.”

Riana Pfefferkorn of Stanford’s Institute for Human-Centered Artificial Intelligence previously told Fortune that liability surrounding AI-generated images is murky. “We have this situation where for the first time, it is the platform itself that is at scale generating non-consensual pornography of adults and minors alike,” she said. “From a liability perspective as well as a PR perspective, the CSAM laws pose the biggest potential liability risk here.”

Musk has previously stated that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” However, it remains unclear how accounts will be held accountable.


