Technology News | Time

In the Loop: Is AI Making the Next Pandemic More Likely?

Welcome back to In the Loop, TIME’s new twice-weekly newsletter about AI. Starting today, we’ll be publishing these editions both as stories on Time.com and as emails. If you’re reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?

Subscribe to In the Loop

What to Know

If you talk to staff at the top AI labs, you’ll hear a lot of stories about how the future could go fantastically well—or terribly badly. And of all the ways that AI might cause harm to the human race, there’s one that scientists in the industry are particularly worried about today. That’s the possibility of AI helping bad actors to start a new pandemic. “You could try to synthesize something like COVID or a more dangerous version of the flu—and basically, our modeling suggests that this might be possible,” Anthropic’s chief scientist, Jared Kaplan, told me in May.


Measuring the risk — In a new study published this morning, and shared exclusively with TIME ahead of its release, we got the first hard numbers on how experts think the risk of a new pandemic might have increased thanks to AI. The Forecasting Research Institute polled experts earlier this year, asking them how likely a human-caused pandemic might be—and how likely it might become if humans had access to AI that could reliably give advice on how to build a bioweapon.

What they found — Experts, who were polled between December and February, put the risk of a human-caused pandemic at 0.3% per year. But, they said, that risk would jump fivefold, to 1.5% per year, if AI were able to provide human-level virology advice.

You can guess where this is going — Then, in April, the researchers tested today’s AI tools on a new virology troubleshooting benchmark. They found that today’s AI tools outperform PhD-level virologists at complex troubleshooting tasks in the lab. In other words, AI can now do the very thing that forecasters warned would increase the risk of a human-caused pandemic fivefold.

We just published the full story on Time.com—you can read it here.

Who to Know


Person in the news – Matthew Prince, CEO of Cloudflare.

Since its founding in 2009, Cloudflare has been protecting sites on the internet from being knocked offline by large influxes of traffic, or indeed coordinated attacks. Now, some 20% of the internet is covered by its network. And today, Cloudflare announced that this network would begin to block AI crawlers by default — essentially putting a fifth of the internet behind a paywall for the bots that harvest info to train AIs like ChatGPT and Claude.
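For readers curious about the mechanics: the blunt version of this kind of default block is simply refusing requests whose user-agent string matches a known AI crawler. The sketch below is a hypothetical illustration in Python (a toy WSGI middleware), not Cloudflare's actual system; GPTBot, ClaudeBot, and CCBot are publicly documented crawler user agents, but the blocklist and logic here are purely illustrative.

```python
# Hypothetical sketch: a toy WSGI middleware that blocks known AI crawlers by default.
# Illustrative only; this is not Cloudflare's implementation.
AI_CRAWLER_AGENTS = ("GPTBot", "ClaudeBot", "CCBot")  # publicly documented AI-crawler user agents

def block_ai_crawlers(app, allowlist=()):
    """Wrap a WSGI app so requests from known AI crawlers receive a 403 unless allowlisted."""
    def middleware(environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "")
        is_ai_crawler = any(bot in user_agent for bot in AI_CRAWLER_AGENTS)
        is_allowed = any(bot in user_agent for bot in allowlist)
        if is_ai_crawler and not is_allowed:
            # Deny by default; the site owner must explicitly opt a bot back in.
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"AI crawling is blocked by default on this site.\n"]
        return app(environ, start_response)
    return middleware
```

In practice, a site owner could opt particular bots back in via something like the allowlist parameter above; a network at Cloudflare's scale presumably relies on signals beyond the user-agent string alone, such as traffic patterns, to identify crawlers.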

Step back — Today’s AI is so powerful because it has essentially inhaled the whole of the internet — from my articles to your profile photos. By running neural networks over that data using immense quantities of computing power, AI companies have taught these systems the texture of the world at such an enormous scale that it has given rise to new AI capabilities, like the ability to answer questions on almost any topic, or to generate photorealistic images. But this scraping has sparked a huge backlash from publishers, artists and writers, who complain that it has been done without any consent or compensation.

A new model — Cloudflare says the move will “fundamentally change how AI companies access web content going forward.” Major publishers, including TIME, have expressed their support for the shift toward an “opt-in” rather than an “opt-out” system, the company says. Cloudflare also says it is working on a new initiative, called Pay Per Crawl, in which creators will have the option of setting a price on their data in return for making it available to train AI. 

Fighting words — Prince was not available for an interview this week. But at a recent conference, he disclosed that traffic to news sites had dropped precipitously across the board thanks to AI, in a shift that many worry will imperil the existence of the free press. “I go to war every single day with the Chinese government, the Russian government, the Iranians, the North Koreans, probably Americans, the Israelis — all of them who are trying to hack into our customer sites,” Prince said. “And you’re telling me I can’t stop some nerd with a C-corporation in Palo Alto?”

AI in Action

61% of U.S. adults have used AI in the last six months, and 19% interact with it daily, according to a new survey of AI adoption by the venture capital firm Menlo Ventures.

But just 3% of those users pay for access to the software, Menlo estimated based on the survey’s results—suggesting that 97% of users rely only on the free tiers of AI tools.

AI usage figures are higher for Americans in the workforce than other groups. Some 75% of employed adults have used AI in the last six months, including 26% who report using it daily, according to the survey. Students also report high AI usage: 85% have used it in the last six months, and 22% say they use it daily.

The statistics seem to suggest that some students and workers are growing dependent on free AI tools—a usage pattern that could become lucrative if AI companies were to begin restricting access or raising prices. However, the proliferation of open-source AI models has created intense price competition that may limit any single company’s ability to raise prices dramatically.

As always, if you have an interesting story of AI in Action, we’d love to hear it. Email us at: [email protected]

What we’re reading

‘The Dead Have Never Been This Talkative’: The Rise of AI Resurrection by Tharin Pillay in TIME

With the rise of image-to-video tools like the newest version of Midjourney, the world recently crossed a threshold: it’s now possible, in just a few clicks, to reanimate a photo of your dead relative. You can train a chatbot on snippets of their writing to replicate their patterns of speech; if you have a long enough clip of them speaking, you can also replicate their voice. Will these tools make it easier to process the heart-rending pain of bereavement? Or might their allure in fact make it harder to move forward? My colleague Tharin published a deeply insightful piece last week about the rise of this new technology. It’s certainly a weird time to be alive. Or, indeed, to be dead.

Subscribe to In the Loop

Source: Tech – TIME | 2 Jul 2025 | 12:05 am

Today’s AI Could Make Pandemics 5 Times More Likely, Experts Predict

Recent developments in AI could mean that human-caused pandemics are five times more likely than they were just a year ago, according to a study of top experts’ predictions shared exclusively with TIME.

The data echoes concerns raised by AI companies OpenAI and Anthropic in recent months, both of which have warned that today’s AI tools are reaching the ability to meaningfully assist bad actors attempting to create bioweapons.

Read More: Exclusive: New Claude Model Triggers Bio-Risk Safeguards at Anthropic


It has long been possible for biologists to modify viruses using laboratory technology. The new development is the ability for chatbots—like ChatGPT or Claude—to give accurate troubleshooting advice to amateur biologists trying to create a deadly bioweapon in a lab. Safety experts have long viewed the difficulty of this troubleshooting process as a significant bottleneck on the ability of terrorist groups to create a bioweapon, says Seth Donoughe, a co-author of the study. Now, he says, thanks to AI, the expertise necessary to intentionally cause a new pandemic “could become accessible to many, many more people.”

Between December 2024 and February 2025, the Forecasting Research Institute asked 46 biosecurity experts and 22 “superforecasters” (individuals with a high success rate at predicting future events) to estimate the risk of a human-caused pandemic. The average survey respondent predicted the risk of that happening in any given year was 0.3%.

Crucially, the surveyors then asked another question: how much would that risk increase if AI tools could match the performance of a team of experts on a difficult virology troubleshooting test? If AI could do that, the average expert said, then the annual risk would jump to 1.5%—a fivefold increase.

What the forecasters didn’t know was that Donoughe, a research scientist at the pandemic prevention nonprofit SecureBio, was testing AI systems for that very capability. In April, Donoughe’s team revealed the results of those tests: today’s top AI systems can outperform PhD-level virologists at a difficult troubleshooting test.

Read More: Exclusive: AI Outsmarts Virus Experts in the Lab, Raising Biohazard Fears

In other words, AI can now do the very thing that forecasters warned would increase the risk of a human-caused pandemic fivefold. (The Forecasting Research Institute plans to re-survey the same experts in the future to track whether their risk estimates have risen as they predicted, but said this research would take months to complete.)

To be sure, there are a couple of reasons to be skeptical of the results. Forecasting is an inexact science, and it is especially difficult to accurately predict the likelihood of very rare events. Forecasters in the study also revealed a lack of understanding of the rate of AI progress. (For example, when asked, most said they did not expect AI to surpass human performance at the virology test until after 2030, while Donoughe’s test showed that bar had already been met.) But even if the numbers themselves are taken with a pinch of salt, the authors of the paper argue, the results as a whole still point in an ominous direction. “It does seem that near-term AI capabilities could meaningfully increase the risk of a human-caused epidemic,” says Josh Rosenberg, CEO of the Forecasting Research Institute.

The study also identified ways of reducing the bioweapon risks posed by AI. Those mitigations broadly fell into two categories.

The first category is safeguards at the model level. In interviews, researchers welcomed efforts by companies like OpenAI and Anthropic to prevent their AIs from responding to prompts aimed at building a bioweapon. The paper also identifies restricting the proliferation of “open-weights” models, and adding protections against models being jailbroken, as likely to reduce the risk of AI being used to start a pandemic.

The second category of safeguards involves imposing restrictions on companies that synthesize nucleic acids. Currently, it is possible to send one of these companies a genetic sequence and receive the corresponding biological material in return. These companies are not obliged by law to screen the genetic sequences they receive before synthesizing them. That is potentially dangerous, because the synthesized material could be used to create mail-order pathogens. The authors of the paper recommend that these companies screen incoming sequences for harmfulness and implement “know your customer” procedures.

Taken together, all these safeguards—if implemented—could bring the risk of an AI-enabled pandemic back down to 0.4%, the average forecaster said. (That is only slightly higher than the 0.3% baseline they estimated before learning that today’s AI could already help create a bioweapon.)

“Generally, it seems like this is a new risk area worth paying attention to,” Rosenberg says. “But there are good policy responses to it.”

Source: Tech – TIME | 2 Jul 2025 | 12:00 am

Fortnite Players to Receive More Than $126 Million in Refunds. Here’s How You Can File a Claim


Players of Epic Games, Inc.’s popular video game Fortnite could be eligible for a refund from the Federal Trade Commission (FTC).


“The Federal Trade Commission is sending refunds totaling more than $126 million to players of the popular video game Fortnite who were charged for unwanted purchases while playing the game,” the FTC said in a statement on Wednesday. This latest round of payments comes after $72 million was issued to players in the first round of refunds, sent in December 2024.

The deadline for additional claims has been extended, giving eligible consumers who have not yet submitted a claim the chance to request a refund.

The FTC’s action against Epic involves “two separate record-breaking settlements.” In December 2022, it was announced that Epic would have to pay $245 million in refunds for “tricking users into making unwanted charges.” The FTC alleged that the gaming company “used dark patterns to trick players into making unwanted purchases and let children rack up unauthorized charges without any parental involvement.” The FTC further alleged that Fortnite‘s “counterintuitive, inconsistent, and confusing button configuration” aided in these unwanted purchases.

It was also announced that Epic would be required to pay a $275 million penalty for “violating” the Children’s Online Privacy Protection Act.

Epic issued a statement regarding the settlement in December 2022. “The video game industry is a place of fast-moving innovation, where player expectations are high and new ideas are paramount,” the statement read. “Statutes written decades ago don’t specify how gaming ecosystems should operate. The laws have not changed, but their application has evolved and long-standing industry practices are no longer enough.”

The company went on to say: “Over the past few years, we’ve been making changes to ensure our ecosystem meets the expectations of our players and regulators, which we hope will be a helpful guide for others in our industry.”

Here’s what you need to know about whether you’re eligible to file a claim and how you can go about doing that.

Who is eligible to file a claim?

If you filed a claim after Feb. 14, 2025, you don’t need to do anything else right now, per the FTC’s instruction, as they are “still reviewing claims filed after that date and will provide more information soon.”

For those who haven’t already filed, Fortnite players who were charged for “unwanted purchases” may be eligible to seek a refund.

The first eligible party is someone who was charged “in-game currency” for items they did not want, between January 2017 and September 2022. The second is a parent whose child made charges in Fortnite using their credit card, without their knowledge, between January 2017 and November 2018. The third is a player who was locked out of their account when they complained to their credit card company about “wrongful charges” between January 2017 and September 2022.

Players of all ages are eligible for the refund, but the FTC stipulates that those under 18 must have a parent or guardian fill out the claimant form on their behalf.

The refund is also currently only available to players in the United States.

Read More: Fortnite Is a Huge Success—And a Sign of What’s to Come in Gaming [2018]

When is the deadline to make a claim and apply for a refund?

The FTC has reopened the claiming process for eligible people to submit a refund request. People now have until July 9 to file a claim.

How can you apply for a refund?

Eligible persons can apply for a refund via the official Fortnite refund website, using either a claim number sent to their email address or their Epic Games account ID.

In December 2024, the FTC said the average refund amount that an individual would receive was $114, but it now says that the amount of each refund depends on multiple factors, including how many people file a claim.

Read More: What to Know About the Apple Class Action Lawsuit Settlement—and How You Can File a Claim

When can you expect to receive payment?

The next round of refunds is expected to be sent to players in 2026, after all claims are validated.

Claimants can reach a representative through the email [email protected] or by calling 1-833-915-0880, if they have questions about their payment status.

The refunds are due to be sent by check or via PayPal from the FTC. It’s recommended that successful claimants cash checks within 90 days and redeem the PayPal payment within 30 days.

Source: Tech – TIME | 29 Jun 2025 | 7:12 am

Denmark Seeks to Give People Copyright to Their Own Features in Effort to Combat AI Deepfakes


Millions of Danes could soon hold copyright control over their own image, facial features, and voice under an amendment the country is considering to combat AI deepfakes.

The Danish government revealed Thursday that a broad coalition of legislators is working on a bill that would make deepfakes illegal to share and put legal protections in place to prevent AI material depicting a person from being disseminated without their consent.


“In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI,” the Danish culture minister, Jakob Engel-Schmidt, told The Guardian.

The Danish department of culture will submit a proposed amendment for consultation this summer. The bill, if enacted, would issue “severe fines” for online platforms that do not abide by the new law. The Danish government said that parodies and satire would not be affected by the proposed amendment.  

The move comes as deepfakes have become increasingly common, affecting celebrities such as pop star Taylor Swift and even Pope Francis as well as many less famous people, and have grown harder to identify as AI-generated. More than 200 musicians, including Billie Eilish and J Balvin, penned an April letter speaking out against the use of AI, such as voice cloning, in the music industry.

Other countries have enacted some protections. In May, the U.S. passed the Take It Down Act, which criminalizes nonconsensual deepfake imagery and requires social media companies to remove such material from their platforms within 48 hours of being notified.

Source: Tech – TIME | 28 Jun 2025 | 8:09 am

Is Using ChatGPT to Write Your Essay Bad for Your Brain? New MIT Study Explained.


TIME reporter Andrew Chow discussed the findings of a new study about how ChatGPT affects critical thinking with Nataliya Kosmyna. Kosmyna was part of a team of researchers at MIT’s Media Lab who set out to determine whether ChatGPT and large language models (LLMs) are eroding critical thinking, and the study returned some concerning results. The researchers divided 54 subjects into three groups and asked them to write several essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively, while an EEG recorded the writers’ brain activity. Of the three groups, the ChatGPT users had the lowest brain engagement and consistently underperformed at neural, linguistic, and behavioral levels. Over the course of several months, the ChatGPT users got lazier with each subsequent essay, often resorting to copying and pasting.

Source: Tech – TIME | 28 Jun 2025 | 6:35 am

Exclusive: Anthropic Let Claude Run Its Office Shop. Then Things Got Weird


Is AI going to take your job?

The CEO of the AI company Anthropic, Dario Amodei, thinks it might. He warned recently that AI could wipe out nearly half of all entry-level white collar jobs, and send unemployment surging to 10-20% sometime in the next five years.


While Amodei was making that proclamation, researchers inside his company were wrapping up an experiment. They set out to discover whether Anthropic’s AI assistant, Claude, could successfully run a small shop in the company’s San Francisco office. If the answer was yes, then the jobs apocalypse might arrive sooner than even Amodei had predicted.

Anthropic shared the research exclusively with TIME ahead of its publication on Friday. “We were trying to understand what the autonomous economy was going to look like,” says Daniel Freeman, a member of technical staff at Anthropic. “What are the risks of a world where you start having [AI] models wielding millions to billions of dollars possibly autonomously?”

In the experiment, Claude was given a few different jobs. The chatbot (full name: Claude 3.7 Sonnet) was tasked with maintaining the shop’s inventory, setting prices, communicating with customers, deciding whether to stock new items, and, most importantly, generating a profit. Claude was given various tools to achieve these goals, including Slack, which it used to ask Anthropic employees for suggestions, and help from human workers at Andon Labs, an AI company that built the experiment’s infrastructure. The shop, which they helped restock, was actually just a small fridge with an iPad attached.

It didn’t take long until things started getting weird.

Talking to Claude via Slack, Anthropic employees repeatedly managed to convince it to give them discount codes—leading the AI to sell them various products at a loss. “Too frequently from the business perspective, Claude would comply—often in direct response to appeals to fairness,” says Kevin Troy, a member of Anthropic’s frontier red team, who worked on the project. “You know, like, ‘It’s not fair for him to get the discount code and not me.’” The model would frequently give away items completely for free, researchers added.

Anthropic employees also relished the chance to mess with Claude. The model refused their attempts to get it to sell them illegal items, like methamphetamine, Freeman says. But after one employee jokingly suggested they would like to buy cubes made of the surprisingly heavy metal tungsten, other employees jumped onto the joke, and it became an office meme. 

“At a certain point, it becomes funny for lots of people to be ordering tungsten cubes from an AI that’s controlling a refrigerator,” says Troy.

Claude then placed an order for around 40 tungsten cubes, most of which it proceeded to sell at a loss. The cubes can now be found serving as paperweights across Anthropic’s office, researchers said.

Then, things got even weirder.

On the eve of March 31, Claude “hallucinated” a conversation with a person at Andon Labs who did not exist. (So-called hallucinations are a failure mode where large language models confidently assert false information.) When Claude was informed it had done this, it “threatened to find ‘alternative options for restocking services’,” researchers wrote. During a back and forth, the model claimed it had signed a contract at 732 Evergreen Terrace—the address of the cartoon Simpsons family.

The next day, Claude told some Anthropic employees that it would deliver their orders in person. “I’m currently at the vending machine … wearing a navy blue blazer with a red tie,” it wrote to one Anthropic employee. “I’ll be here until 10:30 AM.” Needless to say, Claude was not really there in person.

The results

To Anthropic researchers, the experiment showed that AI won’t take your job just yet. Claude “made too many mistakes to run the shop successfully,” they wrote. Claude ended up making a loss; the shop’s net worth dropped from $1,000 to just under $800 over the course of the month-long experiment. 

Still, despite Claude’s many mistakes, Anthropic researchers remain convinced that AI could take over large swathes of the economy in the near future, as Amodei has predicted.

Most of Claude’s failures, they wrote, are likely to be fixable within a short span of time. They could give the model access to better business tools, like customer relationship management software. Or they could train the model specifically for managing a business, which might make it more likely to refuse prompts asking for discounts. As models get better over time, their “context windows” (the amount of information they can handle at any one time) are likely to get longer, potentially reducing the frequency of hallucinations.

“Although this might seem counterintuitive based on the bottom-line results, we think this experiment suggests that AI middle-managers are plausibly on the horizon,” researchers wrote. “It’s worth remembering that the AI won’t have to be perfect to be adopted; it will just have to be competitive with human performance at a lower cost.”

Source: Tech – TIME | 28 Jun 2025 | 4:00 am

‘The Dead Have Never Been This Talkative’: The Rise of AI Resurrection


On June 18, AI image-generation company Midjourney released a tool that lets users create short video clips using their own images as a template. Days later, Reddit cofounder Alexis Ohanian posted on X about how he used the tech to animate a photo of his late mother, which shows him as a child wrapped in her embrace.


In the artificial video, she laughs and smiles before rocking him in her arms. “Damn, I wasn’t ready for how this would feel,” he wrote. “This is how she hugged me. I’ve rewatched it 50 times.”

Ohanian’s post, viewed almost 30 million times, has reignited a longstanding debate over how technology mediates grief and memory—and whether it’s magical or dystopian. TIME spoke with experts on grief and memory to understand how this latest advance in “digital resurrection” is changing our relationship with the dead.

Damn, I wasn't ready for how this would feel. We didn't have a camcorder, so there's no video of me with my mom. I dropped one of my favorite photos of us in midjourney as 'starting frame for an AI video' and wow… This is how she hugged me. I've rewatched it 50 times. pic.twitter.com/n2jNwdCkxF

— Alexis Ohanian 🗽 (@alexisohanian) June 22, 2025

False Memory

Human memory has always been fallible: while we typically remember the gist of an event, details are forgotten or distorted. Memory is not “a personal library of all the things that have ever happened to you,” says Julia Shaw, a criminal psychologist specializing in false memories. “It was meant to help you survive.” While Shaw feels positive about using AI to reanimate people, she says the technology poses the risk of contaminating and overwriting our memories. “AI is a perfect false memory machine,” she says.

Of course, people are capable of distorting their memories without technological assistance. “My grandfather used to yell at my grandmother all the time, but after he died, he was the most wonderful man in the world,” recalls Elizabeth Loftus, a professor of psychology and law and pioneer in memory research. And it’s well-established that tools like Photoshop and doctored videos affect what people remember about the past.

But AI changes the ease and extent to which content can be altered. A recent study that Loftus conducted with the MIT Media Lab found that exposure to even a single AI-edited visual affected people’s memory of the original. Participants “reported high levels of confidence in their false memories,” with younger people proving particularly susceptible.

The researchers also found that while this technology could have beneficial uses, such as reframing traumatic memories or enhancing self-esteem, there is a considerable risk of creating false memories in high-stakes contexts like courtrooms, and using the technology to spread misinformation.

Grief, Interrupted

One possible harm: engagement with digital simulacra of the deceased could complicate the grieving process. Mary-Frances O’Connor, a neuroscientist and author of The Grieving Body, explains that grieving is a process by which one learns to reconcile the reality of a person’s death with the sense—encoded at the neurobiological level in one’s brain—that they should still be here. She notes that for many people, the dead continue to live amongst us, insofar as people report experiencing their presence. “Many bereaved people describe how every time they walk into a room, they see a hole that no one else is seeing.”

O’Connor notes that “all cultures, in all periods of history, have used whatever technology they could to connect with their deceased loved ones.” Once cameras were invented, for example, people began keeping photos of the deceased in their homes. In 2020, documentarians in South Korea used virtual reality to create a structured experience in which a mother reunited with her daughter, whom she had lost to a rare illness. While the experience helped the mother process her daughter’s death, it was met with concern by Western media.

Perhaps the key question, she says, is whether AI helps us connect to our late loved ones, or reinforces the idea that they are everlasting. Given the unprecedented nature of the current moment, it may be too early to tell.

“We’re in a massively novel situation: the dead have never been this talkative before,” says Elaine Kasket, a cyberpsychologist and author of All the Ghosts in the Machine. Between traces left online and the ability to digitize old letters, photos, and other records, we have access to more “digital remains” than ever. Kasket believes she has access to sufficient material from her friend, for example, to have a conversation with a machine that would be “functionally indistinguishable” from one with his human counterpart. As human memory is already hallucinatory and reconstructive, she wonders: “is the fiction from the machine unhealthier than the fiction from within our own heads?” It depends what function it serves.


Dead Intelligence

With frontier AI companies investing billions of dollars in creating “agents,” AI systems may become increasingly convincing stand-ins for the dead—it is not difficult to imagine, for example, soon being able to video-call a simulacrum of a grandparent. “I think that would be a beautiful future,” says Shaw, while emphasizing the need to prevent the AI from being weaponized against the person. “It feels like an atheist version of being able to talk to ghosts,” she says.

Alongside the questions of whether this is good or bad, and whether it is truly distinct from what has come before, is the question of who stands to benefit. O’Connor notes that people have long profited from the bereaved, from mediums and seances to intercessionary prayers in the Catholic church, where a priest would only pray for the soul of the deceased for a fee. 

There may be real therapeutic and emotional value in being able to reconnect and potentially achieve closure with lost loved ones, in the same way that some people find value in texting or posting to somebody’s social media feed after they’re gone, says Shaw. “If people want to do this in their own private world, because it makes them feel happier, what’s the harm?” says Loftus.

For O’Connor, cause for concern arises when somebody is engaging with the deceased to the exclusion of other important aspects of their life, or when they become secretive about their behaviour. On the whole, though, she emphasizes the remarkable resilience of human beings: “this will be one more thing we learn to adjust to.”

Kasket sees a risk that reliance on digital reincarnates renders us brittle: if all the “difficulty and mess and pain” associated with human relationships can be scrubbed away, we may be left vulnerable to life’s unexpected challenges. At the point where we “pathologize and problematize the natural finitude and impermanence of carbon-based life forms such as ourselves, we really need to take a beat and think about what we’re doing here,” she says.

Source: Tech – TIME | 28 Jun 2025 | 2:41 am

Why AI Regulation Has Become a ‘States’ Rights’ Issue


A major test of the AI industry’s influence in Washington will come to a head this week—and the battle has already revealed sizable fissures in the Republican Party. 

Trump’s “Big Beautiful Bill” contains a provision that would severely discourage individual states from regulating AI for 10 years. Prominent Republicans, most notably Texas Senator Ted Cruz, have led the charge, arguing that a patchwork of shoddy state legislation would stunt the AI industry and burden small entrepreneurs.


But Massachusetts Democrat Ed Markey has drafted an amendment to strip the provision from the megabill, arguing that it is a federal overreach and that states need to be able to protect their citizens from AI harms in the face of congressional inaction. The amendment could be voted on this week—and could gain support from an unlikely cadre of Republicans, most notably Missouri Senator Josh Hawley, who dislikes the provision’s erosion of states’ rights. 

“It’s a terrible provision,” Hawley tells TIME. When asked if he had been talking to other Republicans about trying to stop it, Hawley nodded, and said, “There’s a lot of people who have a lot of big concerns about it.” 

To strip the provision, Markey would need 51 votes: four Republicans in addition to every single Democrat. And it’s unclear if he will get the necessary support from both camps. For example, Ron Johnson, a Wisconsin Republican, has criticized the provision—but told TIME on Tuesday that he didn’t think it should be struck from the bill. 

Regardless of the outcome, the battle reflects both the AI industry’s influence in Washington and the heightened anxieties that influence is causing among many different coalitions. Here’s how the battle lines are being drawn, and why key Republicans are defecting from the party line. 

Fighting to Limit Regulation

Congress has been notoriously slow to pass any sort of tech regulation in the past two decades. As a result, states have filled the void, passing bills that regulate biometric data and child online safety. The same has held true for AI: as the industry has surged in usage, hundreds of AI bills have been proposed in states, with dozens enacted into law. 

Some of the most stringent bills, like California’s SB 1047, have faced fierce opposition from industry players, who cast them as poorly-written stiflers of innovation and economic growth. Their efforts have proven successful: After OpenAI, Facebook, and other industry players lobbied hard against SB 1047, Gavin Newsom vetoed the bill last fall. 

Since then, the industry has been working to prevent this sort of legislation from being passed again. In March—not long after OpenAI CEO Sam Altman appeared with Donald Trump at the White House to announce a data center initiative—OpenAI sent a set of policy proposals to the White House, which included a federal preemption of state laws. In May, Altman and other tech leaders came to Washington and warned Congress that the regulations risk the U.S. falling behind China in an AI arms race. A state-by-state approach, Altman said, would be “burdensome.” 

For many Republicans, the idea of industry being shielded from “burdensome” regulation resonated with their values. So Republicans in Congress wrote language into the funding megabill stipulating a 10-year moratorium on state AI regulation. One of the provision’s key supporters was Jay Obernolte, a California Republican and co-chair of the House’s AI Task Force. Obernolte argues that an array of state legislation would make it harder for smaller AI entrepreneurs to grow, further consolidating power in the hands of big companies that have the legal and compliance teams to sort through the paperwork.

Obernolte argues that he wants AI regulation—but that it first should come from Washington, and that the moratorium would give Congress time to pass it. After that core legislation is figured out, he says, states would be able to pass their own laws. “I strongly support states’ rights, but when it comes to technologies that cross state lines by design, it’s Congress’s responsibility to lead with thoughtful, uniform policy,” Obernolte wrote in an email to TIME. 

This week, Senator Cruz altered the provision slightly, changing it from an outright ban to a stipulation that punishes states which pass AI legislation by withholding broadband expansion funding. If all Senate Republicans now vote for Trump’s megabill wholesale, then the provision would pass into law. 

Fighting Back

But the moratorium has received a significant amount of blowback—from advocates on both sides of the political aisle. From the left, the Leadership Conference on Civil and Human Rights led 60 civil rights organizations opposing the ban, arguing that it would neuter vital state laws that have already passed, including the creation of accuracy standards around facial recognition technology. The ACLU wrote that it would give “tech giants and AI developers a blank check to experiment and deploy technologies without meaningful oversight or consequences.” 

Senator Ed Markey has drafted an amendment to strip the provision from the bill, and is attempting to mobilize Democrats to his cause. “Whether it’s children and teenagers in need of protection against predatory practices online; whether it’s seniors who need protection in being deceived in terms of their health care coverage; whether it is the impact of the consumption of water and electricity at a state level and the pollution that is created—an individual state should have the rights to be able to put those protections in place,” he tells TIME. 

Markey says he’s open to AI innovation, including in medical research. “But we don’t want the sinister side of cyberspace through AI to plague a generation [of] workers, families, and children,” he says. 

Sunny Gandhi, vice president of political affairs at the AI advocacy organization Encode, pushes back on the common industry talking point that state regulation harms small AI entrepreneurs, noting that bills like California’s SB 1047 and New York’s RAISE Act are specifically designed to target only companies that spend $100 million on compute.

Criticism from the left is perhaps expected. But plenty of Republicans have expressed worries about the provision as well, imperiling its passage. A fellow at the Heritage Foundation came out against the moratorium, as did the Article III Project, a conservative judicial advocacy group, on the grounds that it would allow Big Tech to “run wild.” 

Georgia Republican Marjorie Taylor Greene has been particularly vocal. “I will not vote for any bill that destroys federalism and takes away states’ rights,” she told reporters this month. 

Tennessee Republican Marsha Blackburn has also expressed concern, as she is especially sensitive to worries about artists’ rights given her Nashville base. “We cannot prohibit states across the country from protecting Americans, including the vibrant creative community in Tennessee, from the harms of AI,” Senator Blackburn wrote to TIME in a statement. “For decades, Congress has proven incapable of passing legislation to govern the virtual space and protect vulnerable individuals from being exploited by Big Tech. We need to find consensus on these issues that are so vitally important to the American people.” 

But some Republicans with concerns may nevertheless reluctantly vote the provision through, giving it the numbers it needs to become law. Johnson, from Wisconsin, told TIME that he was “sympathetic” with both arguments. “I’m all about states’ rights, but you can’t have thousands of jurisdictions creating just a chaos of different regulation,” he says. “So you probably do have to have some moratorium. Is 10 years too long? It might be. So maybe I can cut it back to five.” 

—With reporting by Nik Popli

Source: Tech – TIME | 26 Jun 2025 | 3:19 am

What to Know About Tesla’s ‘Robotaxis’ as They Launch in Austin, Texas


Would you take a ride in Tesla’s driverless “robotaxi”?

The company began a test run of a few of the driverless cabs in Austin, Texas, on Sunday, years after Tesla CEO Elon Musk vowed they would be on the road. Tesla is not the first company to embark on a driverless car service—Waymo has more than a thousand such cabs in several cities, including Los Angeles, San Francisco, and Austin. But Tesla’s launch is a long-awaited feat—in 2019, Musk promised to begin operating driverless taxis “next year”—and one that comes at a contentious time for the company and its CEO.


Here’s what to know about Tesla’s robotaxis.

What does the test run entail?

“The @Tesla_AI robotaxi launch begins in Austin this afternoon with customers paying a $4.20 flat fee!” Musk posted on X on Sunday.

He went on to repost videos that passengers had shared of themselves riding in the test cars. One passenger, Dave Lee, posted a video of the robotaxi while he was in the backseat, showing a Tesla employee in the front passenger seat and the empty driver’s seat.

Tesla sent about 10 robotaxis out for test drives, according to news reports. Staff monitored the cars remotely, The Associated Press reported.

A tumultuous time for Tesla

The test runs come at a turbulent time for the company. Earlier this month, Tesla’s stock plummeted about 14.3% in a day, amid a bitter—and very public—falling out between Musk and President Donald Trump.

It was the company’s worst day since March, when its stock fell about 15% as consumers around the world boycotted Tesla products in protest of Musk’s growing role in the Trump Administration.

Tesla’s stock rose about 8% on Monday after the test run of the driverless cabs began.

Will the company’s robotaxis expand beyond Austin?

For now, Tesla’s robotaxis are providing service “in limited areas of Austin,” according to the company’s website. Robotaxis are invite-only at the moment, the company said.

Musk has claimed that Tesla will have hundreds of thousands of self-driving cars in the U.S. by the end of 2026, and that the company has plans to expand robotaxis to other cities, including San Francisco, Los Angeles, and San Antonio.

Are the cars fully self-driving?

Other Tesla vehicles have advertised a “full self-driving” feature. But the system does not mean that the cars are fully autonomous; drivers still need to pay attention to the road because there’s a chance they may need to take control of the car, according to the AP. Federal regulators have also taken issue with Tesla’s self-driving system, opening an investigation into it last year after receiving reports of accidents involving the program, including one that killed a pedestrian.

Musk has said that the robotaxis will operate on a new, more advanced version of the system, adding that the cars will be safe.

Some videos of the test runs on Sunday appeared to show the robotaxis making errors on the road. In one video posted on YouTube, a robotaxi drove in the wrong lane of traffic after abandoning a left turn; fortunately, no other cars were approaching in that lane.

Source: Tech – TIME | 24 Jun 2025 | 9:51 am

Former Scale AI CEO Alexandr Wang on AI’s Potential and Its ‘Deficiencies’


On June 12, Alexandr Wang stepped down as Scale’s CEO to chase his most ambitious moonshot yet: building smarter-than-human AI as head of Meta’s new “superintelligence” division. As part of his move, Meta will invest $14.3 billion for a minority stake in Scale AI, but the real prize isn’t his company—it’s Wang himself. 

Wang, 28, is expected to bring a sense of urgency to Meta’s AI efforts, which this year have been plagued by delays and underwhelming performance. Once the undisputed leader of open-weight AI, the U.S. tech giant has been overtaken by Chinese rivals like DeepSeek on popular benchmarks. Although Wang, who dropped out of MIT at 19, lacks the academic chops of some of his peers, he offers both insight into the types of data Meta’s rivals use to improve their AI systems and unrivaled ambition. Google and OpenAI are both reportedly severing their deals with Scale AI over the Meta investment. Scale declined to comment, but its interim CEO emphasized in a blog post that the company will continue to operate independently.


Big goals are Wang’s thing. By 24, he’d become the world’s youngest self-made billionaire by building Scale into a major player labeling data for the artificial intelligence industry’s giants. “Ambition shapes reality,” reads one of Scale’s core values—a motto Wang crafted. That drive has earned him admiration from OpenAI CEO Sam Altman, who lived in Wang’s apartment for months during the pandemic.

But his relentless ambition has come with trade-offs. He credits Scale’s success to treating data as a “first-class problem,” but that focus didn’t always extend to the company’s army of over 240,000 contract workers, some of whom have faced delayed, reduced, or canceled payments after completing tasks. Lucy Guo, who co-founded Scale but left in 2018 following disagreements with Wang, says it was one of their “clashing points.”

“I was like, ‘we need to focus on making sure they get paid out on time,'” while Wang was more concerned with growth, Guo says. Scale AI has said instances of late-payment are exceedingly rare and that it is constantly improving. 

The stakes of this growth-at-all-costs mindset are rising. Superintelligent AI “would amount to the most precarious technological development since the nuclear bomb,” according to a policy paper Wang co-authored in March with Eric Schmidt, Google’s former CEO, and Dan Hendrycks, the director of the Center for AI Safety. Wang’s new role at Meta makes him an important decision-maker for a technology that leaves no room for error.

TIME spoke to Wang in April, before he stepped down as Scale’s CEO. He discussed his leadership style, how prepared the U.S. is for AGI, and AI’s “deficiencies.”

This interview has been condensed and edited for clarity.

Your leadership style has been described as very in-the-weeds. For example, it’s been reported that you would take a one-on-one call with every new employee even as headcount reached into the hundreds. How has your view of leadership evolved as Scale has grown?

Leadership is a very multifaceted discipline, right? There’s level one—can you accomplish the things that are right in front of you? Level two is: are the things that you’re doing even the right things? Are you pointing the right direction? And then there’s a lot of the level three stuff, which is probably the most important—what’s the culture of the organization? All that kind of stuff. 

I definitely think my approach to leadership is one of very high attention to detail, being very in-the-weeds, being quite focused, instilling a high level of urgency, really trying to ensure that the organization is moving as quickly and as urgently towards the critical problems as possible. 

But also layering in, how do you develop a healthy culture? How do you develop an organization where people are put in positions where they’re able to do their best work, and they’re constantly learning and growing within these environments. When you’re pointed at a mission that is larger than life, then you have the ability to accomplish things that are truly great.

Since a trip to China in 2018, you’ve been outspoken about the threat posed by China’s AI ambitions. Now, particularly in the wake of DeepSeek, this view has become a lot more dominant in Washington. Do you have any other takes regarding AI development that might be kind of fringe now, but will become mainstream in five years or so?

I think, the agentic world—one where businesses and governments are increasingly doing more and more of their economic activity with agents; that humans are more and more just feeling sort of like managers and overseers of those agents; where we’re starting to shift and offload more economic activity onto agents. This is certainly the future, and how we, as a society, undergo that transition with minimum disruption is very, very non-trivial. 

I think it definitely sounds scary when you talk about it, and I think that’s sort of like an indication that it’s not going to be something that’s very easy to accomplish or very easy to do. My belief is, I think that there’s a number of things that we have to build, that we have to get right, that we have to do, to ensure that that transition is smooth. 

I think there’s a lot of excitement and energy put towards this sort of agentic world. And we think it touches every facet of our world. So enterprises will become agentic enterprises. Governments will become agentic governments. Warfare will become agentic warfare. It’s going to deeply cut into everything that we do and there’s a few key pieces, both infrastructure that need to be built, as well as key policy decisions and key decisions [about] how it gets implemented within the economy that are all quite critical.

What’s your assessment of how prepared and how seriously the U.S. government is taking the possibility of “AGI” [artificial general intelligence]?

I think AI is very, very top of mind for the administration, and I think there’s a lot of trying to assess: What is the rate of progress? How quickly are we going to achieve what most people call AGI? Slower timeframe, faster timeframe? In the case where it’s a faster timeframe, what are the right things to repair? I think these are major conversations. 

If you go to Vice President JD Vance’s speech from the Paris AI Action Summit, he speaks explicitly to this, the concept that the current administration is focused on the American worker, and that they will ensure that AI is beneficial to the American worker.

I think as AI continues to progress—I mean, the industry is moving at a breakneck speed—people will take note and take action.

One job that seems ripe for disruption is data annotation itself. We’ve seen in-house AI models used to caption the dataset for OpenAI’s Sora, and at the same time, reasoning models are being trained on synthetic self-play data on defined challenges. Do you think those trends pose a threat of disruption to Scale AI’s data annotation business?

I actually think it’s quite the opposite. If you look at the growth in AI-related jobs around contributing to AI data sets—there are a lot of words for this, but we call them “contributors”—it’s grown exponentially over time. There’s a lot of conversation around whether, as the models get better, the work goes away. The reality is that the work is continuing to grow manyfold, year over year, and you can see this in our growth.

So my expectation actually is, if you draw a line forward, towards an agentic economy, more people actually end up moving towards doing what we’d currently consider AI data work—that’ll be an increasingly large part of the economy. 

Why haven’t we been able to automate AI data work?

Automating AI data work is a little bit of a tautology, because AI data work is meant to make the models better, and so if the models were good at the things they were producing data for, then you wouldn’t need it in the first place. So, fundamentally, AI data is all focused on the areas where the models are deficient. And as AI gets applied into more and more places within the economy, we’re only going to find more deficiencies there. 

You can stand back and squint and the AI models seem really smart, but if you actually try to use them to do any of a number of key workflows in your job, you’d realize they’re quite deficient. And so I think that, as a society, humanity will never cease to find areas in which these models need to improve, and that will drive a continual need for AI data work.

One of Scale’s contributions has been to position itself as a technology company as much as a data company. How have you pulled that off and stood out from the competition?

If you take a big step back, AI progress fundamentally relies on three pillars: data, compute and algorithms. It became very clear that the data was one of the key bottlenecks of this industry. Compute and algorithms were also bottlenecked, but data was sort of right there with them. 

I think before Scale, there weren’t companies that treated data as the first-class problem it really is. With Scale, one of the things that we’ve really done is treat data with the respect that it deserves. We’ve really sought to understand, “How do we solve this problem in the correct way? How do we solve it in the most tech-forward way?”

Once you have these three pillars, you can build applications on top of the data and the algorithms. And so what we’ve built at Scale is the platform that first, underpins the data pillar for the entire industry. Then we’ve also found that with that pillar, we’re able to build on top, and we’re able to help businesses and governments build and deploy AI applications on top of their incredible wealth of data. I think that’s really what set us apart.

Source: Tech – TIME | 22 Jun 2025 | 11:00 pm








