Technology News | Time

Federal Court Upholds Law Requiring Sale or Ban of TikTok in U.S.

A federal appeals court panel on Friday upheld a law that could lead to a ban on TikTok in a few short months, handing a resounding defeat to the popular social media platform as it fights for its survival in the U.S.

The U.S. Court of Appeals for the District of Columbia Circuit ruled that the law, which requires TikTok to break ties with its China-based parent company ByteDance or be banned by mid-January, is constitutional, rebuffing TikTok’s challenge that the statute ran afoul of the First Amendment and unfairly targeted the platform.

“The First Amendment exists to protect free speech in the United States,” said the court’s opinion. “Here the Government acted solely to protect that freedom from a foreign adversary nation and to limit that adversary’s ability to gather data on people in the United States.”

TikTok and ByteDance — another plaintiff in the lawsuit — are expected to appeal to the Supreme Court. Meanwhile, President-elect Donald Trump, who tried to ban TikTok during his first term and whose Justice Department would have to enforce the law, said during the presidential campaign that he is now against a TikTok ban and would work to “save” the social media platform.

The law, signed by President Joe Biden in April, was the culmination of a years-long saga in Washington over the short-form video-sharing app, which the government sees as a national security threat due to its connections to China.

The U.S. has said it’s concerned about TikTok collecting vast swaths of user data, including sensitive information on viewing habits, that could fall into the hands of the Chinese government through coercion. Officials have also warned the proprietary algorithm that fuels what users see on the app is vulnerable to manipulation by Chinese authorities, who can use it to shape content on the platform in a way that’s difficult to detect.

Read More: As a Potential TikTok Ban Looms, Creators Worry About More Than Just Their Bottom Lines

However, a significant portion of the government’s information in the case has been redacted and hidden from the public as well as the two companies.

TikTok, which sued the government over the law in May, has long denied it could be used by Beijing to spy on or manipulate Americans. Its attorneys have accurately pointed out that the U.S. hasn’t provided evidence to show that the company handed over user data to the Chinese government or manipulated content for Beijing’s benefit in the U.S. They have also argued the law is predicated on future risks, which the Department of Justice has emphasized, pointing in part to unspecified action it claims the two companies have taken in the past in response to demands from the Chinese government.

Friday’s ruling came after the appeals court panel heard oral arguments in September.

Some legal experts said at the time that it was challenging to read the tea leaves on how the judges would rule.

In a court hearing that lasted more than two hours, the panel – composed of two Republican-appointed judges and one Democratic appointee – appeared to grapple with how TikTok’s foreign ownership affects its rights under the Constitution and how far the government could go to curtail potential influence from abroad on a foreign-owned platform.

The judges pressed Daniel Tenny, a Department of Justice attorney, on the implications the case could have on the First Amendment. But they also expressed some skepticism at TikTok’s arguments, challenging the company’s attorney – Andrew Pincus – on whether any First Amendment rights preclude the government from curtailing a powerful company subject to the laws and influence of a foreign adversary.

In parts of their questions about TikTok’s ownership, the judges cited wartime precedent that allows the U.S. to restrict foreign ownership of broadcast licenses and asked whether the arguments presented by TikTok would apply if the U.S. were engaged in war.

To assuage concerns about the company’s owners, TikTok says it has invested more than $2 billion to bolster protections around U.S. user data.

The company also argues the government’s broader concerns could have been resolved in a draft agreement it provided the Biden administration more than two years ago during talks between the two sides. It has blamed the government for walking away from further negotiations on the agreement, which the Justice Department argues is insufficient.

Read More: Here’s All the Countries With TikTok Bans as Platform’s Future in U.S. Hangs In Balance

Attorneys for the two companies have claimed it’s impossible to divest the platform commercially and technologically. They also say any sale of TikTok without the coveted algorithm – the platform’s secret sauce that Chinese authorities would likely block under any divestiture plan – would turn the U.S. version of TikTok into an island disconnected from other global content.

Still, some investors, including Trump’s former Treasury Secretary Steven Mnuchin and billionaire Frank McCourt, have expressed interest in purchasing the platform. Both men said earlier this year that they were launching a consortium to purchase TikTok’s U.S. business.

This week, a spokesperson for McCourt’s Project Liberty initiative, which aims to protect online privacy, said unnamed participants in its bid have made informal commitments of more than $20 billion in capital.

TikTok’s lawsuit was consolidated with a second legal challenge brought by several content creators – for which the company is covering legal costs – as well as a third one filed on behalf of conservative creators who work with a nonprofit called BASED Politics Inc.

If TikTok appeals and the courts continue to uphold the law, it would fall on Trump’s Justice Department to enforce it and punish any potential violations with fines. The penalties would apply to app stores that would be prohibited from offering TikTok, and internet hosting services that would be barred from supporting it.

Source: Tech – TIME | 7 Dec 2024 | 4:49 am

OpenAI’s New Ad Shows ‘Reasoning’ AI Making Basic Errors

OpenAI released its most advanced AI model yet, called o1, for paying users on Thursday. The launch kicked off the company’s “12 Days of OpenAI” event—a dozen consecutive releases to celebrate the holiday season.

OpenAI has touted o1’s “complex reasoning” capabilities, and announced on Thursday that unlimited access to the model would cost $200 per month. In the video the company released to show the model’s strengths, a user uploads a picture of a wooden birdhouse and asks the model for advice on how to build a similar one. The model “thinks” for a short period and then spits out what on the surface appears to be a comprehensive set of instructions.

Close examination reveals the instructions to be almost useless. The AI measures the amount of paint, glue, and sealant required for the task in inches. It only gives the dimensions for the front panel of the birdhouse, and no others. It recommends cutting a piece of sandpaper to another set of dimensions, for no apparent reason. And in a separate part of the list of instructions, it says “the exact dimensions are as follows…” and then proceeds to give no exact dimensions.

“You would know just as much about building the birdhouse from the image as you would the text, which kind of defeats the whole purpose of the AI tool,” says James Filus, the director of the Institute of Carpenters, a U.K.-based trade body, in an email. He notes that the list of materials includes nails, but the list of tools required does not include a hammer, and that the cost of building the simple birdhouse would be “nowhere near” the $20-50 estimated by o1. “Simply saying ‘install a small hinge’ doesn’t really cover what’s perhaps the most complex part of the design,” he adds, referring to a different part of the video that purports to explain how to add an opening roof to the birdhouse.

OpenAI did not immediately respond to a request for comment.

It’s just the latest example of an AI product demo doing the opposite of its intended purpose. Last year, a Google advert for an AI-assisted search tool mistakenly said that the James Webb telescope had made a discovery it had not, a gaffe that sent the company’s stock price plummeting. More recently, an updated version of a similar Google tool told early users that it was safe to eat rocks, and that they could use glue to stick cheese to their pizza.

OpenAI’s o1, which according to public benchmarks is its most capable model to date, takes a different approach than ChatGPT for answering questions. It is still essentially a very advanced next-word predictor, trained using machine learning on billions of words of text from the Internet and beyond. But instead of immediately spitting out words in response to a prompt, it uses a technique called “chain of thought” reasoning to essentially “think” about an answer for a period of time behind the scenes, and then gives its answer only after that. This technique often yields more accurate answers than having a model spit out an answer reflexively, and OpenAI has touted o1’s reasoning capabilities—especially when it comes to math and coding. It can answer 78% of PhD-level science questions accurately, according to data that OpenAI published alongside a preview version of the model released in September.

But clearly some basic logical errors can still slip through.

Source: Tech – TIME | 7 Dec 2024 | 3:07 am

TIME Is Looking For the World’s Top EdTech Companies of 2025

In 2025, TIME will once again publish its ranking of the World’s Top EdTech Companies, in partnership with Statista, a leading international provider of market and consumer data and rankings. The list identifies the most innovative, impactful, and fastest-growing companies in EdTech.

Companies that focus primarily on developing and providing education technology are encouraged to submit applications as part of the research phase. An application guarantees consideration for the list, but does not guarantee a spot on the list, nor is the final list limited to applicants.

To apply, click here.

For more information, visit https://www.statista.com/page/ed-tech-rankings. Winners will be announced on TIME.com in April 2025.

Source: Tech – TIME | 3 Dec 2024 | 8:42 am

Intel CEO Pat Gelsinger Retires

Intel CEO Pat Gelsinger has retired, with David Zinsner and Michelle Johnston Holthaus named as interim co-CEOs.

Gelsinger, whose career has spanned more than 40 years, also stepped down from the company’s board. He started at Intel in 1979 and was its first chief technology officer. He returned to Intel as chief executive in 2021.

Intel said Monday that it will conduct a search for a new CEO.

Read More: Intel’s CEO on Turning Skeptics Into Believers

Zinsner is executive vice president and chief financial officer at Intel. Holthaus was appointed to the newly created position of CEO of Intel Products, which includes the Client Computing Group, the Data Center and AI Group, and the Network and Edge Group.

Frank Yeary, independent chair of Intel’s board, will become interim executive chair.

“Pat spent his formative years at Intel, then returned at a critical time for the company in 2021,” Yeary said in a statement. “As a leader, Pat helped launch and revitalize process manufacturing by investing in state-of-the-art semiconductor manufacturing, while working tirelessly to drive innovation throughout the company.”

Last week it was revealed that the Biden administration plans to reduce part of Intel’s $8.5 billion in federal funding for computer chip plants around the country, according to three people familiar with the grant who spoke on the condition of anonymity to discuss private conversations.

The reduction is largely a byproduct of the $3 billion that Intel is also receiving to provide computer chips to the military. President Joe Biden announced the agreement to provide Intel with up to $8.5 billion in direct funding and $11 billion in loans in March.

The changes to Intel’s funding are not related to the company’s financial record or milestones, the people familiar with the grant told The Associated Press. In August, the chipmaker announced that it would cut 15% of its workforce — about 15,000 jobs — in an attempt to turn its business around to compete with more successful rivals like Nvidia and AMD.

Unlike some of its rivals, Intel manufactures chips in addition to designing them.

Shares of the Santa Clara, California-based company jumped more than 4% in premarket trading.

Source: Tech – TIME | 3 Dec 2024 | 3:30 am

Australian Senate Passes Social Media Ban for Under-16s, Will Soon Become World-First Law

MELBOURNE, Australia — A social media ban for children under 16 passed the Australian Parliament on Friday in a world-first law.

The law will make platforms including TikTok, Facebook, Snapchat, Reddit, X and Instagram liable for fines of up to 50 million Australian dollars ($33 million) for systemic failures to prevent children younger than 16 from holding accounts.

The Senate passed the bill on Thursday 34 votes to 19. The House of Representatives on Wednesday overwhelmingly approved the legislation by 102 votes to 13.

The House on Friday endorsed opposition amendments made in the Senate, making the bill law.

Prime Minister Anthony Albanese said the law supported parents concerned by online harms to their children.

“Platforms now have a social responsibility to ensure the safety of our kids is a priority for them,” Albanese told reporters.

The platforms have one year to work out how they could implement the ban before penalties are enforced.

Meta Platforms, which owns Facebook and Instagram, said the legislation had been “rushed.”

Digital Industry Group Inc., an advocate for the platforms in Australia, said questions remain about the law’s impact on children, its technical foundations and scope.

“The social media ban legislation has been released and passed within a week and, as a result, no one can confidently explain how it will work in practice – the community and platforms are in the dark about what exactly is required of them,” DIGI managing director Sunita Bose said.

The amendments passed on Friday bolster privacy protections. Platforms would not be allowed to compel users to provide government-issued identity documents including passports or driver’s licenses, nor could they demand digital identification through a government system.

Critics of the legislation fear that banning young children from social media will impact the privacy of all users who must establish they are older than 16.

While the major parties support the ban, many child welfare and mental health advocates are concerned about unintended consequences.

Sen. David Shoebridge, from the minority Greens party, said mental health experts agreed that the ban could dangerously isolate many children who used social media to find support.

“This policy will hurt vulnerable young people the most, especially in regional communities and especially the LGBTQI community, by cutting them off,” Shoebridge told the Senate.

Exemptions will apply for health and education services including YouTube, Messenger Kids, WhatsApp, Kids Helpline and Google Classroom.

Opposition Sen. Maria Kovacic said the bill was not radical but necessary. “The core focus of this legislation is simple: It demands that social media companies take reasonable steps to identify and remove underage users from their platforms,” Kovacic told the Senate.

“This is a responsibility these companies should have been fulfilling long ago, but for too long they have shirked these responsibilities in favor of profit,” she added.

Online safety campaigner Sonya Ryan, whose 15-year-old daughter Carly was murdered by a 50-year-old pedophile who pretended to be a teenager online, described the Senate vote as a “monumental moment in protecting our children from horrendous harms online.”

“It’s too late for my daughter, Carly, and the many other children who have suffered terribly and those who have lost their lives in Australia, but let us stand together on their behalf and embrace this together,” she said.

Wayne Holdsworth, whose teenage son Mac took his own life after falling victim to an online sextortion scam, had advocated for the age restriction and took pride in its passage.

“I have always been a proud Australian, but for me subsequent to today’s Senate decision, I am bursting with pride,” Holdsworth said.

Christopher Stone, executive director of Suicide Prevention Australia, the governing body for the suicide prevention sector, said the legislation failed to consider positive aspects of social media in supporting young people’s mental health and sense of connection.

“The government is running blindfolded into a brick wall by rushing this legislation. Young Australians deserve evidence-based policies, not decisions made in haste,” Stone said.

The platforms had complained that the law would be unworkable and had urged the Senate to delay the vote until at least June 2025 when a government-commissioned evaluation of age assurance technologies will report on how young children could be excluded.

“Naturally, we respect the laws decided by the Australian Parliament,” Facebook and Instagram owner Meta Platforms said. “However, we are concerned about the process which rushed the legislation through while failing to properly consider the evidence, what industry already does to ensure age-appropriate experiences, and the voices of young people.”

Snapchat said it was also concerned by the law and would cooperate with the government regulator, the eSafety Commissioner.

“While there are many unanswered questions about how this law will be implemented in practice, we will engage closely with the Government and the eSafety Commissioner during the 12-month implementation period to help develop an approach that balances privacy, safety and practicality. As always, Snap will comply with any applicable laws and regulations in Australia,” Snapchat said in a statement.

Critics argue the government is attempting to convince parents it is protecting their children ahead of a general election due by May. The government hopes that voters will reward it for responding to parents’ concerns about their children’s addiction to social media. Some argue the legislation could cause more harm than it prevents.

Criticisms include that the legislation was rushed through Parliament without adequate scrutiny, is ineffective, poses privacy risks for all users, and undermines the authority of parents to make decisions for their children.

Opponents also argue the ban would isolate children, deprive them of the positive aspects of social media, drive them to the dark web, discourage children too young for social media to report harm, and reduce incentives for platforms to improve online safety.

—AP Business Writer Kelvin Chan in London contributed to this report.

Source: Tech – TIME | 29 Nov 2024 | 2:51 am

Australia’s Social Media Ban for Children Is Closer to Becoming Law. Here’s What to Know

MELBOURNE, Australia — Australia’s House of Representatives on Wednesday passed a bill that would ban children younger than 16 years old from social media, leaving it to the Senate to finalize the world-first law.

The major parties backed the bill that would make platforms including TikTok, Facebook, Snapchat, Reddit, X and Instagram liable for fines of up to 50 million Australian dollars ($33 million) for systemic failures to prevent young children from holding accounts.

The legislation passed 102 to 13. If the bill becomes law this week, the platforms would have one year to work out how to implement the age restrictions before the penalties are enforced.

Opposition lawmaker Dan Tehan told Parliament the government had agreed to accept amendments in the Senate that would bolster privacy protections. Platforms would not be allowed to compel users to provide government-issued identity documents including passports or driver’s licenses, nor could they demand digital identification through a government system.

“Will it be perfect? No. But is any law perfect? No, it’s not. But if it helps, even if it helps in just the smallest of ways, it will make a huge difference to people’s lives,” Tehan told Parliament.

The bill was introduced to the Senate late Wednesday, but the chamber adjourned for the day hours later without putting it to a vote. The legislation will likely be passed on Thursday, the Parliament’s final session for the year and potentially the last before elections, which are due within months.

The major parties’ support all but guarantees the legislation will pass in the Senate, where no party holds a majority of seats.

Lawmakers who were not aligned with either the government or the opposition were most critical of the legislation during debate on Tuesday and Wednesday.

Criticisms include that the legislation had been rushed through Parliament without adequate scrutiny, would not work, would create privacy risks for users of all ages and would take away parents’ authority to decide what’s best for their children.

Critics also argue the ban would isolate children, deprive them of positive aspects of social media, drive children to the dark web, make children too young for social media reluctant to report harms encountered, and take away incentives for platforms to make online spaces safer.

Independent lawmaker Zoe Daniel said the legislation would “make zero difference to the harms that are inherent to social media.”

“The true object of this legislation is not to make social media safe by design, but to make parents and voters feel like the government is doing something about it,” Daniel told Parliament.

“There is a reason why the government parades this legislation as world-leading, that’s because no other country wants to do it,” she added.

The platforms had asked for the vote to be delayed until at least June next year when a government-commissioned evaluation of age assurance technologies made its report on how the ban could be enforced.

Melbourne resident Wayne Holdsworth, whose 17-year-old son Mac took his own life last year after falling victim to an online sextortion scam, described the bill as “absolutely essential for the safety of our children.”

“It’s not the only thing that we need to do to protect them because education is the key, but to provide some immediate support for our children and parents to be able to manage this, it’s a great step,” the 65-year-old online safety campaigner told The Associated Press on Tuesday.

“And in my opinion, it’s the greatest time in our country’s history,” he added, referring to the pending legal reform.

Source: Tech – TIME | 28 Nov 2024 | 12:59 am

Australia Is Moving to Ban Children From Social Media. Will It Work?

MELBOURNE, Australia — How do you remove children from the harms of social media? Politically the answer appears simple in Australia, but practically the solution could be far more difficult.

The Australian government’s plan to ban children from social media platforms including X, TikTok, Facebook and Instagram until their 16th birthdays is politically popular. The opposition party says that, had the government not moved first, it would have done the same after winning the elections due within months.

The leaders of all eight Australian states and mainland territories have unanimously backed the plan, although Tasmania, the smallest state, would have preferred a threshold of 14.

But a vocal assortment of experts in the fields of technology and child welfare have responded with alarm. More than 140 such experts signed an open letter to Prime Minister Anthony Albanese condemning the 16-year age limit as “too blunt an instrument to address risks effectively.”

Details of how it will be implemented are scant. Lawmakers debated the bill in parliament this week, and it was expected to be passed into law with the support of major parties.

Here’s a look at how some Australians are viewing the issue.

The concerned teen

Leo Puglisi, a 17-year-old Melbourne student who founded online streaming service 6 News Australia at the age of 11, worries that lawmakers imposing the ban don’t understand social media as well as young people at home in the digital age.

“With respect to the government and prime minister, they didn’t grow up in the social media age, they’re not growing up in the social media age, and what a lot of people are failing to understand here is that, like it or not, social media is a part of people’s daily lives,” Leo said.

“It’s part of their communities, it’s part of work, it’s part of entertainment, it’s where they watch content – young people aren’t listening to the radio or reading newspapers or watching free-to-air TV – and so it can’t be ignored. The reality is this ban, if implemented, is just kicking the can down the road for when a young person goes on social media,” Leo added.

Leo has been applauded for his work online. He was a finalist in his home state Victoria’s nomination for the Young Australian of the Year award, which will be announced in January. His nomination bid credits his platform with “fostering a new generation of informed, critical thinkers.”

The grieving mom-turned-activist

One of the proposal’s supporters, cyber safety campaigner Sonya Ryan, knows personally how dangerous social media can be for children.

Her 15-year-old daughter Carly Ryan was murdered in 2007 in South Australia state by a 50-year-old pedophile who pretended to be a teenager online. In a grim milestone of the digital age, Carly was the first person in Australia to be killed by an online predator.

“Kids are being exposed to harmful pornography, they’re being fed misinformation, there are body image issues, there’s sextortion, online predators, bullying. There are so many different harms for them to try and manage and kids just don’t have the skills or the life experience to be able to manage those well,” Sonya Ryan said.

“The result of that is we’re losing our kids. Not only what happened to Carly, predatory behavior, but also we’re seeing an alarming rise in suicide of young people,” she added.

Sonya Ryan is part of a group advising the government on a national strategy to prevent and respond to child sexual abuse in Australia.

She wholeheartedly supports Australia setting the social media age limit at 16.

“We’re not going to get this perfect,” she said. “We have to make sure that there are mechanisms in place to deal with what we already have which is an anxious generation and an addicted generation of children to social media.”

A major concern for social media users of all ages is the legislation’s potential privacy implications.

Age estimation technology has proved inaccurate, so digital identification appears to be the most likely option for assuring a user is at least 16.

The skeptical internet expert

Tama Leaver, professor of internet studies at Curtin University, fears that the government will make the platforms hold the users’ identification data.

The government has already said the onus will be on the platforms, rather than on children or their parents, to ensure everyone meets the age limit.

“The worst possible outcome seems to be the one that the government may be inadvertently pushing towards, which would be that the social media platforms themselves would end up being the identity arbiter,” Leaver said.

“They would be the holder of identity documents which would be absolutely terrible because they have a fairly poor track record so far of holding on to personal data well,” he added.

The platforms will have a year once the legislation has become law to work out how the ban can be implemented.

Ryan, who divides her time between Adelaide in South Australia and Fort Worth, Texas, said privacy concerns should not stand in the way of removing children from social media.

“What is the cost if we don’t? If we don’t put the safety of our children ahead of profit and privacy?” she asked.

Source: Tech – TIME | 27 Nov 2024 | 12:03 am

Where Trump 2.0 Might Look Very Different From Trump 1.0

As he prepares for his second term as President, Donald Trump’s approach on some issues is poised to mirror that of his first term. He’s set to once again increase tariffs on imported goods and beef up border enforcement. 

But in some areas, Trump 2.0 is likely to look very different from Trump 1.0.

After taking credit for spearheading the development of COVID-19 vaccines in 2020, Trump now plans to bring an anti-vaxxer into his cabinet. He’s gotten over his early skepticism of Bitcoin, and now wants to strip away regulations and guardrails on cryptocurrency. And after trying to ban TikTok four years ago, Trump now promises to “save” the app.

As Trump prepares to take office in late January, here’s a look at key policies where Trump’s changed his tune.

Cryptocurrency

It’s no secret that Trump was skeptical of cryptocurrency during his first term. He repeatedly criticized Bitcoin and other digital assets, dismissing the volatile, speculative assets as “not money” and “a scam.” Trump instead championed the U.S. dollar as the nation’s only legitimate currency. It’s a position he maintained even after leaving office, saying in 2021 that cryptocurrencies seemed like a “disaster waiting to happen.”

But as Trump prepares for his return to the White House, he now stands as one of crypto’s most vocal proponents. He has pledged to make the U.S. the “crypto capital of the planet” and to establish a national cryptocurrency reserve. Bitcoin prices spiked after Trump’s victory. 

Trump’s pivot was fueled by several factors, including the growing clout of the cryptocurrency industry in Washington. Once seen as a fringe element, the crypto sector now boasts substantial financial influence. The industry’s top players poured millions into political campaigns, and in particular those supporting Trump. Super PACs aligned with the industry spent $131 million in the 2024 election cycle, helping to elect pro-crypto lawmakers across the country. The efforts were motivated by a single, unifying goal: to push for a more crypto-friendly regulatory environment.

In return, Trump has promised that under his leadership, cryptocurrency would not just survive but thrive. He has vowed to remove Gary Gensler, the chairman of the Securities and Exchange Commission (SEC), whose aggressive stance against the crypto industry has made him unpopular among crypto advocates. (Gensler announced on Thursday he would step down on Jan. 20, allowing Trump to immediately appoint a successor.) Trump has proposed restructuring the SEC to take a softer approach toward digital assets. For some crypto companies currently being sued or investigated by the agency, that could mean their cases get dropped.

Trump’s shift coincided with significant backing from prominent figures in the crypto world, including Elon Musk, the Tesla CEO who has crypto investments. Industry leaders have been lobbying Trump for a regulatory framework that would establish clearer rules for crypto and prevent its migration overseas, as some foreign markets have proven more accommodating to digital assets. In September, Trump and his family launched World Liberty Financial, a cryptocurrency venture that will likely further entangle his business interests with the burgeoning digital currency sector. 

While his newfound enthusiasm for cryptocurrency has earned him praise from crypto advocates, it remains to be seen whether his promises will translate into concrete policy changes in his second term. The crypto industry, once rocked by the implosion of companies like FTX, now faces a complex regulatory future, with ongoing debates over how much oversight is necessary without stifling innovation. 

TikTok

In his first term, Trump was a staunch opponent of TikTok, the Chinese-owned social media giant. He sought to ban the app from the U.S. on national security grounds. Now, as he prepares for a second term, Trump has reversed himself, vowing to protect TikTok from a looming U.S. ban.

Trump’s initial attack on TikTok began in 2020, as his Administration accused the app of enabling the Chinese government to collect sensitive data on U.S. users. In an executive order, Trump declared that TikTok posed a national emergency, citing concerns about espionage and the app’s potential use to “track the locations of Federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage.” His administration pushed for a sale of TikTok’s U.S. operations, hoping to force Chinese conglomerate ByteDance to divest its ownership. 

Despite intense legal battles, including a failed attempt to orchestrate a sale, Trump’s efforts to ban the app failed. But bipartisan concerns about TikTok lingered, and in April Congress passed a law, which President Biden signed, mandating that ByteDance sell TikTok by January 2025 or face a nationwide ban. During his campaign, Trump promised to intervene on the app’s behalf, saying he would allow it to continue operating freely. “For all of those who want to save TikTok in America, vote for Trump. The other side is closing it up, but I’m now a big star on TikTok,” Trump said in a Sept. 4 video posted on Truth Social. The Washington Post reported this month that he is expected to intervene next year to block a ban on TikTok if necessary.

Trump’s pivot on TikTok may be his most dramatic. But his position stands in contrast to that of many Republicans, who still view the app with suspicion and warn that it remains a potential security threat. Even so, Trump’s policy shift does not guarantee that TikTok will be safe from government action. Lawmakers, particularly national security hawks, may push for the ban to go ahead. Republican Sen. Marco Rubio, Trump’s pick for Secretary of State, has previously called for a complete ban on TikTok, describing it as a “spyware” tool for China. Trump’s pick to run the Federal Communications Commission, Brendan Carr, told TIME in November 2022 that he does “not see a path forward that’s short of a ban” and that TikTok poses a “huge national security concern.”

Vaccines

The final year of Trump’s first term was dominated by the pandemic. Trump oversaw Operation Warp Speed, a historic initiative that expedited the development and distribution of COVID-19 vaccines, saving millions of lives. But as vaccine skepticism grew among many of his supporters, his rhetoric shifted. 

The extent of that shift became clear this month, when he announced that his Health and Human Services Secretary would be Robert F. Kennedy Jr., who has a history of spreading misinformation on vaccines, including promoting the debunked claim that they are linked to autism. The pick has some experts worried that Trump will allow Kennedy to discourage people from receiving the same vaccines that Trump once championed. 

After leaving office in 2021, Trump distanced himself from promoting the vaccines his administration helped develop. He has also previously pledged to cut funding to schools with vaccine mandates, including those for childhood diseases like polio and measles. He has suggested that people should not be forced to take vaccines, framing his approach as a defense of individual freedom, which stands in stark contrast to his Administration’s early push for widespread vaccination during the COVID-19 pandemic.

Kennedy has said that the Trump Administration does not plan to take vaccines off the market, despite widespread speculation that he might. Yet public health experts fear even installing Kennedy in such a high-profile role could lend his views on vaccines more legitimacy and erode immunization rates.

Department of Education

Days after winning the election, Trump released a video announcing the Department of Education’s days were numbered. “One other thing I’ll be doing very early in the administration is closing up the Department of Education in Washington, D.C. and sending all education and education work and needs back in the states.”

The video marked an escalation of Trump’s long-standing efforts to shrink the federal government’s role in education. In his first term, Trump proposed merging the Department of Education with the Labor Department, but couldn’t get Congress on board. Now his goal is to shutter the department entirely over the next four years.

The Department of Education was set up in 1980 during the final year of the Carter Administration. Its main function is to direct funds that Congress allocates to local schools and universities. It has no role in setting curriculum or deciding issues of enrollment, which lie with states and local school boards. 

For his Education Secretary, Trump has picked Linda McMahon, the co-founder of World Wrestling Entertainment, who served as head of the U.S. Small Business Administration during Trump’s first term. She is an advocate for making it easier for states to use education funding for private schools and homeschooling. 

Even as he prepares to take office with Republicans in control of the House and Senate, closing the department outright remains unlikely. Doing so would require 60 votes in the Senate—which would require support from some Democrats—or a suspension of the filibuster rules to allow a simple majority vote, which the incoming Republican leaders have ruled out.

Affordable Care Act

During the 2024 election, Trump appeared to step away from his multi-year effort to eliminate the Affordable Care Act. In 2016, he had campaigned on ending the law, also known as Obamacare, calling it a “disaster.” As President, Trump supported repeated efforts by Republicans in Congress to kill the ACA, and his Administration asked the Supreme Court to strike down the law, though the court dismissed the challenge. 

Trump also worked to undermine Obamacare, scaling back outreach efforts to enroll people in the subsidized health plans. In the four years Trump was President, the number of uninsured Americans rose by 2.3 million. 

But during his 2024 campaign, Trump said he no longer supported a direct repeal of the ACA. In March, Trump wrote on Truth Social that he is “not running to terminate” the ACA, and that he wants to make it “better.” Other times, his stance on the law was difficult to parse. During his debate with Harris in September, he said he has the “concepts of a plan” to replace the ACA, but didn’t give more detail. 

The law remains popular. A KFF tracking poll in April found that 62% of Americans have a favorable view of the ACA. More than 45 million Americans are enrolled in medical insurance plans made cheaper and more accessible by the law. The law also forbids insurers from rejecting customers who have existing medical conditions. 

Trump’s true position will be tested next year, when subsidies for low-income enrollees in ACA health plans expire and Congress must decide whether to renew them. House Speaker Mike Johnson said in October that the ACA needs “massive reform.”

Source: Tech – TIME | 26 Nov 2024 | 12:00 am

Has AI Progress Really Slowed Down?

For over a decade, companies have bet on a tantalizing rule of thumb: that artificial intelligence systems would keep getting smarter if only they found ways to continue making them bigger. This wasn’t merely wishful thinking. In 2017, researchers at Chinese technology firm Baidu demonstrated that pouring more data and computing power into machine learning algorithms yielded mathematically predictable improvements—regardless of whether the system was designed to recognize images or speech, or to generate language. Noticing the same trend, OpenAI in 2020 coined the term “scaling laws,” which has since become a touchstone of the industry.
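What makes these improvements “mathematically predictable” is that they follow a power law: plotted on log-log axes, loss versus compute forms a near-straight line whose slope can be extrapolated. A minimal sketch of the idea, using invented constants rather than figures from either paper:

```python
import math

# Toy power-law scaling curve: L(C) = a * C**(-b) + loss_floor.
# All constants are invented for illustration; they are not taken
# from the Baidu or OpenAI papers.
a, b, loss_floor = 10.0, 0.05, 1.7

computes = [10.0 ** e for e in range(20, 27)]           # hypothetical FLOP budgets
losses = [a * c ** (-b) + loss_floor for c in computes]  # simulated training loss

# In log-log space the law is a straight line:
#   log(L - floor) = log(a) - b * log(C)
# so the exponent b can be recovered from any two observed points.
x1, x2 = math.log(computes[0]), math.log(computes[-1])
y1, y2 = (math.log(l - loss_floor) for l in (losses[0], losses[-1]))
recovered_b = -(y2 - y1) / (x2 - x1)
print(f"recovered exponent: {recovered_b:.3f}")  # ≈ 0.050
```

Because the relationship is linear in log-log space, a lab could fit the slope on a handful of small training runs and forecast the loss of a far larger one, which is what made the bigger-is-better bet look mathematically grounded rather than speculative.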


This thesis prompted AI firms to bet hundreds of millions on ever-larger computing clusters and datasets. The gamble paid off handsomely, transforming crude text machines into today’s articulate chatbots.

But now, that bigger-is-better gospel is being called into question. 

Last week, reports by Reuters and Bloomberg suggested that leading AI companies are experiencing diminishing returns on scaling their AI systems. Days earlier, The Information reported doubts at OpenAI about continued advancement after the unreleased Orion model failed to meet expectations in internal testing. The co-founders of Andreessen Horowitz, a prominent Silicon Valley venture capital firm, have echoed these sentiments, noting that increasing computing power is no longer yielding the same “intelligence improvements.” 

What are tech companies saying?

Still, many leading AI companies seem confident that progress is marching full steam ahead. In a statement, a spokesperson for Anthropic, developer of the popular chatbot Claude, said “we haven’t seen any signs of deviations from scaling laws.” OpenAI declined to comment. Google DeepMind did not respond to a request for comment. However, last week, after an experimental new version of Google’s Gemini model took GPT-4o’s top spot on a popular AI-performance leaderboard, the company’s CEO, Sundar Pichai, posted on X: “more to come.”

Read more: The Researcher Trying to Glimpse the Future of AI

Recent releases paint a somewhat mixed picture. Anthropic has updated its mid-sized model, Sonnet, twice since its release in March, making it more capable than the company’s largest model, Opus, which has not received such updates. In June, the company said Opus would be updated “later this year,” but last week, speaking on the Lex Fridman podcast, co-founder and CEO Dario Amodei declined to give a specific timeline. Google updated its smaller Gemini Pro model in February, but the company’s larger Gemini Ultra model has yet to receive an update. OpenAI’s recently released o1-preview model outperforms GPT-4o in several benchmarks, but in others it falls short. o1-preview was reportedly called “GPT-4o with reasoning” internally, suggesting the underlying model is similar in scale to GPT-4. 

Parsing the truth is complicated by competing interests on all sides. If Anthropic cannot produce more powerful models, “we’ve failed deeply as a company,” Amodei said last week, offering a glimpse at the stakes for AI companies that have bet their futures on relentless progress. A slowdown could spook investors and trigger an economic reckoning. Meanwhile, Ilya Sutskever, OpenAI’s former chief scientist and once an ardent proponent of scaling, now says performance gains from bigger models have plateaued. But his stance carries its own baggage: Sutskever’s new AI startup, Safe Superintelligence Inc., launched in June with less funding and computational firepower than its rivals. A breakdown in the scaling hypothesis would conveniently help level the playing field.

“They had these things they thought were mathematical laws and they’re making predictions relative to those mathematical laws and the systems are not meeting them,” says Gary Marcus, a leading voice on AI, and author of several books including Taming Silicon Valley. He says the recent reports of diminishing returns suggest we have finally “hit a wall”—something he’s warned could happen since 2022. “I didn’t know exactly when it would happen, and we did get some more progress. Now it seems like we are stuck,” he says.

Have we run out of data?

A slowdown could be a reflection of the limits of current deep learning techniques, or simply that “there’s not enough fresh data anymore,” Marcus says. It’s a hypothesis that has gained ground among some following AI closely. Sasha Luccioni, AI and climate lead at Hugging Face, says there are limits to how much information can be learned from text and images. She points to how people are more likely to misinterpret your intentions over text messaging, as opposed to in person, as an example of text data’s limitations. “I think it’s like that with language models,” she says. 

The lack of data is particularly acute in certain domains like reasoning and mathematics, where we “just don’t have that much high quality data,” says Ege Erdil, senior researcher at Epoch AI, a nonprofit that studies trends in AI development. That doesn’t mean scaling is likely to stop—just that scaling alone might be insufficient. “At every order of magnitude scale up, different innovations have to be found,” he says, noting that it does not mean AI progress will slow overall. 

Read more: Is AI About to Run Out of Data? The History of Oil Says No

It’s not the first time critics have pronounced scaling dead. “At every stage of scaling, there are always arguments,” Amodei said last week. “The latest one we have today is, ‘we’re going to run out of data, or the data isn’t high quality enough, or models can’t reason’ … I’ve seen the story happen enough times to really believe that probably the scaling is going to continue,” he said. Reflecting on OpenAI’s early days on Y Combinator’s podcast, company CEO Sam Altman partially credited the company’s success to a “religious level of belief” in scaling—a concept he says was considered “heretical” at the time. In response to a recent post on X from Marcus saying his predictions of diminishing returns were right, Altman replied: “there is no wall.”

There could be another reason we are hearing reports of new models failing to meet internal expectations, says Jaime Sevilla, director of Epoch AI. Following conversations with people at OpenAI and Anthropic, he came away with a sense that expectations had been extremely high. “They expected AI was going to be able to, already write a PhD thesis,” he says. “Maybe it feels a bit… anti-climactic.”

A temporary lull does not necessarily signal a wider slowdown, Sevilla says. History shows significant gaps between major advances: GPT-4, released just 19 months ago, itself arrived 33 months after GPT-3. “We tend to forget that GPT-3 to GPT-4 was like 100x scale in compute,” Sevilla says. “If you want to do something like 100 times bigger than GPT-4, you’re gonna need up to a million GPUs.” That is bigger than any known cluster currently in existence, though he notes that there have been concerted efforts to build AI infrastructure this year, such as Elon Musk’s 100,000-GPU supercomputer in Memphis—the largest of its kind—which was reportedly built from start to finish in three months. 
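Sevilla’s million-GPU figure can be sanity-checked with back-of-the-envelope arithmetic. Every number below is an assumption chosen for illustration (a rough outside estimate of GPT-4’s training compute, an assumed accelerator throughput, utilization, and run length), not a figure from Epoch AI or any lab:

```python
# Back-of-the-envelope check of the "million GPUs" claim.
# All inputs are illustrative assumptions, not reported figures.
gpt4_flops = 2e25          # rough outside estimate of GPT-4 training compute
scale_up = 100             # "100 times bigger than GPT-4"
gpu_peak_flops = 1e15      # ~1 PFLOP/s peak for a modern accelerator (assumed)
utilization = 0.3          # realistic fraction of peak sustained in training (assumed)
run_seconds = 90 * 86400   # a three-month training run

target_flops = gpt4_flops * scale_up
gpus_needed = target_flops / (gpu_peak_flops * utilization * run_seconds)
print(f"{gpus_needed:,.0f} GPUs")  # several hundred thousand at these assumptions
```

Small changes to the assumed utilization or run length push the answer toward or past a million, which is why estimates of this kind are quoted as an upper range rather than a precise count.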

In the interim, AI companies are likely exploring other methods to improve performance after a model has been trained. OpenAI’s o1-preview has been heralded as one such example, which outperforms previous models on reasoning problems by being allowed more time to think. “This is something we already knew was possible,” Sevilla says, gesturing to an Epoch AI report published in July 2023. 

Read more: Elon Musk’s New AI Data Center Raises Alarms Over Pollution

Policy and geopolitical implications

Prematurely diagnosing a slowdown could have repercussions beyond Silicon Valley and Wall Street. The perceived speed of technological advancement following GPT-4’s release prompted an open letter calling for a six-month pause on the training of larger systems to give researchers and governments a chance to catch up. The letter garnered over 30,000 signatories, including Musk and Turing Award recipient Yoshua Bengio. It’s an open question whether a perceived slowdown could have the opposite effect, causing AI safety to slip from the agenda.

Much of the U.S.’s AI policy has been built on the belief that AI systems would continue to balloon in size. A provision in Biden’s sweeping executive order on AI, signed in October 2023 (and expected to be repealed by the Trump White House), required AI developers to share information with the government regarding models trained using computing power above a certain threshold. That threshold was set above the largest models available at the time, under the assumption that it would target future, larger models. This same assumption underpins export restrictions (restrictions on the sale of AI chips and technologies to certain countries) designed to limit China’s access to the powerful semiconductors needed to build large AI models. However, if breakthroughs in AI development begin to rely less on computing power and more on factors like better algorithms or specialized techniques, these restrictions may have a smaller impact on slowing China’s AI progress.

“The overarching thing that the U.S. needs to understand is that to some extent, export controls were built on a theory of timelines of the technology,” says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. In a world where the U.S. “stalls at the frontier,” he says, we could see a national push to drive breakthroughs in AI. He says a slip in the U.S.’s perceived lead in AI could spur a greater willingness to negotiate with China on safety principles.

Whether we’re seeing a genuine slowdown or just another pause ahead of a leap remains to be seen. “It’s unclear to me that a few months is a substantial enough reference point,” Singer says. “You could hit a plateau and then hit extremely rapid gains.”

Source: Tech – TIME | 22 Nov 2024 | 6:53 am

Landmark Bill to Ban Children From Social Media Introduced in Australia’s Parliament

MELBOURNE — Australia’s communications minister introduced a world-first law into Parliament on Thursday that would ban children under 16 from social media, saying online safety was one of parents’ toughest challenges.

Michelle Rowland said TikTok, Facebook, Snapchat, Reddit, X and Instagram were among the platforms that would face fines of up to 50 million Australian dollars ($33 million) for systemic failures to prevent young children from holding accounts.

“This bill seeks to set a new normative value in society that accessing social media is not the defining feature of growing up in Australia,” Rowland told Parliament.


“There is wide acknowledgement that something must be done in the immediate term to help prevent young teens and children from being exposed to streams of content unfiltered and infinite,” she added.

X owner Elon Musk warned that Australia intended to go further, posting on his platform: “Seems like a backdoor way to control access to the Internet by all Australians.”

The bill has wide political support. After it becomes law, the platforms would have one year to work out how to implement the age restriction.

“For too many young Australians, social media can be harmful,” Rowland said. “Almost two-thirds of 14- to 17-year-old Australians have viewed extremely harmful content online, including drug abuse, suicide or self-harm, as well as violent material. One quarter have been exposed to content promoting unsafe eating habits.”

Government research found that 95% of Australian caregivers find online safety to be one of their “toughest parenting challenges,” she said. Social media companies had a social responsibility and could do better to address harms on their platforms, she added.

“This is about protecting young people, not punishing or isolating them, and letting parents know that we’re in their corner when it comes to supporting their children’s health and wellbeing,” Rowland said.

Read More: Teens Are Stuck on Their Screens. Here’s How to Protect Them

Child welfare and internet experts have raised concerns about the ban, including isolating 14- and 15-year-olds from their already established online social networks.

Rowland said there would not be age restrictions placed on messaging services, online games or platforms that substantially support the health and education of users.

“We are not saying risks don’t exist on messaging apps or online gaming. While users can still be exposed to harmful content by other users, they do not face the same algorithmic curation of content and psychological manipulation to encourage near-endless engagement,” she said.

The government announced last week that a consortium led by British company Age Check Certification Scheme has been contracted to examine various technologies to estimate and verify ages.

In addition to removing children under 16 from social media, Australia is also looking for ways to prevent children under 18 from accessing online pornography, a government statement said.

Age Check Certification Scheme’s chief executive Tony Allen said Monday the technologies being considered included age estimation and age inference. Inference involves establishing a series of facts about individuals that point to them being at least a certain age.

Rowland said the platforms would also face fines of up to AU$50 million ($33 million) if they misused personal information of users gained for age-assurance purposes.

Information used for age assurances must be destroyed after serving that purpose unless the user consents to it being kept, she said.

Digital Industry Group Inc., an advocate for the digital industry in Australia, said with Parliament expected to vote on the bill next week, there might not be time for “meaningful consultation on the details of the globally unprecedented legislation.”

“Mainstream digital platforms have strict measures in place to keep young people safe, and a ban could push young people on to darker, less safe online spaces that don’t have safety guardrails,” DIGI managing director Sunita Bose said in a statement. “A blunt ban doesn’t encourage companies to continually improve safety because the focus is on keeping teenagers off the service, rather than keeping them safe when they’re on it.”

Source: Tech – TIME | 21 Nov 2024 | 8:30 pm
