Technology News | Time

The Biggest Changes in Microsoft Windows 11

Microsoft’s Windows 11 is finally here—six years after the launch of its predecessor. The operating system promises larger changes (like Android apps) down the road, but longtime Windows users should have little trouble adjusting to the only slightly altered layout, like the now-centered taskbar and simplified widget bar.

The biggest change to Windows 11 is its more stringent hardware requirements. If your PC is recent, say, from within the past five years, you should be all right, but older devices may be stuck with Windows 10 for the foreseeable future. That’s not to say the company won’t keep providing security updates for Windows 10, but that’s about as much as you can hope for. Since making the controversial hardware decision—implemented for both performance and security reasons—Microsoft has added Windows 11 support for some older computers, but the company still recommends against installing it on them.

Future updates to Windows 11 are scheduled to bring the hotly anticipated feature of Android apps to the OS, thanks to a partnership with Amazon that brings its Android app store into Microsoft’s own store. Right now, the feature is only available to users of the experimental “insider” version of Windows 11.

Still, after using it for a few days, it’s quite clear Windows 11 is a marked improvement over its predecessor, refining already useful features like window management and adding extras such as built-in video conferencing tools and improved support for high-end games. Windows 11 is a productivity machine, no pun intended. So what can you expect from the upgrade? And is it worth potentially buying a new PC to experience it? That depends on how you feel about some of the biggest changes coming to Microsoft’s primary OS.

 


The new taskbar keeps clutter at bay

Instead of the somewhat clunky tiling system found in the previous iteration of the iconic taskbar, Windows 11 peels away the cruft and gives you what you need. In my case, that’s a list of recently accessed files (both local and in the cloud) along with a tray of pinned apps for easy access, with a universal search bar at the top for quick web (or on-device) searches. The clean lines and the use of widgets to display information like weather, news, and photos are a welcome change of pace from the busy screen in Windows 10.


Windows handles windows even better

The improved Snap Layouts and Snap Groups features let you easily manage and resize windows on your monitor, as well as keep apps you need to use simultaneously grouped together. Hover your mouse over the maximize button of the app of your choice to see the layout options, ranging from a side-by-side arrangement to a four-app grid. You can still grab windows and pull them to the edge of your screen for basic window management, but if you’re working from home or using more than one monitor, the easy window organizing can keep your desktop uncluttered and your eyes on the task at hand.

Smartphone aesthetics and accessibility options

The new Windows OS takes cues from its smartphone relatives, simplifying basic settings changes and making them easy to access. One click or tap in the corner of your taskbar pulls up a control panel similar to Apple’s Control Center, which lets you futz with settings like brightness, volume, connectivity, and more. Windows apps now feature more aesthetically pleasing curved corners, and the Settings app has more options to change how you interact with the OS thanks to more accessibility features. New sounds and audio cues are available for blind users, and themes for people with light sensitivity or those working long hours have been updated to be easier on the eyes.

For anyone interested in using closed captions, Windows 11 allows for further customization and preset closed captioning themes for easier reading.

Voice Typing, the successor to the voice dictation tool in Windows 10, can now automatically add punctuation to what was once a stream-of-consciousness-like experience. The option is available wherever you can input text, and is easy to pull up with a keyboard shortcut (and dismiss with a voice command). English, French, German, Italian, Portuguese, Spanish, and Simplified Chinese are supported.

Xbox features on your PC

Windows 11 brings to the PC some game-friendly features already available on the company’s Xbox line of game consoles. Auto HDR adds high dynamic range to games on PCs with supported hardware, while DirectStorage lets games load data from fast storage drives more directly to the graphics card, cutting load times. Paired with Game Pass, Xbox’s game subscription service, these features could turn your PC into your preferred entertainment device.


The app store finally makes sense

A more coherent and organized app store is the company’s effort to resolve something especially frustrating on nontraditional Windows devices like the ARM-powered Surface Pro X. Now you can easily see which apps are most compatible with your device—not just Microsoft’s preferred Universal Windows Apps—and the store can carry apps from third-party app stores. The Microsoft Store can also manage the installation of apps found on the web, meaning you can now manage all your apps from one place.

Note-taking is more pleasing

For those in love with the tactile note-taking power of Windows and the Surface Pen, Windows 11 brings some minor tweaks to the must-have feature that make it feel more like you’re writing on a sheet of paper. Haptic feedback means you’ll get some physical sensation when drawing thick lines on your screen or checking a box with your pen, and barely anything when doing more delicate work like sketching. The Ink Workspace now lets you add apps of your choice rather than the usual Whiteboard and Snipping Tool, so you’ll have quick access to your creativity tools as soon as you pull the pen off your Surface device.

Business-friendly security is standard

The security-focused Trusted Platform Module (TPM), a chip available on recent devices, handles tasks like data encryption and underpins authentication features like Windows Hello and other biometric logins, so you don’t need to unlock your PC with a USB security key. This is why you need a newer computer to run Windows 11 at full capacity. It might be a hassle for some users of custom-built PCs or older machines, but the requirement enables the company to chase some ambitious goals, like ending passwords for Microsoft accounts in favor of authentication apps, verification codes sent to secure devices, or biometrics.

Source: Tech – TIME | 16 Oct 2021 | 2:11 am

Inside the World of Black Bitcoin, Where Crypto Is About Making More Than Just Money

At the Black Blockchain Summit, there is almost no conversation about making money that does not carry with it the possibility of liberation.

This is not simply a gathering for those who would like to ride whatever bumps and shocks, gains and losses come with cryptocurrency. It is a space for discussing the relationship between money and man, the powers that be and what they have done with power. Online and in person, on the campus of Howard University in Washington, D.C., an estimated 1,500 mostly Black people have gathered to talk about crypto—decentralized digital money backed not by governments but by blockchain technology, a secure means of recording transactions—as a way to make money while disrupting centuries-long patterns of oppression.


“What we really need to be doing is to now utilize the technology behind blockchain to enhance the quality of life for our people,” says Christopher Mapondera, a Zimbabwean American and the first official speaker. A white-haired engineer with the air of a lecturing statesman, Mapondera speaks with a conviction that feels very on-brand at a conference themed “Reparations and Revolutions.” Along with summit organizer Sinclair Skinner, Mapondera co-founded BillMari, a service that aims to make it easier to transmit cryptocurrency to wherever the sons and daughters of Africa have been scattered.

So, not exactly your stereotypical “Bitcoin bro.” Contrary to the image associated with cryptocurrency since it entered mainstream awareness, almost no one at the summit is a fleece-vest-wearing finance guy or an Elon Musk type with a grudge against regulators. What they are is a cross section of the world of Black crypto traders, educators, marketers and market makers—a world that seemingly mushroomed during the pandemic, rallying around the idea that this is the boon that Black America needs.

In fact, surveys indicate that people of color are investing in cryptocurrency in ways that outpace or equal other groups—something that can’t be said about most financial products. About 44% of those who own crypto are people of color, according to a June survey by the University of Chicago’s National Opinion Research Center. In April, a Harris Poll reported that while just 16% of U.S. adults overall own cryptocurrency, 18% of Black Americans have gotten in on it. (For Latino Americans, the figure is 20%.) The actor Hill Harper of The Good Doctor, a Harvard Law School friend of former President Barack Obama, is a pitchman for Black Wall Street, a digital wallet and crypto trading service developed with Najah Roberts, a Black crypto expert. And this summer, when the popular money-transfer service Cash App added the option to purchase Bitcoin, it chose the MC Megan Thee Stallion to explain the move. “With my knowledge and your hustle, you’ll have your own empire in no time,” she says in an ad titled “Bitcoin for Hotties.”

Read more: Americans Have Learned to Talk About Racial Inequality. But They’ve Done Little to Solve It

But, as even Megan Thee Stallion acknowledges in that ad, pinning one’s economic hopes on crypto is inherently risky. Many economic experts have described crypto as little better than a bubble, mere fool’s gold. The rapid pace of innovation—it’s been little more than a decade since Bitcoin was created by the enigmatic, pseudonymous Satoshi Nakamoto—has left consumers with few protections. Whether the potential is worth those risks is the stuff of constant, and some would say, infernal debate.

Cleve Mesidor, who founded the National Policy Network of Women of Color in Blockchain (Jared Soares for TIME)

What looms in the backdrop is clear. In the U.S., the median white family’s wealth—reflecting not just assets minus debt, but also the ability to weather a financial setback—sat around $188,200, per the Federal Reserve’s most recent measure in 2019. That’s about eight times the median wealth of Black families. (For Latino families, it’s five times greater; the wealth of Asian, Pacific Island and other families sits between that of white and Latino families, according to the report.) Other estimates paint an even grimmer picture. If trends continue, the median Black household will have zero wealth by 2053. The summit attendees seem certain that crypto represents keys to a car bound for somewhere better.

“Our digital selves are more important in some ways than our real-world selves,” Tony Perkins, a Black MIT-trained computer scientist, says during a summit session on “Enabling Black Land and Asset Ownership Using Blockchain.” The possibilities he rattles off—including fractional ownership of space stations—will, to many, sound fantastical. To others, they sound like hope. “We can operate on an even playing field in the digital world,” he says.

The next night, when in-person attendees gather at Barcode, a Black-owned downtown D.C. establishment, for drinks and conversation, there’s a small rush on black T-shirts with white lettering: SATOSHI, they proclaim, IS BLACK.


That’s an intriguing idea when your ancestors’ bodies form much of the foundation of U.S. prosperity. At the nation’s beginnings, land theft from Native Americans seeded the agricultural operations where enslaved Africans would labor and die, making others rich. By 1860, the cotton-friendly ground of Mississippi was so productive that it was home to more millionaires than anywhere else in the country. Government-supported pathways to wealth, from homesteading to homeownership, have been reliably accessible to white Americans only. So Black Bitcoiners’ embrace of decentralized currencies—and a degree of doubt about government regulators, as well as those who have done well in the traditional system—makes sense.

Skinner, the conference organizer, believes there’s racial subtext in the caution from the financial mainstream regarding Bitcoin—a pervasive idea that Black people just don’t understand finance. “I’m skeptical of all of those [warnings], based on the history,” Skinner, who is Black American, says. Even a drop in the value of Bitcoin this year, which later went back up, has not given him pause. “They have petrol shortages in England right now. They’ll blame the weather or Brexit, but they’ll never have to say they’re dumb. Something don’t work in Detroit or some city with a Black mayor, we get a collective shame on us.”

Read more: America’s Interstate Slave Trade Once Trafficked Nearly 30,000 People a Year—And Reshaped the Country’s Economy

The first time I speak to Skinner, the summit is still two weeks away. I’d asked him to talk through some of the logistics, but our conversation ranges from what gives money value to the impact of ride-share services on cabbies refusing Black passengers. Tech often promises to solve social problems, he says. The Internet was supposed to democratize all sorts of things. In many cases, it defaulted to old patterns. (As Black crypto policy expert Cleve Mesidor put it to me, “The Internet was supposed to be decentralized, and today it’s owned by four white men.”) But with the right people involved from the start of the next wave of change—crypto—the possibilities are endless, Skinner says.

Skinner, a Howard grad and engineer by training, first turned to crypto when he and Mapondera were trying to find ways to do ethanol business in Zimbabwe. Traditional international transactions were slow or came with exorbitant fees. In Africa, consumers pay some of the world’s highest remittance, cell phone and Internet data fees, a damaging continuation of centuries-long wealth transfers off the continent to others, Skinner says. Hearing about cryptocurrency, he was intrigued—particularly having seen, during the recession, the same banking industry that had profited from slavery getting bailed out as hundreds of thousands of people of color lost their homes.

So in 2013, he invested “probably less than $3,000,” mostly in Bitcoin. Encouraged by his friend Brian Armstrong, CEO of Coinbase, one of the largest platforms for trading crypto, he grew his stake. In 2014, when Skinner went to a crypto conference in Amsterdam, only about eight Black people were there, five of them caterers, but he felt he had come home ideologically. He saw he didn’t need a Rockefeller inheritance to change the world. “I don’t have to build a bank where they literally used my ancestors to build the capital,” says Skinner, who today runs a site called I Love Black People, which operates like a global anti-racist Yelp. “I can unseat that thing by not trying to be like them.”

Eventually, he and Mapondera founded BillMari and became the first crypto company to partner with the Reserve Bank of Zimbabwe to lower fees on remittances, the flow of money from immigrants overseas back home to less-developed nations—an economy valued by the World Bank and its offshoot KNOMAD at $702 billion in 2020. (Some of the duo’s business plans later evaporated, after Zimbabwe’s central bank revoked approval for some cryptocurrency activities.)

Skinner’s feelings about the economic overlords make it a bit surprising that he can attract people like Charlene Fadirepo, a banker by trade and former government regulator, to speak at the summit. On the first day, she offers attendees a report on why 2021 was a “breakout year for Bitcoin,” pointing out that major banks have begun helping high-net-worth clients invest in it, and that some corporations have bought crypto with their cash on hand, holding it as an asset.

Fadirepo, who worked in the Fed’s inspector general’s office monitoring Federal Reserve banks and the Consumer Financial Protection Bureau, is not a person who hates central banks or regulation. A Black American, she believes strongly in both, and in their importance for protecting investors and improving the economic position of Black people. Today she operates Guidefi, a financial education and advising company geared toward helping Black women connect with traditional financial advisers. It just launched, for a fee, direct education in cryptocurrency.

Crypto is a relatively new part of Fadirepo’s life. She and her Nigerian-American doctor husband earn good salaries and follow all the responsible middle-class financial advice. But the pandemic showed her they still didn’t have what some of his white colleagues did: the freedom to walk away from high-risk work. As the stock market shuddered and storefronts shuttered, she decided a sea change was coming. A family member had mentioned Bitcoin at a funeral in 2017, but it sounded risky. Now, her research kept bringing her back to it. Last year, she and her husband bought $6,000 worth. No investment has ever generated the kinds of returns for them that Bitcoin has.

“It has transformed people’s relationship with money,” she says. “Folks are just more intentional … and honestly feeling like they had access to a world that was previously walled off.”

Read more: El Salvador Is Betting on Bitcoin to Rebrand the Country — and Strengthen the President’s Grip

She knows fraud exists. In May, a federal watchdog revealed that since October 2020, nearly 7,000 people have reported losses of more than $80 million on crypto scams—12 times more scam reports than in the same period the previous year. The median individual loss: $1,900. For Fadirepo, it’s worrying. That’s part of why she helps moderate recurring free learning and discussion options like the Black Bitcoin Billionaires chat room on Clubhouse, which has grown from about 2,000 to 130,000 club members this year.

Charlene Fadirepo, a banker and former government regulator, near the National Museum of African American History and Culture (Jared Soares for TIME)

There’s a reason Black investors might prefer their own spaces for that kind of education. Fadirepo says it’s not unheard-of in general crypto spaces—theoretically open to all, but not so much in practice—to hear that relying on the U.S. dollar is slavery. “To me, a descendant of enslaved people in America, that was painful,” she says. “There’s a lot of talk about sovereignty, freedom from the U.S. dollar, freedom from inflation, inflation is slavery, blah blah blah. The historical context has been sucked out of these conversations about traditional financial systems. I don’t know how I can talk about banking without also talking about history.”


Back in January, I found myself in a convenience store in a low-income and predominantly Black neighborhood in Dallas, an area still living the impact of segregation decades after its official end. I was there to report on efforts to register Black residents for COVID-19 shots after an Internet-only sign-up system—and wealthier people gaming the system—created an early racial disparity in vaccinations. I stepped away to buy a bottle of water. Inside the store, a Black man wondered aloud where the lottery machine had gone. He’d come to spend his usual $2 on tickets and had found a Bitcoin machine sitting in its place. A second Black man standing nearby, surveying chip options, explained that Bitcoin was a form of money, an investment right there for the same $2. After just a few questions, the first man put his money in the machine and walked away with a receipt describing the fraction of one bitcoin he now owned.

Read more: When a Texas County Tried to Ensure Racial Equity in COVID-19 Vaccinations, It Didn’t Go as Planned

I was both worried and intrigued. What kind of arrangement had prompted the store’s owner to replace the lottery machine? That month, a single bitcoin reached the $40,000 mark.

“That’s very revealing, if someone chooses to put a cryptocurrency machine in the same place where a lottery [machine] was,” says Jeffrey Frankel, a Harvard economist, when I tell him that story. Frankel has described cryptocurrencies as similar to gambling, more often than not attracting those who can least afford to lose, whether they are in El Salvador or Texas. Frankel ranks among the economists who have been critical of El Salvador’s decision to begin recognizing Bitcoin last month as an official currency, in part because of the reality that few in the country have access to the internet, as well as the cryptocurrency’s price instability and its lack of backing by hard assets, he says.

At the same time that critics have pointed to the shambolic Bitcoin rollout in El Salvador, Bitcoin has become a major economic force in Nigeria, one of the world’s larger players in cryptocurrency trading. In fact, some have argued that it has helped people in that country weather food inflation. But, to Frankel, crypto does not contain promise for lasting economic transformation. To him, disdain for experts drives interest in cryptocurrency in much the same way it can fuel vaccine hesitancy. Frankel can see the potential to reduce remittance costs, and he does not doubt that some people have made money. Still, he’s concerned that the low cost and click-here ease of buying crypto may draw people to far riskier crypto assets. Then he tells me he’d put the word assets here in a hard set of air quotes.

And Frankel, who is white, is not alone. Darrick Hamilton, an economist at the New School who is Black, says Bitcoin should be seen in the same framework as other low-cost, high-risk, big-payoff options. “In the end, it’s a casino,” he says. To people with less wealth, it can feel like one of the few moneymaking methods open to them, but it’s not a source of group uplift. “Like any speculation, those that can arbitrage the market will be fine,” he says. “There’s a whole lot of people that benefited right before the Great Recession, but if they didn’t get out soon enough, they lost their shirts too.”

To buyers like Jiri Sampson, a Black cryptocurrency investor who works in real estate and lives outside Washington, D.C., that perspective doesn’t register as quite right.

The U.S.-born son of Guyanese immigrants wasn’t thinking about exploitation when he invested his first $20 in cryptocurrency in 2017. But the groundwork was there. Sampson homeschools his kids, due in part to his lack of faith that public schools equip Black children with the skills to determine their own fates. He is drawn to the capacity of this technology to create greater agency for Black people worldwide. The blockchain, for example, could be a way to establish ownership for people who don’t hold standard documents—an important issue in Guyana and many other parts of the world, where individuals who have lived on the land for generations are vulnerable to having their property co-opted if they lack formal deeds. Sampson even pitched a project using the blockchain and GPS technology to establish digital ownership records to the Guyanese government, which did not bite.

“I don’t want to downplay the volatility of Bitcoin,” Sampson says. But that’s only a significant concern, he believes, if one intends to sell quickly. To him, Bitcoin represents a “harder” asset than the dollar, which he compares to a ship with a hole in it. Bitcoin has a limited supply, while the Fed can decide to print more dollars anytime. That, to Sampson, makes some cryptocurrencies, namely Bitcoin, good to buy and hold, to pass along wealth from one generation to another.


Economists and crypto buyers aren’t the only ones paying attention. Congress, the Securities and Exchange Commission, and the Federal Reserve have indicated that they will move toward official assessments or regulation soon. At least 10 federal agencies are interested in or already regulating crypto in some way, and there’s now a Congressional Blockchain Caucus. Representatives from the Federal Reserve and the SEC declined to comment, but SEC Chairman Gary Gensler assured a Senate subcommittee in September that his agency is working to develop regulation that will apply to cryptocurrency markets and trading activity.

Enter Cleve Mesidor, of the quip about the Internet being owned by four white men. When we meet during the summit, she introduces herself: “Cleve Mesidor, I’m in crypto.”

She’s the first person I’ve ever heard describe herself that way, but not that long ago, “influencer” wasn’t a career either. A former Obama appointee who worked inside the Commerce Department on issues related to entrepreneurship and economic development, Mesidor learned about cryptocurrency during that time. But she didn’t get involved in it personally until 2013, when she purchased $200 in Bitcoin. After leaving government, she founded the National Policy Network of Women of Color in Blockchain, and is now the public policy adviser for the industry group the Blockchain Association. There are more men than women in Black crypto spaces, she tells me, but the gender imbalance tends to be less pronounced than in white-dominated crypto communities.

Mesidor, who immigrated to the U.S. from Haiti and uses her crypto investments to fund her professional “wanderlust,” has also lived crypto’s downsides. She’s been hacked and the victim of an attempted ransomware attack. But she still believes cryptocurrency and related technology can solve real-world problems, and she’s trying, she says, to make sure that necessary consumer protections are not structured in a way that chokes the life out of small businesses or investors.

“D.C. is like Vegas; the house always wins,” says Mesidor, whose independently published book is called The Clevolution: My Quest for Justice in Politics & Crypto. “The crypto community doesn’t get that.” Passion, she says, is not enough. The community needs to be involved in the regulatory discussions that first intensified after the price of a bitcoin went to $20,000 in 2017. A few days after the summit, when Mesidor and I spoke by phone, Bitcoin had climbed to nearly $60,000.


At Barcode, the Washington lounge, Isaiah Jackson is holding court. A man with a toothpaste-commercial smile, he’s the author of the independently published Bitcoin & Black America, has appeared on CNBC and is half of the streaming show The Gentleman of Crypto, which bills itself as one of the longest-running cryptocurrency shows on the Internet. When he was building websites as a sideline, he convinced a large Black church in Charlotte, N.C., to, for a time, accept Bitcoin donations. He helped establish Black Bitcoin Billionaires on Clubhouse and, like Fadirepo, helps moderate some of its rooms and events. He’s also a former teacher, descended from a line of teachers, and is using those skills to develop (for a fee) online education for those who want to become crypto investors. Now, there’s a small group standing near him, talking, but mostly listening.

Jackson was living in North Carolina when one of his roommates, a white man who worked for a money-management firm, told him he had just heard a presentation about crypto and thought he might want to suggest it to his wealthy parents. The concept blew Jackson’s mind. He soon started his own research.

“Being in the Black community and seeing the actions of banks, with redlining and other things, it just appealed to me,” Jackson tells me. “You free the money, you free everything else.”

Read more: Beyond Tulsa: The Historic Legacies and Overlooked Stories of America’s ‘Black Wall Streets’

He took his $400 savings and bought two bitcoins in October 2013. That December, the price of a single bitcoin topped $1,100. He started thinking about what kind of new car he’d buy. And he stuck with it, even as prices fluctuated and scams proliferated. When the Gentlemen of Bitcoin started putting together seminars, one of the early venues was a college fair connected to an annual HBCU basketball tournament attended by thousands of mostly Black people. Bitcoin eventually became more than an investment. He believed there was great value in spreading the word. But that was then.

“I’m done convincing people. There’s no point battling going back and forth,” he says. “Even if they don’t realize it, what [investors] are doing if they are keeping their bitcoin long term, they are moving money out of the current system into another one. And that is basically the best form of peaceful protest.”

 

With reporting by Leslie Dickstein and Simmone Shah

Source: Tech – TIME | 16 Oct 2021 | 1:00 am

Microsoft Shuts Down LinkedIn in China, Citing ‘Challenging Operating’ Climate

Microsoft Corp.’s LinkedIn is shuttering a localized version of its professional networking platform in China, becoming the last major U.S. social media provider to pull out of the country.

LinkedIn said it made the decision in light of “a significantly more challenging operating environment and greater compliance requirements in China.” The company will close the current version later this year, LinkedIn said in a blog post Thursday.

After entering China in 2014, LinkedIn seemed to offer a model for American Internet companies to break into the country. In exchange for that privilege, the company agreed to restrict some content to adhere to state censorship rules. The service had about 52 million users in mainland China. Other social media platforms like Twitter Inc. and Facebook Inc. have long been banned.

Read More: Here’s What to Know About China’s Sweeping Tech Crackdown

Signs of turbulence for Microsoft emerged in March. LinkedIn said then that it had paused new member sign-ups for its China service while it worked to ensure it was in compliance with local law.

LinkedIn said its new strategy for China is to focus on helping local professionals find jobs in the nation and to help Chinese companies find quality candidates. Later this year, it will introduce InJobs, a new standalone jobs application for the country. The site won’t include a social feed or the ability to share posts or articles.

Source: Tech – TIME | 15 Oct 2021 | 7:33 am

Microsoft Agrees to Human Rights Review of Deals With Law Enforcement and Government

Microsoft Corp., which has faced pressure from employees and shareholders over contracts with governments and law enforcement agencies, agreed to commission an independent human rights review of some of those deals.

The move came in response to a June filing of a shareholder proposal asking the company to evaluate how well it sticks to its human rights statement and related policies. Microsoft committed to a review of the human rights impacts its products may have on groups including Black, Indigenous and People of Color communities under contracts with police, immigration enforcement and other, unspecified government agencies, according to correspondence from the company viewed by Bloomberg.

Microsoft pledged to publish the report next year, and the shareholders, who include faith-based investors like Religious of the Sacred Heart of Mary, have withdrawn their proposal ahead of Microsoft’s annual shareholder meeting next month.

Microsoft spokesman Frank Shaw confirmed the company will undertake the review.

“In response to shareholder requests, Microsoft Corp. will commission an independent, third-party assessment to identify, understand, assess, and address actual or potential adverse human rights impacts of the company’s products and services and business relationships with regard to law enforcement, immigration enforcement, and other government contracts. The assessment will include consultation with BIPOC communities, including immigrants, and other groups representing communities most impacted by Microsoft’s surveillance products, law enforcement and government contracts,” the company said in a statement.

As government, military and police contracts have become targets of scrutiny and activism, Microsoft employees have circulated letters demanding the company abandon a deal to build versions of its HoloLens augmented reality headsets for the U.S. Army and have raised concerns about business with U.S. Immigration and Customs Enforcement. Chief Executive Officer Satya Nadella has stood behind software sales to the U.S. military, but the company paused selling facial recognition technology to police departments, although it sells other programs to law enforcement. The California-based religious order agreed to lead the shareholder proposal because it wanted to make sure the company’s products don’t “cause human rights harms, including perpetuating systemic racial inequities,” Sister Joanne Safian said in a statement.

Microsoft told the investors the review will be conducted by the law firm Foley Hoag LLP. The proposal was filed by Investor Advocates for Social Justice, a nonprofit representing faith-based institutional investors. Microsoft didn’t specify which contracts will be examined, but shareholders “expect” it will include what the group said are about 16 active contracts with ICE and U.S. Customs and Border Protection.

“This will be an ambitious and complicated process and we’re certainly putting our faith in Microsoft and Foley Hoag to be conscientious,” said Michael Connor, executive director of Open MIC, a nonprofit shareholder advocacy organization that worked with IASJ on the proposal. “They’re asking for input from affected rights holders, which was a very big request on our part and they agreed to that.”

Shareholders have raised human rights concerns before, notably around labor and manufacturing conditions in the apparel industry, but such proposals are newer to technology companies, he said. Open MIC has also made similar requests of Amazon.com Inc., related to its facial recognition technology, as well as Apple Inc., Facebook Inc. and Alphabet Inc., without a positive response from the companies or a win at shareholder meetings, Connor said.

Open MIC is also working on two other shareholder resolutions related to Microsoft, including one that asks the company to stop selling facial recognition software to all government agencies.

“Tech companies take the position that all tech is good, and while we as shareholders recognize that tech can be helpful, there are also many downsides,” Connor said.

Microsoft earlier this month agreed to let more repair shops fix its devices in response to a push from As You Sow, a nonprofit shareholder activism group, and consumer advocates.

Source: Tech – TIME | 14 Oct 2021 | 9:54 am

Tesla Must Answer For Failure to Recall Autopilot Software After Crashes

(DETROIT) — U.S. safety investigators want to know why Tesla didn’t file recall documents when it updated Autopilot software to better identify parked emergency vehicles, escalating a simmering clash between the automaker and regulators.

In a letter to Tesla, the National Highway Traffic Safety Administration told the electric car maker Tuesday that it must recall vehicles if an over-the-internet update deals with a safety defect.

“Any manufacturer issuing an over-the-air update that mitigates a defect that poses an unreasonable risk to motor vehicle safety is required to timely file an accompanying recall notice to NHTSA,” the agency said in a letter to Eddie Gates, Tesla’s director of field quality.

The agency also ordered Tesla to provide information about its “Full Self-Driving” software that’s being tested on public roads with some owners.

The latest clash is another sign of escalating tensions between Tesla and the agency that regulates vehicle safety and partially automated driving systems.

In August the agency opened an investigation into Tesla’s Autopilot after getting multiple reports of vehicles crashing into emergency vehicles that were stopped on highways with warning lights flashing. The software can keep cars in their lane and a safe distance from vehicles in front of them.

Messages were left early Wednesday seeking comment from Tesla.

NHTSA opened a formal investigation of Autopilot after a series of collisions with parked emergency vehicles. The investigation covers 765,000 vehicles, almost everything that Tesla has sold in the U.S. since the start of the 2014 model year. In the dozen crashes that are part of the probe, 17 people were injured and one was killed.

According to the agency, Tesla did an over-the-internet software update in late September that was intended to improve detection of emergency vehicle lights in low-light conditions. The agency says Tesla is aware that federal law requires automakers to do a recall if they find out that vehicles have safety defects.

The agency asked for information about Tesla’s “Emergency Light Detection Update” that was sent to certain vehicles “with the stated purpose of detecting flashing emergency vehicle lights in low light conditions and then responding to said detection with driver alerts and changes to the vehicle speed while Autopilot is engaged.”

The letter asks for a list of events that motivated the software update, as well as what vehicles it was sent to and whether the measures extend to Tesla’s entire fleet.

It also asks the Palo Alto, California, company whether it intends to file recall documents. “If not, please furnish Tesla’s technical and/or legal basis for declining to do so,” the agency asks.

Philip Koopman, a professor of electrical and computer engineering at Carnegie Mellon University, said NHTSA clearly wants Tesla to issue a recall. “They’re giving Tesla a chance to have their say before they bring the hammer down,” said Koopman, who studies automated vehicle safety.

When automakers find a safety defect, they must tell NHTSA within five working days, and they’re required to do recalls. NHTSA monitors the recalls to make sure they cover all affected vehicles. Automakers are required to notify all owners with letters explaining the repairs, which must be done at company expense.

A public recall lets owners make sure the repairs are done and ensures that people buying the cars are aware of potential safety problems.

NHTSA’s actions put all automakers on notice that when they do software updates via the internet, the updates have to be reported to the agency if they fix a safety problem. It’s another new technology that the agency has to deal with as numerous automakers follow Tesla in adding over-the-air software capability.

“Now every company has exposure every time they do an over-the-air update because NHTSA may come back weeks later and say ‘wait a minute, that was a stealth recall,’” Koopman said.

Tesla has to comply with the request by Nov. 1 or face court action and civil fines of more than $114 million, the agency wrote.

In a separate order to Tesla, NHTSA says that the company may be taking steps to hinder the agency’s access to safety information by requiring drivers who are testing “Full Self-Driving” software to sign non-disclosure agreements.

The order demands that Tesla describe the non-disclosure agreements and say whether the company requires owners of vehicles with Autopilot to agree “to any terms that would prevent or discourage vehicle owners from sharing information about or discussing any aspect of Autopilot with any person other than Tesla.”

Responses must be made by a Tesla officer under oath. If Tesla fails to fully comply, the order says the matter could be referred to the Justice Department. It also threatens more fines of over $114 million.

Tesla has said that neither vehicles equipped with “Full Self-Driving” nor Autopilot can drive themselves. It warns drivers that they must be ready to intervene at all times.

Shares of Tesla rose slightly in Wednesday morning trading.

It was unclear how Tesla and CEO Elon Musk will respond to NHTSA’s demands. The company and Musk have a long history of sparring with federal regulators.

In January, Tesla refused a request from NHTSA to recall about 135,000 vehicles because their touch screens could go dark. The agency said the screens were a safety defect because backup cameras and windshield defroster controls could be disabled.

A month later, after NHTSA started the process of holding a public hearing and taking Tesla to court, the company agreed to the recall. Tesla said it would replace computer processors for the screens, even though it maintained there was no safety threat.

Musk fought with the Securities and Exchange Commission over a 2018 tweet claiming that he had financing to take Tesla private, when that funding was not secured. He and the company agreed to pay $20 million each to settle allegations that he misled investors. Musk branded the SEC the “shortseller enrichment commission,” distorting the meaning of its acronym. Short sellers bet that a stock price will fall.

The new demands from NHTSA signal a tougher regulatory stance under President Joe Biden on automated vehicle safety compared with the previous administrations. The agency had appeared reluctant to regulate the new technology for fear of hampering adoption of the potentially life-saving systems.

Source: Tech – TIME | 14 Oct 2021 | 8:16 am

Washington Wants to Regulate Facebook’s Algorithm. That Might Be Unconstitutional

Facebook whistleblower Frances Haugen urged Congress to “change the rules that Facebook plays by” and regulate the platform’s algorithm when she testified before the Senate on Oct. 5. By the end of that week, even Facebook itself had thrown support behind its algorithm being “held to account.”

Legislators have shown increasing interest in passing a bill that would hold Facebook more accountable for the content it amplifies using its algorithm. But legal experts are divided over whether such a change could survive a legal challenge on First Amendment grounds because of how it would alter the way speech is promoted on the platform.

“Any algorithmic regulation is probably going to have some First Amendment questions, if not problems,” says Gautam Hans, an expert on First Amendment law and technology policy at Vanderbilt Law School. “I think tech companies would feel fairly confident they have a good hand to play if they end up in court.”

Washington has long debated whether major social media companies like Facebook need to be broken up. But after Haugen’s testimony, which alleged that Facebook’s algorithm exacerbates body image issues for teens on Instagram and sent users to radicalized content ahead of the Jan. 6 attack on the Capitol, the conversation has shifted towards whether the government should regulate the engagement-driven algorithm at the core of the products. Taking aim at the algorithm instead of breaking up the company through antitrust law would be a novel approach that would almost certainly invite legal challenges.

Read More: How Facebook Forced a Reckoning by Shutting Down the Team That Put People Ahead of Profits

“The courts have repeatedly said that restrictions on the distribution of speech are restrictions on speech themselves,” says Jeff Kosseff, associate professor of cybersecurity law at the United States Naval Academy. “Exactly how the courts would interpret that based on algorithms is hard to know.”

The main strategy for regulating the algorithm centers on Congress reforming Section 230, a part of the United States Communications Decency Act of 1996 that has been interpreted to mean social media networks and Internet services are not legally responsible for the content posted on their platforms—including illegal content and libel.

Momentum is growing behind reforming Section 230. Tim Wu, special assistant to the president for technology and competition policy, told TIME in a statement on Oct. 7 that President Joe Biden supports changes: “The President has been a strong supporter of fundamental reforms to achieve greater accountability for tech platforms,” Wu said. “That includes not only antitrust reforms, but also Section 230 reforms and privacy reforms, as well as more transparency.”

And even leadership within Facebook is open to the idea. “We need greater transparency,” Nick Clegg, vice president for global affairs and communications of Facebook, said on CNN’s “State of the Union” on Oct. 10. “[The algorithms] should be held to account, if necessary by regulation, so that people can match what our systems say they’re supposed to do and what actually happens.”

Read More: This Is the White House’s Plan to Take on Facebook

Roddy Lindsay, a former data scientist at Facebook who worked on the News Feed algorithm, says the key to successfully reforming Section 230 lies in making sure the change does not include any constraint to speech. Instead, it must remove the current protections social media companies enjoy around serving illegal content to their users. Removing that protection would mean that Facebook could be sued if its algorithm promotes any libelous speech or illegal content on a user’s News Feed. Then, Lindsay says, companies would likely drop their personalized, algorithmic amplification of certain content rather than risk promoting illegal posts. Instead, platforms like Facebook would have incentives to implement a non-algorithmic feed, such as something in chronological order or a system where users have more control over the content they see.

There’s some evidence that courts are becoming more sympathetic to regulating algorithms. Kai Falkenberg, First Deputy Commissioner of the New York City Mayor’s Office of Media and Entertainment and a lecturer at Columbia Law School, points to cases against tech companies that the 9th Circuit recently dismissed on Section 230 grounds, in which the court wrote that protections for tech companies have been extended too far. “You’re starting to see judges saying, ‘If Congress is not willing to restrict the scope of Section 230, then we’re going to do it through the courts,’” Falkenberg says. As a result, Falkenberg says Congress may be able to pass a limited reform to Section 230 that withstands scrutiny “in the near future.”

But opponents to Section 230 reform can argue that these private companies and their users have rights, and the government should have no involvement in regulating speech nor decisions about distribution of speech. They base their case in the text of the First Amendment: “Congress shall make no law…abridging the freedom of speech.”

“I think there’s been a lot of paralysis with government regulating the algorithm because of the First Amendment,” says Joan Donovan, research director of the Shorenstein Center at Harvard Kennedy School. “We can expect what we’ve always gotten from [the tech] side, which is that they will argue that you cannot regulate the Internet.”

Hans, the First Amendment law expert, predicts any algorithmic regulation would face an uphill battle in the judiciary, given how courts have ruled overwhelmingly in favor of First Amendment arguments in recent years. He pointed to Supreme Court cases like 2010’s Citizens United v. FEC, which held that free speech prevented the government from restricting corporations from making political donations, and 2018’s Janus v. AFSCME, which held that public sector unions collecting fees from non-members violates the First Amendment. “The increasing skepticism of government regulation of commercial speech means it would be harder for some of the proposals to survive Constitutional scrutiny because of the trajectory of the decisions we’ve seen,” says Hans.

That’s why Hans says he would “not be surprised” if Congress passes a bill that is promptly shot down by a challenge in federal court. “There’s no ban on Congress passing unconstitutional legislation,” he adds. “Just because it’s unconstitutional doesn’t mean they’re not going to try.”

-With reporting by Brian Bennett/Washington

Source: Tech – TIME | 14 Oct 2021 | 6:44 am

Why Big Businesses in Texas Are Ignoring Gov. Abbott’s Vaccine Mandate Ban

Mandates have proven to be an effective but controversial method for compelling vaccine-shy Americans to receive their shots. But as the Biden Administration has doubled down on requiring COVID-19 vaccination—including proposing a rule that businesses with more than 100 employees mandate vaccination—for some Republicans, opposition to mandates is proving to be an essential credential for showcasing their conservative bona fides.

On Oct. 11, Texas Gov. Greg Abbott—who has opposed masking but came under fire from a Republican political rival recently for allegedly failing to push back hard enough on federal vaccine mandates—took a strong stand against vaccine mandates, issuing an executive order banning any “entity” in Texas from mandating vaccination for people who object to the vaccine for any reason, including “personal conscience.”

Stuck between following federal guidance and the state executive order, representatives for high-profile businesses based in Texas told TIME that they feel federal law, as well as employees’ and customers’ safety, supersedes Abbott’s rule. And those that already required employees to be vaccinated have no intention of changing course.

Dell, which is based in Round Rock, Tex., and boasted revenue of $92.2 billion last year, is requiring employees to be vaccinated or submit to weekly testing to work in the office. “Any employee or contractor who experiences challenges with the policy will have the option, by role, to work remotely,” the company told TIME in a statement on Oct. 12. “We believe this policy provides multiple options for anyone who works for or with Dell, and allows us to maintain safe working environments around the world.”

IBM, which has large offices in Austin, Houston, Dallas, and San Antonio and reported revenue of $73.6 billion last year, said all direct employees of federal contractors must be vaccinated by Dec. 8, or get a medical or religious exemption. “We will continue to protect the health and safety of IBM employees and clients, and we will continue to follow federal requirements,” the company told TIME in a statement.

The air travel industry, which has come out strongly in favor of vaccine mandates, has also declined to change course. American Airlines, the largest airline in the U.S., which has its headquarters in Fort Worth, told Bloomberg that it feels the pending federal rule “supersedes any conflicting state laws.” The company is requiring all employees be fully vaccinated by Nov. 24. A spokesperson for Southwest Airlines, which is headquartered in Dallas, echoed American Airlines in a statement to TIME, writing, “federal action supersedes any state mandate or law, and we would be expected to comply with the President’s Order to remain compliant as a federal contractor.” Southwest employees must be vaccinated by Dec. 8.

Other organizations responded to Abbott’s rule with greater caution. Chevron, which has facilities in Texas and is one of the largest oil companies in the world, told TIME that its employees who travel internationally, work offshore in the Gulf of Mexico, or work aboard tankers must be vaccinated. However, a statement from the company noted that the federal rule has not been formally issued yet, “so it is premature to say what its impact will have on our operations.”

“To the extent federal, state and local laws are not in conflict, we endeavor to remain in compliance with all of them,” the statement read. “When a new law is put into effect, we review our practices and adjust them as may be necessary.”

Texas Children’s Hospital in Houston, which currently requires its employees to be vaccinated, told TIME that it is reviewing Abbott’s order, but reaffirmed its commitment to vaccination, noting that many of the women and children it serves are immunocompromised. “We support the ability of private employers to determine the best vaccine policy for their operations and employee safety,” a representative told TIME in a statement.

Houston Methodist announced an employee vaccine mandate in March, and later fended off a lawsuit from employees who opposed the mandate; more than 153 employees in a workforce of 26,000 ultimately resigned or were fired in June after they failed to be vaccinated by a final deadline. Dr. Marc Boom, president and CEO of Houston Methodist, told TIME that because the hospital implemented the mandate early, it won’t be immediately affected by the order, as most employees are already vaccinated. But he added that the hospital system is taking a closer look at the executive order to determine its implications.

“We are concerned for other Texas hospitals that may not be able to continue their mandates now with this executive order,” Boom said. “Health care workers all have an obligation to safely care for their patients and this order makes that promise harder.”

 

Source: Tech – TIME | 13 Oct 2021 | 12:15 pm

Uber Drivers Say a ‘Racist’ Algorithm Is Putting Them Out of Work

Abiodun Ogunyemi has been an Uber Eats delivery driver since February 2020. But since March he has been unable to work due to what a union supporting drivers claims is a racially biased algorithm. Ogunyemi, who is Black, had submitted a photograph of himself to confirm his identity on the app, but when the software failed to recognize him, he was blocked from accessing his account for “improper use of the Uber application.”

Ogunyemi is one of dozens of Uber drivers who have been prevented from working due to what they say is “racist” facial verification technology. Uber uses Microsoft Face API software on its app to verify drivers’ identification, asking drivers to submit new photos on a regular basis. According to trade union the Independent Workers’ Union of Great Britain (IWGB) and Uber drivers, the software has difficulty accurately recognizing people with darker skin tones.

In 2018, a similar version of the Microsoft software was found to fail one in five darker-skinned female faces and one in 17 darker-skinned male faces. In London nine out of 10 private hire drivers identify as Black or Black British, Asian or Asian British, or mixed race, according to Transport for London data. This poses a potential issue for those who work for Uber.

In an email to TIME, an Uber spokesperson said that its facial verification software is “designed to protect the safety and security of everyone who uses the Uber app by helping ensure the correct driver is behind the wheel.” A Microsoft spokesperson said in an emailed statement: “Microsoft is committed to testing and improving Face API, paying special attention to fairness and its accuracy across demographic groups. We also provide our customers with detailed guidance for getting the best results and tools that help them to assess fairness in their system.”

‘Racist’ algorithm

Last week around 80 Uber drivers and protestors gathered outside the ride-hailing app’s London headquarters in Aldgate, waving placards reading “Scrap the racist algorithm” and “Stop unfair terminations,” to protest the software’s role in disproportionately leading to terminations of drivers of color, among other concerns.

Ogunyemi—who was unable to attend the protest because he is based in Manchester—has three children, and since March he says his wife has taken on full-time work to support the family. Even so, he has fallen into arrears on loan and mortgage payments, he says.

Uber Eats delivery driver Abiodun Ogunyemi says his account was suspended after Uber’s facial recognition software failed to verify his photo. (Courtesy)

The delivery driver, who until recently had a 96% customer rating, had run into difficulties with the automatic facial identification software before. Drivers are given the option of submitting their pictures to a computer or an Uber employee for review and Ogunyemi often had to wait for additional human verification after submitting his photos. When Uber rejected his picture in March, he says, the situation turned into “a nightmare.”

After his appeal of Uber’s decision was rejected, Ogunyemi asked to speak to someone more senior, but his request was denied, he says. IWGB has since stepped in for Ogunyemi, sending evidence to Uber on his behalf. Last month, he received a message from Uber saying his account had been reactivated and that his photo had initially been rejected by a member of staff due to “human error.” Yet, when Ogunyemi tried to access his account, he was asked to upload another picture for verification. He immediately submitted a new photo, which was denied. His account remains blocked.

“Every single day that I cannot work has a negative impact on my family,” he told TIME in a phone call. “My kids need to go to school, I need to give them pocket money. I need to pay for their bus pass.”

Uber’s spokesperson said that its system “includes robust human review to make sure that this algorithm is not making decisions about someone’s livelihood in a vacuum, without oversight,” but did not address Ogunyemi’s case.

Ogunyemi says he knows of five other drivers, all of whom are Black, who have had their accounts terminated because of issues with facial identification. IWGB says that 35 drivers have reported similar incidents to the union.

Driver identity concerns

Uber began using the problematic software after it was stripped of its license to operate in London in November 2019 amid safety concerns. Authorities found that more than 14,000 trips had been taken with 43 drivers who had used false identities. There were 45,000 Uber drivers licensed in London at the time. A year later, Uber won an appeal to have its license reinstated, but promised to root out unverified drivers by using regular facial identification procedures.

Last week it was reported that an unnamed Black British Uber driver is taking the company to court alleging indirect race discrimination because the facial recognition software was preventing him from working. According to the driver’s claim, he submitted two photos of himself, which were rejected by the platform. The IWGB, which is supporting his claim alongside Black Lives Matter U.K., said his account was later deactivated and he received a message saying: “Our team conducted a thorough investigation and the decision to end the partnership has been made on a permanent basis.” The message also said that the matter was “not subject to further review.” The App Drivers & Couriers Union (ADCU) is also taking legal action against Uber over the dismissal of a driver and a courier after the software failed to recognize them.

In the U.S., a similar case was taken to a Missouri court in 2019, filed under civil rights law. The plaintiff, William Fambrough, claimed he was forced to lighten the photos he submitted for immediate verification, since he worked “late nights” for Uber and the software could not identify his face in “pitch darkness.” The company said the photos were fraudulent and his account was suspended. Fambrough’s claim was ultimately unsuccessful.

According to Professor Toby Breckon, an engineer and computer scientist at Durham University, England, facial recognition software is designed for well-lit photos. He says that people with lighter skin tones tend to be more easily recognized by the software, even in badly lit environments. The data on racial bias in Uber’s software is “particularly bad,” Breckon says, although no facial recognition software is currently free of racial bias. His team of researchers, who are working to reduce racial bias in facial recognition algorithms, has found that skin tone is not the only factor: the technology also struggles with a variety of facial features and hair types.

Read more: Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It

At the London protest, drivers expressed anger about the dismissal of their colleagues, which some believed was a symptom of systemic racism within the company. George Ibekwe, an Uber driver whose account was suspended after a customer complained that he had argued with another driver during a trip, told TIME that he believed racism was at play in the decision to suspend him without further investigation. Uber’s spokesperson did not comment on Ibekwe’s case.

“I haven’t had any criminal record in my life,” he said. “It is totally devastating. It affects me personally, financially, and mentally.” Without an income, he says he has been forced to claim unemployment benefits.

Another driver at the protest, who asked not to be named, claimed he was terminated after a customer complained he was “staring” at them. He said there was “no evidence, no investigation, and no interview” before his account was suspended.

Uber’s spokesperson did not comment about these allegations when asked by TIME.

Uber drivers’ rights

Uber drivers have long fought against unsafe working conditions and against worsening pay, which they attribute to higher service fees despite rising fares. In February, the U.K. Supreme Court ruled that Uber drivers must be treated as workers, rather than self-employed, entitling them to earn a minimum wage and take paid vacation leave. The ruling was the culmination of a long-running legal battle over the company’s responsibility to its drivers. Similar efforts are underway in other countries, including Spain, the Netherlands, and South Africa, while in California, legal wrangling over ride-sharing drivers’ rights is ongoing.

According to Alex Marshall, president of the IWGB, the U.K. Supreme Court ruling has opened the door to drivers suing Uber on the grounds that the company has failed to protect them from discrimination. He says that since the tribunal claim alleging indirect race discrimination against a driver was filed, “Uber seem to be slightly on the backfoot.”

“We’re sending off emails [about facial identification errors], and we’re hearing decisions getting overturned a lot quicker than in the past,” he says.

The outcome of the upcoming court case may have major implications for Uber’s facial identification processes, and could set a precedent for use of the technology. “We’re seeing this movement growing,” Marshall says. “We’re seeing the power switch back to the drivers and we’re going to keep fighting.”

Ogunyemi will be watching the other drivers’ tribunals closely and says he is considering whether to approach a lawyer himself. “It’s been six months since I’ve been out of work,” he says. “I have tried everything humanly possible to reason with Uber. I am not going to sit around any longer waiting for them.”

Source: Tech – TIME | 13 Oct 2021 | 11:24 am

Inside the Battle for the Hearts and Minds of Tomorrow’s Business Leaders

Tima Bansal begins every new course with a cautionary statistic for her business school students. A 2008 study found that MBA candidates enter business school with more community-oriented values, but graduate with more selfish ones. “They come in caring about the world, and they leave caring more about themselves. Why?” she says.

Bansal, a professor at Ivey Business School in London, Ontario, thinks she knows the answer. “At the heart of every single course that we teach is this orientation toward profit or leadership or themselves,” says Bansal, one of a growing number of academics who want to change that. MBA programs, they say, can no longer justify teaching future business leaders to maximize profits at the expense of the planet. The way Bansal and others see it, the world would be a better place if more businesses played an active role in tackling social and environmental challenges, from climate change to global poverty. And if the leadership ranks of major companies don’t adjust the way they do business, they warn, their fixation on making money and rewarding shareholders will exacerbate inequality and climate disasters.

“We have a crisis on our hands, and business schools need to act,” Bansal says.

The world’s major corporations stand at a crossroads. Many Boomer and Generation X executives have grudgingly come to the realization in recent years that they can no longer straddle the fence or remain silent on thorny social and political issues. To attract and retain the best and brightest Millennial and Gen Z employees, companies are facing pressure to express their opinions and take action on critical matters, including racial injustice, climate change and income inequality. A majority of college students (68%) say companies should take public stances on social issues, and another 16% say they wouldn’t work for a company that did not, according to a recent Axios/Generation Lab poll. Another survey, by Washington State University’s Carson College of Business, found that 70% of Gen Z employees want to work for a company whose values align with their own, and 83% want to work for a company that has a positive impact on the world.

Tima Bansal, a professor at the Ivey Business School in London, Ontario. (Photo: Courtesy of Tima Bansal)

That demand is reflected in the young people seeking business graduate degrees. At Boston University’s Questrom School of Business, the number of students in the Social Impact MBA program has nearly doubled in the last decade, growing from 79 in 2011 to 155 in 2021. Since leaders at the University of Vermont ripped up the 1970s-era MBA format and redesigned it around sustainability—the term used to describe businesses that are environmentally and socially conscious—the program has grown from 20 students in 2014 to 47 in 2021. They’re considering expanding the program to 70 or 80 students after receiving a record number of applications last year.

Companies are also facing pressure from consumers who increasingly want to buy eco-friendly, ethical products from businesses that share their values. Nearly 80% of consumers say it’s important that brands are sustainable and environmentally responsible, according to a 2020 study by IBM and the National Retail Federation, which polled consumers in 28 countries. A majority (57%) of those consumers say they’re willing to change their shopping habits in order to reduce the negative impact on the environment.

For the first time this semester, Presidio Graduate School in San Francisco offered MBA students an eight-week elective course on promoting anti-racism in the workforce, adding to courses on leading inclusive organizations, prioritizing social justice in supply chains and exploring renewable energy systems. Liz Leiba—an adjunct professor teaching the course, which covers the advantages and challenges of building a diverse workplace and how to identify discrimination and bias—thinks it should be required for all students. “Diversity sometimes has been an afterthought,” she says. “Marketing is not an afterthought. Sales is not an afterthought.”

Read more: What Happened When Facebook Shut Down the Team That Put People Ahead of Profits

The movement amounts to a fight for the hearts and minds of tomorrow’s business leaders by changing how MBA students are educated. “Can we possibly justify teaching students to go out and profit their investors by depleting society and the rest of the planet? It’s just not a viable ethical position,” says Tom Lyon, faculty director of the Erb Institute for Global Sustainable Enterprise at the University of Michigan’s Ross School of Business.

But change has been slow. Universities are large, traditional institutions that tend to stick with what they know, and many major business schools have not overhauled tried-and-true programs, instead offering one-off courses on sustainability or ethics.

Maggie Winslow, the academic dean at Presidio, which aims to include sustainability and social justice in every business course, says when she offered to help the dean of another business school start a sustainability curriculum, she was told “that’s just a fad.”

Lyon has struggled to get concepts such as sustainability and corporate political responsibility fully integrated into the core curriculum and notes that there hasn’t been a critical mass of students or donors demanding that change. “It’s like the MBA core is the inner sanctum of the religion of business schools. And every area feels like, ‘I have my sacred concepts I must teach, and I cannot make room for these sort of nice, but superfluous ideas,’” Lyon says.

And many of the business schools that have been leading the charge on this front are not among the country’s top-ranked MBA programs, suggesting that the most competitive business schools are hesitant to disrupt a time-tested curriculum.

But as the world experiences the devastating effects of climate change and the country confronts centuries of racial injustice, many professors argue that change has never been more urgent. “It’s an all-hands-on-deck kind of moment,” Lyon says. “We’re close to a point of turning the planet into a place that is much less inhabitable than it’s been for the last millennium.”

‘A 50-year-old paradigm’

Dartmouth founded the first graduate school of management in 1900 with the Tuck School of Business, and Harvard launched the world’s first MBA program in 1908. The MBA has since grown to be the most popular postgraduate degree in the country, making up 24% of all master’s degrees earned in the 2018-19 school year, according to the National Center for Education Statistics.

But the world has changed dramatically since the MBA first became a rite of passage for business leaders, raising questions about whether courses in marketing, microeconomics and finance are a sufficient foundation for business leadership. MBA applications surged as the pandemic caused economic challenges and mass unemployment, but business schools had been contending with several years of declining applications before that.

“Business schools are still operating out of a 50-year-old paradigm. And I think that’s the fundamental problem,” Lyon says. “All the businessman had to do was just maximize profits, play within the rules, and everything was fine. And the problem is, our rules need to be changed. The system isn’t really working anymore.”

Climate change is expected to cost the global economy as much as $23 trillion by 2050, according to a 2021 report by the insurance company Swiss Re. And a reckoning over racial injustice has intensified calls for corporations to do more to promote equity and diversity.

Alyssa Gutner-Davis, a student at Boston University’s Questrom School of Business. (Photo: Courtesy of Alyssa Gutner-Davis)

Alyssa Gutner-Davis, a second-year student at Questrom, says if you had told her in college that she’d one day go to business school, she would have laughed. “My impression was if you go to business school, you’re only focused on the economics and do the economics pencil out, and there’s no room for thinking through any other considerations,” says Gutner-Davis, 31. But she enrolled in Questrom’s Social Impact MBA program—which includes courses on impact investing, discrimination in the workplace and environmentally sustainable supply chains—because she wanted to understand business basics in order to pursue her interests in environmental justice and clean energy.

The demand is coming from corporations too. In June, the accounting firm PricewaterhouseCoopers announced it would invest $12 billion over five years to create 100,000 new jobs, many with an environmental, social, and governance (ESG) focus.

Read more: Wildfires Are Getting Worse, So Why Is the U.S. Still Using Wood to Build Homes?

“Student demand is increasing. You can see employers are seeking graduates with these skills and knowledge. There’s demand from society for business schools to positively contribute to tackling some of these grand societal challenges,” says Caroline Flammer, co-faculty director of Questrom’s social impact program. She teaches a course called Social Impact: Business, Society, and the Natural Environment, which she thinks should be required for any MBA student. “In my view, the Social Impact MBA program should be the MBA program.”

At Michigan, Lyon teaches a course on the economics of sustainability to undergraduates and a course on energy markets and energy politics to graduate students. He’s working on building a task force on corporate political responsibility, aiming to confront the way companies too often “focus on their own short term profits at the expense of the larger society.” He would like to see the concepts of ethics, sustainability and political responsibility fully integrated throughout all MBA courses, not just tacked on as a single course.

Caroline Flammer of Boston University’s Questrom School of Business. (Photo: Dan Watkins)

That’s how Sanjay Sharma, dean of the University of Vermont’s Grossman School of Business, redesigned the school’s MBA program, with sustainability embedded in every subject. Students learn about impact investing, carbon pricing and analyzing social and environmental risks. They explore all case studies through a lens of environmental and social justice impacts.

Sustainability has been part of the core curriculum at Ivey, in Ontario, since 2003. When Bansal first began integrating business sustainability into strategy, finance and marketing courses—teaching students to take a long-term view and to consider social and environmental impacts in business decisions—she confronted the perception that environmental issues didn’t belong in business education. Now, she hears from students who say they “desperately need more” of these classes.

She thinks MBA programs need to do a better job of preparing students to solve today’s global challenges. “You have to have corporations that build products that solve not just their own profits, but products that actually make the world better,” she says. “That requires a different type of thinking.”

‘Business moves faster than academia’

Sharma knows that MBA programs like his, built entirely around sustainability, are still niche. Top-ranked business schools with powerful brands don’t seem eager to upend successful programs that are still attracting thousands of applicants willing to spend as much as $200,000 for their degrees and producing highly employable graduates.

“Business moves faster than academia,” Sharma says, but he thinks all business schools will be forced to adapt eventually. “If organizations demand it and if society demands it, then it’ll start happening faster.”

He and his peers are confronting a lingering stigma that courses on sustainability or social justice are nice, but not essential. “I wish I could say the vast majority of Ross students were beating down the doors for ethics and sustainability. And, you know, they’re not,” Lyon says. “The real drivers are student demand and donors. So if students start saying, ‘We have to have this material,’ schools will change.”

For university leaders, there’s nothing simple about revamping decades-old curricula or persuading tenured faculty to change their courses. But the global realities of climate instability and resource shortages could force their hand.

“If you only plan to be in business for five years, maybe you don’t want to think about it. But if you want to be in business for 50 years, then we all have to think about this,” says Winslow, the Presidio dean. “We can’t do business as usual. We have to do new business.”

Source: Tech – TIME | 9 Oct 2021 | 12:00 am

How Facebook Forced a Reckoning by Shutting Down the Team That Put People Ahead of Profits

Facebook’s civic-integrity team was always different from all the other teams that the social media company employed to combat misinformation and hate speech. For starters, every team member subscribed to an informal oath, vowing to “serve the people’s interest first, not Facebook’s.”

The “civic oath,” according to five former employees, charged team members to understand Facebook’s impact on the world, keep people safe and defuse angry polarization. Samidh Chakrabarti, the team’s leader, regularly referred to this oath—which has not been previously reported—as a set of guiding principles behind the team’s work, according to the sources.

Chakrabarti’s team was effective in fixing some of the problems endemic to the platform, former employees and Facebook itself have said.

But, just a month after the 2020 U.S. election, Facebook dissolved the civic-integrity team, and Chakrabarti took a leave of absence. Facebook said employees were assigned to other teams to help share the group’s experience across the company. But for many of the Facebook employees who had worked on the team, including a veteran product manager from Iowa named Frances Haugen, the message was clear: Facebook no longer wanted to concentrate power in a team whose priority was to put people ahead of profits.

Facebook CEO Mark Zuckerberg on the cover of TIME. (Illustration by TIME; source photo: Getty Images)

Five weeks later, supporters of Donald Trump stormed the U.S. Capitol—after some of them organized on Facebook and used the platform to spread the lie that the election had been stolen. The civic-integrity team’s dissolution made it harder for the platform to respond effectively to Jan. 6, one former team member, who left Facebook this year, told TIME. “A lot of people left the company. The teams that did remain had significantly less power to implement change, and that loss of focus was a pretty big deal,” said the person. “Facebook did take its eye off the ball in dissolving the team, in terms of being able to actually respond to what happened on Jan. 6.” The former employee, along with several others TIME interviewed, spoke on the condition of anonymity, for fear that being named would ruin their career.

Samidh Chakrabarti, head of Facebook’s civic-integrity team, stands beside Katie Harbath, a Facebook director of public policy, in Facebook’s headquarters in Menlo Park, California, on Oct. 17, 2018. (Photo: Paul Morris—Bloomberg/Getty Images)

 

Enter Frances Haugen

Haugen revealed her identity on Oct. 3 as the whistle-blower behind the most significant leak of internal research in the company’s 17-year history. In a bombshell testimony to the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security two days later, Haugen said the civic-integrity team’s dissolution was the final event in a long series that convinced her of the need to blow the whistle. “I think the moment which I realized we needed to get help from the outside—that the only way these problems would be solved is by solving them together, not solving them alone—was when civic-integrity was dissolved following the 2020 election,” she said. “It really felt like a betrayal of the promises Facebook had made to people who had sacrificed a great deal to keep the election safe, by basically dissolving our community.”

Read more: Facebook Will Not Fix Itself

In a statement provided to TIME, Facebook’s vice president for integrity Guy Rosen denied the civic-integrity team had been disbanded. “We did not disband Civic Integrity,” Rosen said. “We integrated it into a larger Central Integrity team so that the incredible work pioneered for elections could be applied even further, for example, across health-related issues. Their work continues to this day.” (Facebook did not make Rosen available for an interview for this story.)

The defining values of the civic-integrity team, as described in a 2016 presentation given by Samidh Chakrabarti and Winter Mason. Civic-integrity team members were expected to adhere to this list of values, which was referred to internally as the “civic oath.” (Image: Impacts of Civic Technology Conference 2016)

Haugen left the company in May. Before she departed, she trawled Facebook’s internal employee forum for documents posted by integrity researchers about their work. Much of the research was not related to her job, but was accessible to all Facebook employees. What she found surprised her.

Some of the documents detailed an internal study that found that Instagram, Facebook’s photo-sharing app, made 32% of teen girls feel worse about their bodies. Others showed how a change to Facebook’s algorithm in 2018, touted as a way to increase “meaningful social interactions” on the platform, actually incentivized divisive posts and misinformation. They also revealed that Facebook spends almost all of its platform-safety budget on English-language content. In September, the Wall Street Journal published a damning series of articles based on some of the documents that Haugen had leaked to the paper. Haugen also gave copies of the documents to Congress and the Securities and Exchange Commission (SEC).

Read more: The Facebook Whistleblower Revealed Herself on 60 Minutes. Here’s What You Need to Know

The documents, Haugen testified Oct. 5, “prove that Facebook has repeatedly misled the public about what its own research reveals about the safety of children, the efficacy of its artificial intelligence systems, and its role in spreading divisive and extreme messages.” She told Senators that the failings revealed by the documents were all linked by one deep, underlying truth about how the company operates. “This is not simply a matter of certain social media users being angry or unstable, or about one side being radicalized against the other; it is about Facebook choosing to grow at all costs, becoming an almost trillion-dollar company by buying its profits with our safety,” she said.

Facebook’s focus on increasing user engagement, which ultimately drives ad revenue and staves off competition, she argued, may keep users coming back to the site day after day—but also systematically boosts content that is polarizing, misinformative and angry, and which can send users down dark rabbit holes of political extremism or, in the case of teen girls, body dysmorphia and eating disorders. “The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes because they have put their astronomical profits before people,” Haugen said. (In 2020, the company reported $29 billion in net income—up 58% from a year earlier. This year, it briefly surpassed $1 trillion in total market value, though Haugen’s leaks have since knocked the company down to around $940 billion.)

Asked if executives adhered to the same set of values as the civic-integrity team, including putting the public’s interests before Facebook’s, a company spokesperson told TIME it was “safe to say everyone at Facebook is committed to understanding our impact, keeping people safe and reducing polarization.”

In the same week that an unrelated systems outage took Facebook’s services offline for hours and revealed just how much the world relies on the company’s suite of products—including WhatsApp and Instagram—the revelations sparked a new round of national soul-searching. It led some to question how one company can have such a profound impact on both democracy and the mental health of hundreds of millions of people. Haugen’s documents are the basis for at least eight new SEC investigations into the company for potentially misleading its investors. And they have prompted senior lawmakers from both parties to call for stringent new regulations.

Read more: Here’s How to Fix Facebook, According to Former Employees and Leading Critics

Haugen urged Congress to pass laws that would make Facebook and other social media platforms legally liable for decisions about how they choose to rank content in users’ feeds, and force companies to make their internal data available to independent researchers. She also urged lawmakers to find ways to loosen CEO Mark Zuckerberg’s iron grip on Facebook; he controls more than half of voting shares on its board, meaning he can veto any proposals for change from within. “I came forward at great personal risk because I believe we still have time to act,” Haugen told lawmakers. “But we must act now.”

Potentially even more worryingly for Facebook, other experts it hired to keep the platform safe, now alienated by the company’s actions, are growing increasingly critical of their former employer. They experienced first hand Facebook’s unwillingness to change, and they know where the bodies are buried. Now, on the outside, some of them are still honoring their pledge to put the public’s interests ahead of Facebook’s.

Inside Facebook’s civic-integrity team

Chakrabarti, the head of the civic-integrity team, was hired by Facebook in 2015 from Google, where he had worked on improving how the search engine communicated information about lawmakers and elections to its users. A polymath described by one person who worked under him as a “Renaissance man,” Chakrabarti holds master’s degrees from MIT, Oxford and Cambridge, in artificial intelligence engineering, modern history and public policy, respectively, according to his LinkedIn profile.

Although he was not in charge of Facebook’s company-wide “integrity” efforts (led by Rosen), Chakrabarti, who did not respond to requests to comment for this article, was widely seen by employees as the spiritual leader of the push to make sure the platform had a positive influence on democracy and user safety, according to multiple former employees. “He was a very inspirational figure to us, and he really embodied those values [enshrined in the civic oath] and took them quite seriously,” a former member of the team told TIME. “The team prioritized societal good over Facebook good. It was a team that really cared about the ways to address societal problems first and foremost. It was not a team that was dedicated to contributing to Facebook’s bottom line.”

Chakrabarti began work on the team by questioning how Facebook could encourage people to be more engaged with their elected representatives on the platform, several of his former team members said. An early move was to suggest tweaks to Facebook’s “more pages you may like” feature that the team hoped might make users feel more like they could have an impact on politics.

After the chaos of the 2016 election, which prompted Zuckerberg himself to admit that Facebook didn’t do enough to stop misinformation, the team evolved. It moved into Facebook’s wider “integrity” product group, which employs thousands of researchers and engineers to focus on fixing Facebook’s problems of misinformation, hate speech, foreign interference and harassment. It changed its name from “civic engagement” to “civic integrity,” and began tackling the platform’s most difficult problems head-on.

Shortly before the midterm elections in 2018, Chakrabarti gave a talk at a conference in which he said he had “never been told to sacrifice people’s safety in order to chase a profit.” His team was hard at work making sure the midterm elections did not suffer the same failures as in 2016, in an effort that was generally seen as a success, both inside the company and externally. “To see the way that the company has mobilized to make this happen has made me feel very good about what we’re doing here,” Chakrabarti told reporters at the time. But behind closed doors, integrity employees on Chakrabarti’s team and others were increasingly getting into disagreements with Facebook leadership, former employees said. It was the beginning of the process that would eventually motivate Haugen to blow the whistle.

Former Facebook employee Frances Haugen testifies during a Senate hearing entitled ‘Protecting Kids Online: Testimony from a Facebook Whistleblower’ in Washington, D.C., Oct. 5, 2021. (Photo: Drew Angerer—Getty Images)

In 2019, the year Haugen joined the company, researchers on the civic-integrity team proposed ending the use of an approved list of thousands of political accounts that were exempt from Facebook’s fact-checking program, according to tech news site The Information. Their research had found that the exemptions worsened the site’s misinformation problem because users were more likely to believe false information if it were shared by a politician. But Facebook executives rejected the proposal.

The pattern repeated time and time again, as proposals to tweak the platform to down-rank misinformation or abuse were rejected or watered down by executives concerned with engagement or worried that changes might disproportionately impact one political party more than another, according to multiple reports in the press and several former employees. One cynical joke among members of the civic-integrity team was that they spent 10% of their time coding and the other 90% arguing that the code they wrote should be allowed to run, one former employee told TIME. “You write code that does exactly what it’s supposed to do, and then you had to argue with execs who didn’t want to think about integrity, had no training in it and were mad that you were hurting their product, so they shut you down,” the person said.

Sometimes the civic-integrity team would also come into conflict with Facebook’s policy teams, which share the dual role of setting the rules of the platform while also lobbying politicians on Facebook’s behalf. “I found many times that there were tensions [in meetings] because the civic-integrity team was like, ‘We’re operating off this oath; this is our mission and our goal,’” says Katie Harbath, a long-serving public-policy director at the company’s Washington, D.C., office who quit in March 2021. “And then you get into decisionmaking meetings, and all of a sudden things are going another way, because the rest of the company and leadership are not basing their decisions off those principles.”

Harbath admitted not always seeing eye to eye with Chakrabarti on matters of company policy, but praised his character. “Samidh is a man of integrity, to use the word,” she told TIME. “I personally saw times when he was like, ‘How can I run an integrity team if I’m not upholding integrity as a person?’”


Years before the 2020 election, research by integrity teams had shown Facebook’s group recommendations feature was radicalizing users by driving them toward polarizing political groups, according to the Journal. The company declined integrity teams’ requests to turn off the feature, BuzzFeed News reported. Then, just weeks before the vote, Facebook executives changed their minds and agreed to freeze political group recommendations. The company also tweaked its News Feed to make it less likely that users would see content that algorithms flagged as potential misinformation, part of temporary emergency “break glass” measures designed by integrity teams in the run-up to the vote. “Facebook changed those safety defaults in the run-up to the election because they knew they were dangerous,” Haugen testified to Senators on Tuesday. But they didn’t keep those safety measures in place long, she added. “Because they wanted that growth back, they wanted the acceleration on the platform back after the election, they returned to their original defaults. And the fact that they had to break the glass on Jan. 6, and turn them back on, I think that’s deeply problematic.”

In a statement, Facebook spokesperson Tom Reynolds rejected the idea that the company’s actions contributed to the events of Jan. 6. “In phasing in and then adjusting additional measures before, during and after the election, we took into account specific on-platforms signals and information from our ongoing, regular engagement with law enforcement,” he said. “When those signals changed, so did the measures. It is wrong to claim that these steps were the reason for Jan. 6—the measures we did need remained in place through February, and some like not recommending new, civic or political groups remain in place to this day. These were all part of a much longer and larger strategy to protect the election on our platform—and we are proud of that work.”

Read more: 4 Big Takeaways From the Facebook Whistleblower Congressional Hearing

Soon after the civic-integrity team was dissolved in December 2020, Chakrabarti took a leave of absence from Facebook. In August, he announced he was leaving for good. Other employees who had spent years working on platform-safety issues had begun leaving, too. In her testimony, Haugen said that several of her colleagues from civic integrity left Facebook in the same six-week period as her, after losing faith in the company’s pledge to spread their influence around the company. “Six months after the reorganization, we had clearly lost faith that those changes were coming,” she said.

After Haugen’s Senate testimony, Facebook’s director of policy communications Lena Pietsch suggested that Haugen’s criticisms were invalid because she “worked at the company for less than two years, had no direct reports, never attended a decision-point meeting with C-level executives—and testified more than six times to not working on the subject matter in question.” On Twitter, Chakrabarti said he was not supportive of company leaks but spoke out in support of the points Haugen raised at the hearing. “I was there for over 6 years, had numerous direct reports, and led many decision meetings with C-level execs, and I find the perspectives shared on the need for algorithmic regulation, research transparency, and independent oversight to be entirely valid for debate,” he wrote. “The public deserves better.”

Can Facebook’s latest moves protect the company?

Two months after disbanding the civic-integrity team, Facebook announced a sharp directional shift: it would begin testing ways to reduce the amount of political content in users’ News Feeds altogether. In August, the company said early testing of such a change among a small percentage of U.S. users was successful, and that it would expand the tests to several other countries. Facebook declined to provide TIME with further information about how its proposed down-ranking system for political content would work.

Many former employees who worked on integrity issues at the company are skeptical of the idea. “You’re saying that you’re going to define for people what political content is, and what it isn’t,” James Barnes, a former product manager on the civic-integrity team, said in an interview. “I cannot even begin to imagine all of the downstream consequences that nobody understands from doing that.”

Another former civic-integrity team member said that the work required to design algorithms that could detect any political content in all the languages and countries in the world—and to keep those algorithms updated to accurately map the shifting tides of political debate—would be a task that even Facebook does not have the resources to achieve fairly and equitably. Attempting to do so would almost certainly result in some content deemed political being demoted while other posts thrived, the former employee cautioned. It could also incentivize certain groups to try to game those algorithms by talking about politics in nonpolitical language, creating an arms race for engagement that would privilege the actors with enough resources to work out how to win, the same person added.

Mark Zuckerberg, chief executive officer and founder of Facebook, speaks via video conference during a House Judiciary Subcommittee hearing in Washington, D.C., on July 29, 2020. (Photo: Graeme Jennings—Bloomberg/Getty Images)

When Zuckerberg was hauled to testify in front of lawmakers after the Cambridge Analytica data scandal in 2018, Senators were roundly mocked on social media for asking basic questions such as how Facebook makes money if its services are free to users. (“Senator, we run ads” was Zuckerberg’s reply.) In 2021, that dynamic has changed. “The questions asked are a lot more informed,” says Sophie Zhang, a former Facebook employee who was fired in 2020 after she criticized Facebook for turning a blind eye to platform manipulation by political actors around the world.

“The sentiment is increasingly bipartisan” in Congress, Zhang adds. In the past, Facebook hearings have been used by lawmakers to grandstand on polarizing subjects like whether social media platforms are censoring conservatives, but this week they were united in their condemnation of the company. “Facebook has to stop covering up what it knows, and must change its practices, but there has to be government accountability because Facebook can no longer be trusted,” Senator Richard Blumenthal of Connecticut, chair of the Subcommittee on Consumer Protection, told TIME ahead of the hearing. His Republican counterpart Marsha Blackburn agreed, saying during the hearing that regulation was coming “sooner rather than later” and that lawmakers were “close to bipartisan agreement.”

As Facebook reels from the revelations of the past few days, it already appears to be reassessing product decisions. It has begun conducting reputational reviews of new products to assess whether the company could be criticized or its features could negatively affect children, the Journal reported Wednesday. It last week paused its Instagram Kids product amid the furor.

Whatever the future direction of Facebook, it is clear that discontent has been brewing internally. Haugen’s document leak and testimony have already sparked calls for stricter regulation and improved the quality of public debate about social media’s influence. In a post addressing Facebook staff on Wednesday, Zuckerberg put the onus on lawmakers to update Internet regulations, particularly relating to “elections, harmful content, privacy and competition.” But the real drivers of change may be current and former employees, who have a better understanding of the inner workings of the company than anyone—and the most potential to damage the business. —With reporting by Eloise Barry/London and Chad de Guzman/Hong Kong

Source: Tech – TIME | 8 Oct 2021 | 4:35 am








