Education

Harvard Accused of Bowing to Meta By Ousted Disinformation Scholar in Whistleblower Complaint (cjr.org)

The Washington Post reports: A prominent disinformation scholar has accused Harvard University of dismissing her to curry favor with Facebook and its current and former executives in violation of her right to free speech.

Joan Donovan claimed in a filing with the Education Department and the Massachusetts attorney general that her superiors soured on her as Harvard was getting a record $500 million pledge from Meta founder Mark Zuckerberg's charitable arm. As research director of Harvard Kennedy School projects delving into mis- and disinformation on social media platforms, Donovan had raised millions in grants, testified before Congress and been a frequent commentator on television, often faulting internet companies for profiting from the spread of divisive falsehoods. Last year, the school's dean told her that he was winding down her main project and that she should stop fundraising for it. This year, the school eliminated her position.

As one of the first researchers with access to "the Facebook papers" leaked by Frances Haugen, Donovan was asked to speak at a meeting of the Dean's Council, a group of the university's high-profile donors, reports The Columbia Journalism Review: Elliot Schrage, then the vice president of communications and global policy for Meta, was also at the meeting. Donovan says that, after she brought up the Haugen leaks, Schrage became agitated and visibly angry, "rocking in his chair and waving his arms and trying to interrupt." During a Q&A session after her talk, Donovan says, Schrage reiterated a number of common Meta talking points, including that disinformation is a fluid concept with no agreed-upon definition and that the company didn't want to be an "arbiter of truth."

According to Donovan, Nancy Gibbs, Donovan's faculty advisor, was supportive after the incident. She says that they discussed how Schrage would likely try to pressure Douglas Elmendorf, the dean of the Kennedy School of Government (where the Shorenstein Center, which hosts Donovan's project, is based) about the idea of creating a public archive of the documents... After Elmendorf called her in for a status meeting, Donovan claims that he told her she was not to raise any more money for her project; that she was forbidden to spend the money that she had raised (a total of twelve million dollars, she says); and that she couldn't hire any new staff. According to Donovan, Elmendorf told her that he wasn't going to allow any expenditure that increased her public profile, and used a number of Meta talking points in his assessment of her work...

Donovan says she tried to move her work to the Berkman Klein Center at Harvard, but that the head of that center told her that they didn't have the "political capital" to bring on someone whom Elmendorf had "targeted"... Donovan told me that she believes the pressure to shut down her project is part of a broader pattern of influence in which Meta and other tech platforms have tried to make research into disinformation as difficult as possible... Donovan said she hopes that by blowing the whistle on Harvard, her case will be the "tip of the spear."

Another interesting detail from the article: [Donovan] alleges that Meta pressured Elmendorf to act, noting that he is friends with Sheryl Sandberg, the company's chief operating officer. (Elmendorf was Sandberg's advisor when she studied at Harvard in the early nineties; he attended Sandberg's wedding in 2022, four days before moving to shut down Donovan's project.)
AI

Meta Publicly Launches AI Image Generator Trained On Your Facebook, Instagram Photos (venturebeat.com)

An anonymous reader quotes a report from VentureBeat: Meta Platforms -- the parent company of Facebook, Instagram, WhatsApp and the Quest VR headsets, and creator of the leading open source large language model Llama 2 -- is getting into the text-to-image AI generator game. Actually, to clarify: Meta was already in that game via a text-to-image and text-to-sticker generator launched within Facebook and Instagram Messengers earlier this year. As of this week, however, the company has launched a standalone text-to-image AI generator service, "Imagine," outside of its messaging platforms. Imagine is now a website you can simply visit and begin generating images from: imagine.meta.com. You'll still need to log in with your Meta or Facebook/Instagram account (I tried Facebook, and it forced me to create a new "Meta account," but hey -- it still worked). [...]

Meta's Imagine service was built on its own AI model called Emu, which was trained on 1.1 billion Facebook and Instagram user photos, as noted by Ars Technica and disclosed in the Emu research paper published by Meta engineers back in September. An earlier report by Reuters noted that Meta excluded private messages and images that were not shared publicly on its services.

When developing Emu, Meta's researchers also fine-tuned it around quality metrics. As they wrote in their paper: "Our key insight is that to effectively perform quality tuning, a surprisingly small amount -- a couple of thousand -- exceptionally high-quality images and associated text is enough to make a significant impact on the aesthetics of the generated images without compromising the generality of the model in terms of visual concepts that can be generated." Interestingly, despite Meta's vocal support for open source AI, neither Emu nor the Imagine by Meta AI service appears to be open source.

Encryption

Meta Defies FBI Opposition To Encryption, Brings E2EE To Facebook, Messenger (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Meta has started enabling end-to-end encryption (E2EE) by default for chats and calls on Messenger and Facebook despite protests from the FBI and other law enforcement agencies that oppose the widespread use of encryption technology. "Today I'm delighted to announce that we are rolling out default end-to-end encryption for personal messages and calls on Messenger and Facebook," Meta VP of Messenger Loredana Crisan wrote yesterday. In April, a consortium of 15 law enforcement agencies from around the world, including the FBI and ICE Homeland Security Investigations, urged Meta to cancel its plan to expand the use of end-to-end encryption. The consortium complained that terrorists, sex traffickers, child abusers, and other criminals will use encrypted messages to evade law enforcement.

Meta held firm, telling Ars in April that "we don't think people want us reading their private messages" and that the plan to make end-to-end encryption the default in Facebook Messenger would be completed before the end of 2023. Meta also plans default end-to-end encryption for Instagram messages but has previously said that may not happen this year. Meta said it is using "the Signal Protocol, and our own novel Labyrinth Protocol," and the company published two technical papers that describe its implementation (PDF). "Since 2016, Messenger has had the option for people to turn on end-to-end encryption, but we're now changing personal chats and calls across Messenger to be end-to-end encrypted by default. This has taken years to deliver because we've taken our time to get this right," Crisan wrote yesterday. Meta said it will take months to implement across its entire user base.
A post written by two Meta software engineers said the company "designed a server-based solution where encrypted messages can be stored on Meta's servers while only being readable using encryption keys under the user's control."

"Product features in an E2EE setting typically need to be designed to function in a device-to-device manner, without ever relying on a third party having access to message content," they wrote. "This was a significant effort for Messenger, as much of its functionality has historically relied on server-side processing, with certain features difficult or impossible to exactly match with message content being limited to the devices."

The company says it had "to redesign the entire system so that it would work without Meta's servers seeing the message content."
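The division of knowledge the engineers describe, ciphertext at rest on the server while decryption keys stay on the user's device, can be illustrated with a deliberately simplified sketch. This is not Meta's Labyrinth protocol or the Signal Protocol that Messenger actually uses; it is a toy model using an XOR one-time pad, and all class and method names here are hypothetical:

```python
import secrets

class Client:
    """Holds encryption keys on-device only; the server never sees them."""
    def __init__(self):
        self._keys = {}  # message_id -> key, never leaves the device

    def encrypt(self, message_id: str, plaintext: bytes) -> bytes:
        key = secrets.token_bytes(len(plaintext))  # fresh random pad
        self._keys[message_id] = key
        return bytes(p ^ k for p, k in zip(plaintext, key))

    def decrypt(self, message_id: str, ciphertext: bytes) -> bytes:
        key = self._keys[message_id]
        return bytes(c ^ k for c, k in zip(ciphertext, key))

class Server:
    """Stores ciphertext but holds no keys, so it cannot read content."""
    def __init__(self):
        self._store = {}

    def put(self, message_id: str, ciphertext: bytes):
        self._store[message_id] = ciphertext

    def get(self, message_id: str) -> bytes:
        return self._store[message_id]

client, server = Client(), Server()
server.put("m1", client.encrypt("m1", b"hello"))
# The server sees only random-looking bytes; only the client can recover them.
assert client.decrypt("m1", server.get("m1")) == b"hello"
```

The point of the sketch is the architectural split: server-side features (search, previews, backlog storage) must be redesigned to operate on data the server cannot read, which is why Meta describes the migration as a multi-year effort.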
Technology

How Tech Giants Use Money, Access To Steer Academic Research (washingtonpost.com)

Tech giants including Google and Facebook parent Meta have dramatically ramped up charitable giving to university campuses over the past several years -- giving them influence over academics studying such critical topics as artificial intelligence, social media and disinformation. From a report: Meta CEO Mark Zuckerberg alone has donated money to more than 100 university campuses, either through Meta or his personal philanthropy arm, according to new research by the Tech Transparency Project, a nonprofit watchdog group studying the technology industry. Other firms are helping fund academic centers, doling out grants to professors and sitting on advisory boards reserved for donors, researchers told The Post.

Silicon Valley's influence is most apparent among computer science professors at such top-tier schools as Berkeley, University of Toronto, Stanford and MIT. According to a 2021 paper by University of Toronto and Harvard researchers, most tenure-track professors in computer science at those schools whose funding sources could be determined had taken money from the technology industry, including nearly six in 10 AI scholars. The proportion rose further in certain controversial subjects, the study found. Of the 33 professors who wrote on AI ethics for the top journals Nature and Science and whose funding could be traced, for example, all but one had taken grant money from the tech giants or had worked as their employees or contractors.

Security

Android Vulnerability Exposes Credentials From Mobile Password Managers (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: A number of popular mobile password managers are inadvertently spilling user credentials due to a vulnerability in the autofill functionality of Android apps. The vulnerability, dubbed "AutoSpill," can expose users' saved credentials from mobile password managers by circumventing Android's secure autofill mechanism, according to researchers at IIIT Hyderabad, who discovered the vulnerability and presented their research at Black Hat Europe this week. The researchers, Ankit Gangwal, Shubham Singh and Abhijeet Srivastava, found that when an Android app loads a login page in WebView, password managers can get "disoriented" about where they should target the user's login information and instead expose credentials to the underlying app's native fields. This happens because WebView, the preinstalled engine from Google, lets developers display web content in-app without launching a web browser; when such a page triggers an autofill request, the password manager cannot always tell whether the credentials belong to the web page or to the host app.

"Let's say you are trying to log into your favorite music app on your mobile device, and you use the option of 'login via Google or Facebook.' The music app will open a Google or Facebook login page inside itself via the WebView," Gangwal explained to TechCrunch prior to their Black Hat presentation on Wednesday. "When the password manager is invoked to autofill the credentials, ideally, it should autofill only into the Google or Facebook page that has been loaded. But we found that the autofill operation could accidentally expose the credentials to the base app." Gangwal notes that the ramifications of this vulnerability, particularly in a scenario where the base app is malicious, are significant. He added: "Even without phishing, any malicious app that asks you to log in via another site, like Google or Facebook, can automatically access sensitive information."
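The failure mode Gangwal describes can be modeled in a few lines. The sketch below is hypothetical and uses no real Android or password-manager APIs; it simply contrasts an autofill routine that hands credentials to whatever field requested them with one that refuses to fill anything outside the origin of the page actually loaded in the WebView:

```python
# Saved credentials, keyed by web origin (illustrative data only).
VAULT = {"accounts.google.com": ("alice", "s3cret")}

def autofill_naive(webview_origin: str, target: str):
    """Vulnerable: looks up credentials for the loaded page, then fills
    whichever field issued the request -- including the host app's
    native fields, which is the AutoSpill leak."""
    return VAULT.get(webview_origin)

def autofill_scoped(webview_origin: str, target: str):
    """Safer: only fills fields belonging to the loaded web origin."""
    if target != webview_origin:
        return None  # refuse to spill into the host app's native fields
    return VAULT.get(webview_origin)

# A malicious music app embeds a real Google login page in a WebView,
# but the autofill request targets the app's own native field:
leak = autofill_naive("accounts.google.com", target="com.example.music")
safe = autofill_scoped("accounts.google.com", target="com.example.music")
assert leak == ("alice", "s3cret")  # credentials spill to the host app
assert safe is None                 # origin-scoped fill refuses
```

The fix the researchers imply is exactly the scoped variant: an autofill response must be bound to the origin of the content that requested it, not to the app hosting the WebView.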

The researchers tested the AutoSpill vulnerability using some of the most popular password managers, including 1Password, LastPass, Keeper and Enpass, on new and up-to-date Android devices. They found that most apps were vulnerable to credential leakage, even with JavaScript injection disabled. When JavaScript injection was enabled, all of the password managers were susceptible to the AutoSpill vulnerability. Gangwal says he alerted Google and the affected password managers to the flaw. Gangwal tells TechCrunch that the researchers are now exploring the possibility of an attacker extracting credentials from the app to the WebView. The team is also investigating whether the vulnerability can be replicated on iOS.

Encryption

Facebook Kills PGP-Encrypted Emails (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: In 2015, as part of the wave of encrypting all the things on the internet, encouraged by the Edward Snowden revelations, Facebook announced that it would allow users to receive encrypted emails from the company. Even at the time, this was a feature for the paranoid. By turning on the feature, all emails sent from Facebook -- mostly notifications of "likes" and private messages -- to the users who opted in would be encrypted with the decades-old technology called Pretty Good Privacy, or PGP. Eight years later, Facebook is killing the feature due to low usage, according to the company. The feature was deprecated Tuesday. Facebook declined to specify exactly how many users were still using the encrypted email feature.
Encryption

Beeper Mini is an iMessage-for-Android App That Doesn't Require Any Apple Device at All (liliputing.com)

An anonymous reader shares a report: Beeper has been offering a unified messaging platform for a few years, allowing users to open a single app to communicate with contacts via SMS, Google Chat, Facebook Messenger, Slack, Discord, WhatsApp, and perhaps most significantly, iMessage. Until this week, though, Android users who wanted to use Beeper to send "blue bubble" messages to iMessage users had their messages routed through a Mac or iOS device. Now Beeper has launched a new app called Beeper Mini that handles everything on-device, no iPhone or Mac bridge required.

Beeper Mini is available now from the Google Play Store, and offers a 7-day free trial. After that, it costs $2 per month to keep using. [...] So how does it work, when previously the company had to rely on a Mac-in-the-cloud? The company explains the method it's using in a blog post, but in a nutshell, Beeper says a security researcher has reverse engineered "the iMessage protocol and encryption," so that "all messages are sent and received by Beeper Mini Android app directly to Apple's servers" and "the encryption keys needed to encrypt these messages never leave your phone." That security researcher, by the way, is a high school student who goes by jjtech, and who was hired by Beeper after showing the company his code. A proof-of-concept Python script is also available on GitHub if you'd like to run it to send messages to iMessage from a PC.

AI

Meta, IBM Create Industrywide AI Alliance To Share Technology (bloomberg.com)

Meta and IBM are joining more than 40 companies and organizations to create an industry group dedicated to open source artificial intelligence work, aiming to share technology and reduce risks. From a report: The coalition, called the AI Alliance, will focus on the responsible development of AI technology, including safety and security tools, according to a statement Tuesday. The group also will look to increase the number of open source AI models -- rather than the proprietary systems favored by some companies -- develop new hardware and team up with academic researchers.

Proponents of open source AI technology, which is made public by developers for others to use, see the approach as a more efficient way to cultivate the highly complex systems. Over the past few months, Meta has been releasing open source versions of its large language models, which are the foundation of AI chatbots.

AI

Meta Will Enforce Ban On AI-Powered Political Ads In Every Nation, No Exceptions (zdnet.com)

An anonymous reader quotes a report from ZDNet: Meta says its generative artificial intelligence (AI) advertising tools cannot be used to power political campaigns anywhere globally, with access blocked for ads targeting specific services and issues. The social media giant said earlier this month that advertisers will be barred from using generative AI tools in its Ads Manager tool to produce ads for politics, elections, housing, employment, credit, or social issues. Ads related to health, pharmaceuticals, and financial services also are not allowed access to the generative AI features. This policy will apply globally, as Meta continues to test its generative AI ads creation tools, confirmed Dan Neary, Meta's Asia-Pacific vice president. "This approach will allow us to better understand potential risks and build the right safeguards for the use of generative AI in ads that relate to potentially sensitive topics in regulated industries," said Neary.
Facebook

Meta Says There's Been No Downside To Sharing AI Technology (bloomberg.com)

Meta executives said there have been no major drawbacks to openly sharing the company's AI technology, even as many peers take the opposite approach. From a report: Over the past few months, Meta has been releasing open-source versions of its large language models -- the technology behind AI chatbots like ChatGPT. The idea is to keep those models free and then gain an advantage by building products and services on top of them, executives said at an event for the company's AI research lab, FAIR. "There is really no commercial downside to also making it available to other people," said Yann LeCun, Meta's chief AI scientist. Meta has joined most of the world's biggest technology companies in embracing generative AI, which can create text, images and even video based on simple prompts. But they aren't taking the same path.

Many of the top AI developers, including OpenAI and Google's DeepMind, don't currently open-source their large language models. Companies are often fearful of opening up their work because competitors could steal it, said Mike Schroepfer, Meta's senior fellow and former chief technology officer. "I feel like we're approaching this world where everyone is closing down as it becomes competitively important," he said. But staying open has its advantages. Meta can rely on thousands of developers across the world to help enhance its AI models.

Businesses

Tech's New Normal: Microcuts Over Growth at All Costs (wsj.com)

The tech industry has largely recovered from the downturn, but Silicon Valley learned a long-lasting lesson: how to do more with less. From a report: Amazon, Google, Microsoft and Meta Platforms have been cutting dozens or a few hundred employees at a time as executives keep tight controls on costs, even as their businesses and stock prices have rebounded sharply. The cuts are far smaller than the mass layoffs that reached tens of thousands in late 2022 and early this year. But they suggest a new era for an industry that in years past grew with little restraint, one in which companies are focusing on efficiency and acting more like their corporate peers that emphasize shareholder value and healthy margins.

The launch of the humanlike chatbot ChatGPT late last year served as a bright spot of growth in an industry that was otherwise scaling back. Challenges regarding the technology and calls for regulation remain, but some of the biggest tech companies are starting to make it their priority. There is a reallocation of resources from noncore areas to projects such as AI rather than hiring new people, said Ward, who was previously a director of recruiting at Facebook and the head of recruiting at Pinterest.

Amazon eliminated several hundred roles this month from its Alexa division to maximize its "resources and efforts focused on generative AI," according to an internal memo. The company has also made small cuts in recent weeks to its gaming and music divisions. Facebook's parent, Meta, recently posted its largest quarterly revenue in more than a decade. It laid off 20 people weeks later. Chief Executive Officer Mark Zuckerberg said on an earnings call that the company would continue to operate more efficiently going forward "both because it creates a more disciplined and lean culture, and also because it provides stability to see our long-term initiatives through in a very volatile world."

Facebook

Meta Designed Platforms To Get Children Addicted, Court Documents Allege (theguardian.com)

An anonymous reader quotes a report from The Guardian: Instagram and Facebook parent company Meta purposefully engineered its platforms to addict children and knowingly allowed underage users to hold accounts, according to a newly unsealed legal complaint. The complaint is a key part of a lawsuit filed against Meta by the attorneys general of 33 states in late October and was originally redacted. It alleges the social media company knew -- but never disclosed -- it had received millions of complaints about underage users on Instagram but only disabled a fraction of those accounts. The large number of underage users was an "open secret" at the company, the suit alleges, citing internal company documents.

In one example, the lawsuit cites an internal email thread in which employees discuss why a 12-year-old girl's four accounts were not deleted following complaints from the girl's mother stating her daughter was 12 years old and requesting that the accounts be taken down. The employees concluded that "the accounts were ignored" in part because representatives of Meta "couldn't tell for sure the user was underage." The complaint said that in 2021, Meta received over 402,000 reports of under-13 users on Instagram but that only 164,000 -- far fewer than half of the reported accounts -- were "disabled for potentially being under the age of 13" that year. The complaint noted that at times Meta has had a backlog of up to 2.5 million accounts of younger children awaiting action. The complaint alleges this and other incidents violate the Children's Online Privacy Protection Act, which requires that social media companies provide notice and get parental consent before collecting data from children. The lawsuit also focuses on longstanding assertions that Meta knowingly created products that were addictive and harmful to children, brought into sharp focus by whistleblower Frances Haugen, who revealed that internal studies showed platforms like Instagram led children to anorexia-related content. Haugen also stated the company intentionally targets children under the age of 18.

Company documents cited in the complaint described several Meta officials acknowledging the company designed its products to exploit shortcomings in youthful psychology, including a May 2020 internal presentation called "teen fundamentals" which highlighted certain vulnerabilities of the young brain that could be exploited by product development. The presentation discussed teen brains' relative immaturity, and teenagers' tendency to be driven by "emotion, the intrigue of novelty and reward," and asked how these characteristics could "manifest ... in product usage." [...] One Facebook safety executive alluded to the possibility that cracking down on younger users might hurt the company's business in a 2019 email. But a year later, the same executive expressed frustration that while Facebook readily studied the usage of underage users for business reasons, it didn't show the same enthusiasm for ways to identify younger kids and remove them from its platforms.

Facebook

Russia Puts Spokesman For Facebook-owner Meta on a Wanted List (yahoo.com)

Russia has added the spokesman of U.S. technology company Meta, which owns Facebook and Instagram, to a wanted list, according to an online database maintained by the country's interior ministry. From a report: Russian state agency Tass and independent news outlet Mediazona first reported that Meta communications director Andy Stone was included on the list Sunday, weeks after Russian authorities in October classified Meta as a "terrorist and extremist" organization, opening the way for possible criminal proceedings against Russian residents using its platforms.

The interior ministry's database doesn't give details of the case against Stone, stating only that he is wanted on criminal charges. According to Mediazona, an independent news website that covers Russia's opposition and prison system, Stone was put on the wanted list in February 2022, but authorities made no related statements at the time and no news media reported on the matter until this week. In March this year, Russia's federal Investigative Committee opened a criminal investigation into Meta.

Facebook

Meta Knowingly Collected Data on Pre-Teens, Unredacted Evidence From Lawsuit Shows (msn.com)

The New York Times reports: Meta has received more than 1.1 million reports of users under the age of 13 on its Instagram platform since early 2019, yet it "disabled only a fraction" of those accounts, according to a newly unsealed legal complaint against the company brought by the attorneys general of 33 states.

Instead, the social media giant "routinely continued to collect" children's personal information, like their locations and email addresses, without parental permission, in violation of a federal children's privacy law, according to the court filing. Meta could face hundreds of millions of dollars, or more, in civil penalties should the states prove the allegations. "Within the company, Meta's actual knowledge that millions of Instagram users are under the age of 13 is an open secret that is routinely documented, rigorously analyzed and confirmed," the complaint said, "and zealously protected from disclosure to the public...."

It also accused Meta executives of publicly stating in congressional testimony that the company's age-checking process was effective and that the company removed underage accounts when it learned of them — even as the executives knew there were millions of underage users on Instagram... The lawsuit argues that Meta elected not to build systems to effectively detect and exclude such underage users because it viewed children as a crucial demographic — the next generation of users — that the company needed to capture to assure continued growth.

More from the Wall Street Journal: An internal 2020 Meta presentation shows that the company sought to engineer its products to capitalize on the parts of youth psychology that render teens "predisposed to impulse, peer pressure, and potentially harmful risky behavior," the filings show... "Teens are insatiable when it comes to 'feel good' dopamine effects," the Meta presentation shows, according to the unredacted filing, describing the company's existing product as already well-suited to providing the sort of stimuli that trigger the potent neurotransmitter. "And every time one of our teen users finds something unexpected their brains deliver them a dopamine hit...."

"In December 2017, an Instagram employee indicated that Meta had a method to ascertain young users' ages but advised that 'you probably don't want to open this pandora's box' regarding age verification improvements," the states say in the suit. Some senior executives raised the possibility that cracking down on underage usage could hurt Meta's business... The states say Meta made little progress on automated detection systems or adequately staffing the team that reviewed user reports of underage activity. "Meta at times has a backlog of 2-2.5 million under-13 accounts awaiting action," according to the complaint...

The unredacted material also includes allegations that Meta Chief Executive Mark Zuckerberg instructed his subordinates to give priority to boosting its platforms' usage above the well-being of users... Zuckerberg also repeatedly dismissed warnings from senior company officials that its flagship social-media platforms were harming young users, according to unsealed allegations in a lawsuit filed by Massachusetts earlier this month...

The complaint cites numerous other executives making public claims that were allegedly contradicted by internal documents. While Meta's head of global safety, Antigone Davis, told Congress that the company didn't consider profitability when designing products for teens, a 2018 internal email stated that product teams should keep in mind that "The lifetime value of a 13 y/o teen is roughly $270" when making product decisions.

It's funny.  Laugh.

Cards Against Humanity's Black Friday Prank: Launching Its Own Social Media Site (adage.com)

Long-time Slashdot reader destinyland writes: The makers of the popular party game "Cards Against Humanity" continued their tradition of practical jokes on Black Friday. They created a new social network where users can perform only one action: posting the word "yowza."

They then announced it on their official social media accounts on Instagram, Facebook, and X...

Regardless of what words you type into the window, they're replaced with the word yowza. "For just $0.99, you'll get an exclusive black check by your name," reads an announcement on the site, "and the ability to post a new word: awooga."

It's a magical land where "yowfluencers" keep "reyowzaing" the "yowzas" of other users. And there's also a tab for trending hashtags. (Although, yes, they all seem to be "yowza".) But they've already gotten a write-up in the industry trade publication Advertising Age.

"With every bad thing happening in the world, social media is always right there, making it worse," a spokesperson said.... "[W]e asked ourselves: Is there a way we could make a social network that doesn't suck? At first, the answer was 'no.' The content moderation problem is just too hard. And then we thought, why not solve the content moderation problem by having no content? That's Yowza...."

When creating your profile on the network there's a dropdown menu for specifying your age and location — although all of the choices are yowza. More details from Advertising Age:

The company said the word "yowza" was the first that came to mind when its creative teams were brainstorming—and it just stuck. "It's dumb, it's ridiculous, it means nothing. It's perfect," the rep said.

And the service is still evolving, with fresh user upgrades. The official Yowza store will now also sell you the ability to post the word Shazam — for $29.99. (Also on sale are 100,000 followers — for 99 cents.) There's also an official FAQ which articulates the service's deep commitment to protecting its users' privacy.

Do you promise you won't share my private information with the Chinese Communist Party, like TikTok?

Yowza.

Facebook

Meta's Head of Augmented Reality Software Stepping Down (reuters.com)

According to Reuters, Meta's head of augmented reality software is stepping down from his role. From the report: VP of Engineering Don Box announced the end of his tenure at Meta internally this week, without elaborating on what he would do next, according to a source familiar with the matter. A Meta spokesperson confirmed Box would be leaving the company at the end of this week and said he was doing so for personal reasons. There would be no change in product roadmap as a result of his decision, she added.

The departure of Box, a veteran engineer with experience building major technology systems from their infancy, could be a setback to progress on the operating system, a key component of Meta's AR glasses project, the source told Reuters. Meta has been planning to deliver a first generation of its AR glasses by next year, although those are meant to be used only internally and by a select group of developers, the source said. It aims to ship its first AR glasses to consumers in 2027. The Meta spokesperson declined to address the roadmap or whether the OS that Box's team was building would be in the first generation AR glasses. [...]

Meta initially hired Box in 2021 to chart a path forward after the failure of its XROS project, which aimed to create a unified custom operating system for its virtual reality headsets, Ray-Ban Stories smart glasses and planned augmented reality glasses, the source said. Box broke up the 300-person XROS unit into dedicated teams for each device line early last year and personally took over the team focused on AR software, according to both the source and Box's LinkedIn profile. Prior to joining Meta, Box had worked at Microsoft since 2002. In his final role at Microsoft, he ran engineering for mixed reality, which involved developing software for the HoloLens 2 headset and related AR/VR services. Box is known for having led the creation of the Xbox One operating system and later heading Microsoft's core operating system group, which works across all Windows products.

Facebook

MediaTek Partners With Meta To Develop Chips For AR Smart Glasses (9to5google.com) 7

During MediaTek's 2023 summit, MediaTek executive Vince Hu announced a new partnership with Meta to co-develop smart glasses capable of augmented reality or mixed reality experiences. 9to5Google reports: As it exists today, the current generation of Ray-Ban Meta glasses features a camera and microphone for sending and receiving messages. However, the next generation of Meta smart glasses is likely to have a built-in "viewfinder" display to merge the virtual and physical worlds, allowing users to scan QR codes, read messages, and more. Beyond that, the company wants to bring AR glasses into the fold, which presents a much broader set of challenges. To accomplish this, a few things need to change. AR glasses need to be built for everyday use, with an industrial design that looks good but still packs in enough tech to ensure a good experience. As it stands, mixed-reality headsets are bulky and take on a large profile. Ideally, Meta's fully AR glasses would be thinner and sleeker.

The new partnership means that MediaTek will co-develop custom silicon with Meta, built specifically for AR use cases and the glasses. MediaTek brings expertise in developing low-power, high-performance SoCs that can fit within tight size constraints, such as the frame of a pair of AR glasses. Few details were revealed about the upcoming AR glasses beyond the direct statement that "MediaTek-powered AR glasses from Meta" would arrive sometime in the future. Previous leaks position the next generation of smart glasses with a viewfinder as a 2025 release, while a more robust set of AR glasses was referred to as a 2027 product -- if done properly, it could be an incredible product.

The Courts

Social Media Giants Must Face Child Safety Lawsuits, Judge Rules (theverge.com) 53

Emma Roth reports via The Verge: Meta, ByteDance, Alphabet, and Snap must face lawsuits alleging their social platforms have adverse mental health effects on children, a federal court ruled on Tuesday. US District Judge Yvonne Gonzalez Rogers rejected the social media giants' motion to dismiss the dozens of lawsuits accusing the companies of running platforms "addictive" to kids. School districts across the US have filed suit against Meta, ByteDance, Alphabet, and Snap, alleging the companies cause physical and emotional harm to children. Meanwhile, 42 states sued Meta last month over claims Facebook and Instagram "profoundly altered the psychological and social realities of a generation of young Americans." This order addresses the individual suits and "over 140 actions" taken against the companies.

Tuesday's ruling states that the First Amendment and Section 230, which says online platforms shouldn't be treated as the publishers of third-party content, don't shield Facebook, Instagram, YouTube, TikTok, and Snapchat from all liability in this case. Judge Gonzalez Rogers notes many of the claims laid out by the plaintiffs don't "constitute free speech or expression," as they have to do with alleged "defects" on the platforms themselves. That includes having insufficient parental controls, no "robust" age verification systems, and a difficult account deletion process.

"Addressing these defects would not require that defendants change how or what speech they disseminate," Judge Gonzalez Rogers writes. "For example, parental notifications could plausibly empower parents to limit their children's access to the platform or discuss platform use with them." However, Judge Gonzalez Rogers still threw out some of the other "defects" identified by the plaintiffs because they're protected under Section 230, such as offering a beginning and end to a feed, recommending children's accounts to adults, the use of "addictive" algorithms, and not putting limits on the amount of time spent on the platforms.

AI

Giant AI Platform Introduces 'Bounties' For Deepfakes of Real People (404media.co) 28

An anonymous reader quotes a report from 404 Media: Civitai, an online marketplace for sharing AI models that enables the creation of nonconsensual sexual images of real people, has introduced a new feature that allows users to post "bounties." These bounties allow users to ask the Civitai community to create AI models that generate images of specific styles, compositions, or specific real people, and reward the best AI model that does so with a virtual currency users can buy with real money. As is common on the site, many of the bounties posted to Civitai since the feature was launched are focused on recreating the likeness of celebrities and social media influencers, almost exclusively women. But 404 Media has seen at least one bounty for a private person who has no significant public online presence.

"I am very afraid of what this can become, for years I have been facing problems with the misuse of my image and this has certainly never crossed my mind," Michele Alves, an Instagram influencer who has a bounty on Civitai, told 404 Media. "I don't know what measures I could take, since the internet seems like a place out of control. The only thing I think about is how it could affect me mentally because this is beyond hurtful." The news shows how increasingly easy to use text-to-image AI tools, the ability to easily create AI models of specific people, and a platform that monetizes the production of nonconsensual sexual images is making it possible to generate nonconsensual images of anyone, not just celebrities.

The bounty for a real person that 404 Media saw on Civitai did not include a name, but did include a handful of images taken from her social media accounts. 404 Media was able to find the person's online accounts and confirm that she was not a celebrity or social media influencer, but a regular person whose personal social media accounts have few followers. The person who posted the bounty claimed that the woman he wanted an AI model of was his wife, though her Facebook account said she was single, and other Civitai users weren't buying that explanation either. Despite these suspicions, someone did complete the bounty and created an AI model of the woman that any Civitai user can now download. Several non-sexual AI-generated images of her have already been posted to the site.

AI

Meta's New Rule: If Your Political Ad Uses AI Trickery, You Must Confess (techxplore.com) 110

Press2ToContinue writes: Starting next year, Meta will play the role of a strict schoolteacher for political ads, making them fess up if they've used AI to tweak images or sounds. This new 'honesty policy' will kick in worldwide on Facebook and Instagram, aiming to prevent voters from being duped by digitally doctored candidates or made-up events. Meanwhile, Microsoft is jumping on the integrity bandwagon, rolling out anti-tampering tech and a support squad to shield elections from AI mischief.
