AI

World's First AI Chatbot, ELIZA, Resurrected After 60 Years (livescience.com)

"Scientists have just resurrected 'ELIZA,' the world's first chatbot, from long-lost computer code," reports LiveScience, "and it still works extremely well." (Click in the vintage black-and-green rectangle for a blinking-cursor prompt...) Using dusty printouts from MIT archives, these "software archaeologists" discovered defunct code that had been lost for 60 years and brought it back to life. ELIZA was developed in the 1960s by MIT professor Joseph Weizenbaum and named for Eliza Doolittle, the protagonist of the play "Pygmalion," who was taught how to speak like an aristocratic British woman.

As a language model that the user could interact with, ELIZA had a significant impact on today's artificial intelligence (AI), the researchers wrote in a paper posted to the preprint database arXiv Sunday (Jan. 12). The "DOCTOR" script written for ELIZA was programmed to respond to questions as a psychotherapist would. For example, ELIZA would say, "Please tell me your problem." If the user input "Men are all alike," the program would respond, "In what way."
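
For a sense of how little machinery this takes, here is a minimal Python sketch of the DOCTOR-style technique: match a keyword pattern, reflect pronouns, and fill in a canned template. It is illustrative only, not Weizenbaum's original MAD-SLIP code, and the handful of rules is a made-up subset.

```python
import re

# A tiny, illustrative ELIZA-style responder. Not Weizenbaum's MAD-SLIP code;
# just the technique: keyword patterns, pronoun reflection, canned templates.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (re.compile(r"(.*)\ball alike\b(.*)", re.I), "In what way?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first and second person ('i am sad' -> 'you are sad')."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(user_input.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me your problem."  # the script's stock opener

print(respond("Men are all alike"))  # -> In what way?
print(respond("I am unhappy"))       # -> How long have you been unhappy?
```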

Weizenbaum wrote ELIZA in a now-defunct programming language he invented, called Michigan Algorithm Decoder Symmetric List Processor (MAD-SLIP), but it was almost immediately copied into the language Lisp. With the advent of the early internet, the Lisp version of ELIZA went viral, and the original version became obsolete. Experts thought the original 420-line ELIZA code was lost until 2021, when study co-author Jeff Shrager, a cognitive scientist at Stanford University, and Myles Crowley, an MIT archivist, found it among Weizenbaum's papers. "I have a particular interest in how early AI pioneers thought," Shrager told Live Science in an email. "Having computer scientists' code is as close as we can get to having a record of their thoughts, and as ELIZA was — and remains, for better or for worse — a touchstone of early AI, I want to know what was in his mind...."

Even though it was intended to be a research platform for human-computer communication, "ELIZA was such a novelty at the time that its 'chatbotness' overwhelmed its research purposes," Shrager said.

I just remember that time 23 years ago when someone connected a Perl version of ELIZA to "an AOL Instant Messenger account that has a high rate of 'random' people trying to start conversations" to "put ELIZA in touch with the real world..."

Thanks to long-time Slashdot reader MattSparkes for sharing the news.
Biotech

OpenAI Has Created an AI Model For Longevity Science (technologyreview.com)

OpenAI has developed a language model designed for engineering proteins that convert regular cells into stem cells. It marks the company's first venture into biological data and demonstrates AI's potential for unexpected scientific discoveries. An anonymous reader quotes a report from MIT Technology Review: Last week, OpenAI CEO Sam Altman said he was "confident" his company knows how to build an AGI, adding that "superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own." The protein engineering project started a year ago when Retro Biosciences, a longevity research company based in San Francisco, approached OpenAI about working together. That link-up did not happen by chance: Altman personally funded Retro with $180 million, as MIT Technology Review first reported in 2023. Retro has the goal of extending the normal human lifespan by 10 years. For that, it studies what are called Yamanaka factors. Those are a set of proteins that, when added to a human skin cell, will cause it to morph into a young-seeming stem cell, a type that can produce any other tissue in the body. [...]

OpenAI's new model, called GPT-4b micro, was trained to suggest ways to re-engineer the protein factors to increase their function. According to OpenAI, researchers used the model's suggestions to change two of the Yamanaka factors to be more than 50 times as effective -- at least according to some preliminary measures. [...] The model does not work the same way as Google's AlphaFold, which predicts what shape proteins will take. Since the Yamanaka factors are unusually floppy and unstructured proteins, OpenAI said, they called for a different approach, which its large language models were suited to. The model was trained on examples of protein sequences from many species, as well as information on which proteins tend to interact with one another. While that's a lot of data, it's just a fraction of what OpenAI's flagship chatbots were trained on, making GPT-4b an example of a "small language model" that works with a focused data set.

Once Retro scientists were given the model, they tried to steer it to suggest possible redesigns of the Yamanaka proteins. The prompting tactic used is similar to the "few-shot" method, in which a user queries a chatbot by providing a series of examples with answers, followed by an example for the bot to respond to. Although genetic engineers have ways to direct the evolution of molecules in the lab, they can usually test only so many possibilities. And even a protein of typical length can be changed in nearly infinite ways (since proteins are built from hundreds of amino acids, and each acid comes in 20 possible varieties). OpenAI's model, however, often spits out suggestions in which a third of the amino acids in the protein are changed. "We threw this model into the lab immediately and we got real-world results," says Retro's CEO, Joe Betts-Lacroix. He says the model's ideas were unusually good, leading to improvements over the original Yamanaka factors in a substantial fraction of cases.
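
The exact prompts and the GPT-4b micro interface have not been published, but the few-shot tactic described above is easy to sketch. In this illustrative Python snippet, the example pairs and the query are made-up placeholder strings, not real protein sequences:

```python
# Illustrative only: GPT-4b micro's real interface and Retro's prompts are not
# public. This sketches the few-shot format described above: show the model a
# few (original -> improved) pairs, then a new sequence to complete.
# All sequences are made-up placeholders, not real Yamanaka factors.

FEW_SHOT_EXAMPLES = [
    ("MSKGEELFTG", "MSKGAALFTG"),  # hypothetical original -> improved variant
    ("MVSKGEEDNM", "MVSKGAADNM"),
]

def build_few_shot_prompt(query_sequence: str) -> str:
    parts = ["Suggest a redesigned protein sequence with higher activity.\n"]
    for original, improved in FEW_SHOT_EXAMPLES:
        parts.append(f"Original: {original}\nImproved: {improved}\n")
    parts.append(f"Original: {query_sequence}\nImproved:")  # model completes this
    return "\n".join(parts)

print(build_few_shot_prompt("MTEYKLVVVG"))
```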

AI

Google Reports Halving Code Migration Time With AI Help (theregister.com)

Google computer scientists have been using LLMs to streamline internal code migrations, achieving significant time savings of up to 89% in some cases. The findings appear in a pre-print paper titled "How is Google using AI for internal code migrations?" The Register reports: Their focus is on bespoke AI tools developed for specific product areas, such as Ads, Search, Workspace and YouTube, instead of generic AI tools that provide broadly applicable services like code completion, code review, and question answering. Google's code migrations involved: changing 32-bit IDs in the 500-plus-million-line codebase for Google Ads to 64-bit IDs; converting its old JUnit3 testing library to JUnit4; and replacing the Joda time library with Java's standard java.time package. The int32 to int64 migration, the Googlers explain, was not trivial as the IDs were often generically defined (int32_t in C++ or Integer in Java) and were not easily searchable. They existed in tens of thousands of code locations across thousands of files. Changes had to be tracked across multiple teams and changes to class interfaces had to be considered across multiple files. "The full effort, if done manually, was expected to require hundreds of software engineering years and complex cross-team coordination," the authors explain.

For their LLM-based workflow, Google's software engineers implemented the following process. An engineer from Ads would identify an ID in need of migration using a combination of code search, Kythe, and custom scripts. Then an LLM-based migration toolkit, triggered by someone knowledgeable in the art, was run to generate verified changes containing code that passed unit tests. Those changes would be manually checked by the same engineer and potentially corrected. Thereafter, the code changes would be sent to multiple reviewers who are responsible for the portion of the codebase affected by the changes. The result was that 80 percent of the code modifications in the change lists (CLs) were purely the product of AI; the remainder were either human-authored or human-edited AI suggestions.
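
Google's tooling (its internal code search, the Kythe index, and the LLM migration toolkit) is not public, but the first step of that workflow, flagging candidate 32-bit ID declarations for triage, is easy to approximate with a script. Here is a minimal Python sketch, with hypothetical patterns standing in for whatever the custom scripts actually match:

```python
import re
from pathlib import Path

# Hypothetical stand-ins for the "custom scripts" step: flag declarations
# that look like 32-bit IDs so a human (or an LLM toolkit) can triage them.
CANDIDATE_PATTERNS = [
    re.compile(r"\bint32_t\s+\w*_?id\b"),   # C++:  int32_t customer_id;
    re.compile(r"\bInteger\s+\w*[iI]d\b"),  # Java: Integer campaignId;
]

def find_candidates(root: str):
    """Yield (path, line_no, line) for declarations that may need 64-bit IDs."""
    for path in Path(root).rglob("*"):
        if path.suffix not in {".cc", ".h", ".java"}:
            continue
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in CANDIDATE_PATTERNS):
                yield path, line_no, line.strip()

for path, line_no, line in find_candidates("src"):
    print(f"{path}:{line_no}: {line}")
```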

"We discovered that in most cases, the human needed to revert at least some changes the model made that were either incorrect or not necessary," the authors observe. "Given the complexity and sensitive nature of the modified code, effort has to be spent in carefully rolling out each change to users." Based on this, Google undertook further work on LLM-driven verification to reduce the need for detailed review. Even with the need to double-check the LLM's work, the authors estimate that the time required to complete the migration was reduced by 50 percent. With LLM assistance, it took just three months to migrate 5,359 files and modify 149,000 lines of code to complete the JUnit3-JUnit4 transition. Approximately 87 percent of the code generated by AI ended up being committed with no changes. As for the Joda-Java time framework switch, the authors estimate a time saving of 89 percent compared to the projected manual change time, though no specifics were provided to support that assertion.
AI

Microsoft-OpenAI Partnership Raises Antitrust Concerns, FTC Says (bloomberg.com)

Microsoft's $13 billion investment in OpenAI raises concerns that the tech giant could extend its dominance in cloud computing into the nascent AI market, the Federal Trade Commission said in a report released Friday. From a report: The commission said Microsoft's deal with OpenAI, as well as Amazon and Google's partnerships with AI company Anthropic, raise the risk that AI developers could be "fully acquired" by the tech giants in the future.

"The FTC's report sheds light on how partnerships by big tech firms can create lock-in, deprive start-ups of key AI inputs, and reveal sensitive information that can undermine fair competition," FTC Chair Lina Khan said in a statement. The FTC has the power to open market studies to glean more information about industry trends. The findings can be used to inform future actions. It's unclear what the agency's new leadership under the Trump administration will do with the report.

Google

Google Begins Requiring JavaScript For Google Search (techcrunch.com)

Google says it has begun requiring users to turn on JavaScript, the widely used programming language that makes web pages interactive, in order to use Google Search. From a report: In an email to TechCrunch, a company spokesperson claimed that the change is intended to "better protect" Google Search against malicious activity, such as bots and spam, and to improve the overall Google Search experience for users. The spokesperson noted that, without JavaScript, many Google Search features won't work properly, and that the quality of search results tends to be degraded.
Google

Google Won't Add Fact Checks Despite New EU Law (axios.com)

According to Axios, Google has told the EU it will not add fact checks to search results and YouTube videos or use them in ranking or removing content, despite the requirements of a new EU law. From the report: In a letter written to Renate Nikolay, the deputy director general under the content and technology arm at the European Commission, Google's global affairs president Kent Walker said the fact-checking integration required by the Commission's new Disinformation Code of Practice "simply isn't appropriate or effective for our services" and said Google won't commit to it. The code would require Google to incorporate fact-check results alongside Google's search results and YouTube videos. It would also force Google to build fact-checking into its ranking systems and algorithms.

Walker said Google's current approach to content moderation works and pointed to successful content moderation during last year's "unprecedented cycle of global elections" as proof. He said a new feature added to YouTube last year that enables some users to add contextual notes to videos "has significant potential." (That program is similar to X's Community Notes feature, as well as a new program announced by Meta last week.)

The EU's Code of Practice on Disinformation, strengthened in 2022, includes several voluntary commitments that tech firms and private companies, including fact-checking organizations, are expected to deliver on. The Code, originally created in 2018, predates the EU's new content moderation law, the Digital Services Act (DSA), which went into effect in 2022.

The Commission has held private discussions over the past year with tech companies, urging them to convert the voluntary measures into an official code of conduct under the DSA. Walker said in his letter Thursday that Google had already told the Commission that it didn't plan to comply. Google will "pull out of all fact-checking commitments in the Code before it becomes a DSA Code of Conduct," he wrote. He said Google will continue to invest in improvements to its current content moderation practices, which focus on providing people with more information about their search results through features like SynthID watermarking and AI disclosures on YouTube.

Google

Google Strikes World's Largest Biochar Carbon Removal Deal (techcrunch.com)

Google has partnered with Indian startup Varaha to purchase 100,000 tons of carbon dioxide removal credits by 2030, marking its largest deal in India and the largest involving biochar, a carbon removal solution made from biomass. TechCrunch reports: Under the offtake agreement, the credits will be delivered to Google by 2030 from Varaha's industrial biochar project in the western Indian state of Gujarat, the two firms said on Thursday. [...] Biochar is produced in two ways: artisanal and industrial. The artisanal method is community-driven, where farmers burn crop residue in conical pits without using machines. In contrast, industrial biochar is made using large reactors that process 50-60 tons of biomass daily.

Varaha's project will generate industrial biochar from an invasive plant species, Prosopis juliflora, using its pyrolysis facility in Gujarat. The invasive species impacts plant biodiversity and has overtaken grasslands used for livestock. Varaha will harvest the plant and make efforts to restore native grasslands in the region, the company's co-founder and CEO Madhur Jain said in an interview. Once the biochar is produced, a third-party auditor will submit their report to Puro.Earth to generate credits. Although biochar is seen as a long-term carbon removal solution, its permanence can vary between 1,000 and 2,500 years depending on production and environmental factors.

Jain told TechCrunch that Varaha tried using different feedstocks and different parameters within its reactors to find the best combination to achieve permanence close to 1,600 years. The startup has also built a digital monitoring, reporting and verification system, integrating remote sensing to monitor biomass availability. It even has a mobile app that captures geo-tagged, time-stamped images to document activities, including biomass excavation and biochar's field application. With its first project, Varaha said it processed at least 40,000 tons of biomass and produced 10,000 tons of biochar last year.
Security

Dead Google Apps Domains Can Be Compromised By New Owners (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Lots of startups use Google's productivity suite, known as Workspace, to handle email, documents, and other back-office matters. Relatedly, lots of business-minded webapps use Google's OAuth, i.e. "Sign in with Google." It's a low-friction feedback loop -- up until the startup fails, the domain goes up for sale, and somebody forgot to close down all the Google stuff. Dylan Ayrey, of Truffle Security Co., suggests in a report that this problem is more serious than anyone, especially Google, is acknowledging. Many startups make the critical mistake of not properly closing their accounts -- on both Google and other web-based apps -- before letting their domains expire.

Given the number of people working for tech startups (6 million), the failure rate of said startups (90 percent), their usage of Google Workspace (50 percent, all by Ayrey's numbers), and the speed at which startups tend to fall apart, there are a lot of Google-auth-connected domains up for sale at any time. That would not be an inherent problem, except that, as Ayrey shows, buying a domain allows you to re-activate the Google accounts for former employees if the site's Google account still exists.

With admin access to those accounts, you can get into many of the services they used Google's OAuth to log into, like Slack, ChatGPT, Zoom, and HR systems. Ayrey writes that he bought a defunct startup domain and got access to each of those through Google account sign-ins. He ended up with tax documents, job interview details, and direct messages, among other sensitive materials.
A Google spokesperson said in a statement: "We appreciate Dylan Ayrey's help identifying the risks stemming from customers forgetting to delete third-party SaaS services as part of turning down their operation. As a best practice, we recommend customers properly close out domains following these instructions to make this type of issue impossible. Additionally, we encourage third-party apps to follow best-practices by using the unique account identifiers (sub) to mitigate this risk."
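
The last sentence of that statement is concrete advice: key user records on the ID token's stable "sub" claim rather than on an email address or domain, which a new owner of an expired domain can recreate. Here is a minimal sketch using the google-auth Python package (CLIENT_ID and the user store are placeholders):

```python
# Minimal sketch of the mitigation Google's statement points at. The "sub"
# claim is a stable, unique account identifier; an email address or hosted
# domain can be recreated by whoever buys the expired domain, but "sub" cannot.
# Requires the google-auth package; CLIENT_ID is a placeholder.
from google.oauth2 import id_token
from google.auth.transport import requests

CLIENT_ID = "your-app-client-id.apps.googleusercontent.com"  # placeholder

def lookup_user(token: str, users_by_sub: dict):
    info = id_token.verify_oauth2_token(token, requests.Request(), CLIENT_ID)
    return users_by_sub.get(info["sub"])  # NOT users_by_email[info["email"]]
```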
Piracy

Telegram Shuts Down Z-Library, Anna's Archive Channels Over Copyright Infringement (torrentfreak.com)

An anonymous reader quotes a report from TorrentFreak: In piracy-associated circles, Z-Library has one of the most followed Telegram channels of all. The shadow library's official channel amassed over 630,000 subscribers over the years, who were among the first to read site announcements and other key updates. Z-Library previously had some of its messages removed due to copyright infringement. While it didn't upload or directly link to infringing material on Telegram, rightsholders allegedly complained about the links that were posted to the Z-Library website. In response, Z-Library chose to no longer include links to its own homepage on Telegram. Instead, it referred users to Wikipedia and Reddit, where the links were still available. The same copyright awareness was visible at Anna's Archive, a popular shadow library search engine. This channel was also careful not to post direct links to infringing material. After all, sharing or uploading copyrighted books would undoubtedly lead to trouble.

Despite the reported caution, the channels of both Z-Library and Anna's Archive are no longer accessible today. Messages posted by these accounts were purged "due to copyright infringement." Telegram didn't limit its action to removing posts; the channels are now entirely inaccessible. Those trying to access the channels in the Telegram app receive a pop-up message stating they are "unavailable due to copyright infringement." The simultaneous removal of both channels suggests they are linked to the same complaint or decision. The specific complaint and alleged copyright infringements remain unclear.

Businesses

Even Harvard MBAs Are Struggling To Land Jobs (msn.com)

Nearly a quarter of Harvard Business School's 2024 M.B.A. graduates remained jobless three months after graduation, highlighting deepening employment challenges at elite U.S. business schools. The unemployment rate for Harvard M.B.A.s rose to 23% from 20% a year earlier, more than double the 10% rate in 2022.

Major employers including McKinsey, Amazon, Google, and Microsoft have scaled back M.B.A. recruitment, with McKinsey cutting its hires at University of Chicago's Booth School to 33 from 71. "We're not immune to the difficulties of the job market," said Kristen Fitzpatrick, who oversees career development at Harvard Business School. "Going to Harvard is not going to be a differentiator. You have to have the skills." Columbia Business School was the only top program to improve its placement rate in 2024. Median starting salaries for employed M.B.A.s remain around $175,000.
Google

Google is Making AI in Gmail and Docs Free - But Raising the Price of Workspace (theverge.com)

Google is bundling its AI features into Workspace at no extra charge while raising the base subscription price by $2 to $14 per user monthly, the company said Wednesday. The move eliminates the previous $20 monthly fee for the Gemini Business plan that was required to access AI tools in Gmail, Docs and other Workspace apps.
Social Networks

Pixelfed, Instagram's Decentralized Competitor, Is Now On iOS and Android (engadget.com)

Pixelfed has launched its mobile app for iOS and Android, solidifying its position as a viable alternative to Instagram. The move also comes at a pivotal moment, as a potential Supreme Court ban on TikTok could drive users to explore other social media platforms. Pixelfed is ad-free, open source, decentralized, defaults to chronological feeds and doesn't share user data with third parties. Engadget reports: The platform launched in 2018, but was only available on the web or through third-party app clients. The Android app debuted on January 9 and the iOS app was released today. Creator Daniel Supernault posted on Mastodon Monday evening that the platform had 11,000 users join over the preceding 24 hours and that more than 78,000 posts have been shared to Pixelfed to date. The platform runs on ActivityPub, the same protocol that powers several other decentralized social networks in the fediverse, such as Mastodon and Flipboard.

Further reading: Meta Is Blocking Links to Decentralized Instagram Competitor Pixelfed
AI

OpenAI's AI Reasoning Model 'Thinks' In Chinese Sometimes, No One Really Knows Why (techcrunch.com)

OpenAI's "reasoning" AI model, o1, has exhibited a puzzling behavior of "thinking" in Chinese, Persian, or some other language -- "even when asked a question in English," reports TechCrunch. While the exact cause remains unclear, as OpenAI has yet to provide an explanation, AI experts have proposed a few theories. From the report: Several on X, including Hugging Face CEO Clement Delangue, alluded to the fact that reasoning models like o1 are trained on datasets containing a lot of Chinese characters. Ted Xiao, a researcher at Google DeepMind, claimed that companies including OpenAI use third-party Chinese data labeling services, and that o1 switching to Chinese is an example of "Chinese linguistic influence on reasoning."

"[Labs like] OpenAI and Anthropic utilize [third-party] data labeling services for PhD-level reasoning data for science, math, and coding," Xiao wrote in a post on X. "[F]or expert labor availability and cost reasons, many of these data providers are based in China." [...] Other experts don't buy the o1 Chinese data labeling hypothesis, however. They point out that o1 is just as likely to switch to Hindi, Thai, or a language other than Chinese while teasing out a solution.

Other experts don't buy the o1 Chinese data labeling hypothesis, however. They point out that o1 is just as likely to switch to Hindi, Thai, or a language other than Chinese while teasing out a solution. Rather, these experts say, o1 and other reasoning models might simply be using languages they find most efficient to achieve an objective (or hallucinating). "The model doesn't know what language is, or that languages are different," Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, told TechCrunch. "It's all just text to it."

Tiezhen Wang, a software engineer at AI startup Hugging Face, agrees with Guzdial that reasoning models' language inconsistencies may be explained by associations the models made during training. "By embracing every linguistic nuance, we expand the model's worldview and allow it to learn from the full spectrum of human knowledge," Wang wrote in a post on X. "For example, I prefer doing math in Chinese because each digit is just one syllable, which makes calculations crisp and efficient. But when it comes to topics like unconscious bias, I automatically switch to English, mainly because that's where I first learned and absorbed those ideas."

[...] Luca Soldaini, a research scientist at the nonprofit Allen Institute for AI, cautioned that we can't know for certain. "This type of observation on a deployed AI system is impossible to back up due to how opaque these models are," they told TechCrunch. "It's one of the many cases for why transparency in how AI systems are built is fundamental."
The Internet

Double-keyed Browser Caching Is Hitting Web Performance

A Google engineer has warned that a major shift in web browser caching is upending long-standing performance optimization practices. Browsers have overhauled their caching systems in a way that forces websites to maintain separate copies of shared resources instead of reusing them across domains.

The new "double-keyed caching" system, implemented to enhance privacy, is ending the era of shared public content delivery networks, writes Google engineer Addy Osmani. According to Chrome's data, the change has led to a 3.6% increase in cache misses and 4% rise in network bandwidth usage.
Businesses

The New $30,000 Side Hustle: Making Job Referrals for Strangers (bnnbloomberg.ca)

Tech workers at major U.S. companies are earning thousands of dollars by referring job candidates they've never met, creating an underground marketplace for employment referrals at firms like Microsoft and Nvidia, according to Bloomberg.

One tech worker cited in the report earned $30,000 in referral bonuses after recommending over 1,000 strangers to his employer over 18 months, resulting in more than six successful hires. While platforms like ReferralHub charge up to $50 per referral, Goldman Sachs and Google said such practices violate their policies. Google requires referrals to be based on personal knowledge of candidates.
Google

Google Wants to Track Your Digital Fingerprints Again (mashable.com)

Google is reintroducing "digital fingerprinting" in five weeks, reports Mashable, describing it as "a data collection process that ingests all of your online signals (from IP address to complex browser information) and pinpoints unique users or devices." Or, to put it another way, Google "is tracking your online behavior in the name of advertising."
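
To make the mechanism concrete: fingerprinting needs no cookie, because signals that every request already leaks can be hashed into a repeatable identifier. The Python sketch below is a deliberately simplified illustration; real systems fold in many more signals (canvas rendering, installed fonts, hardware details):

```python
import hashlib

# Simplified illustration of fingerprinting: hash stable request signals into
# a repeatable ID. No cookie is stored, yet the same device keeps producing
# the same identifier. Real systems use far more signals than these three.
def fingerprint(ip: str, user_agent: str, accept_language: str) -> str:
    signals = "|".join([ip, user_agent, accept_language])
    return hashlib.sha256(signals.encode()).hexdigest()[:16]

fp1 = fingerprint("203.0.113.7", "Mozilla/5.0 (X11; Linux x86_64)", "en-US")
fp2 = fingerprint("203.0.113.7", "Mozilla/5.0 (X11; Linux x86_64)", "en-US")
print(fp1 == fp2)  # True: same device re-identified without any stored state
```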

The UK's Information Commissioner's Office called Google's decision "irresponsible": it is likely to reduce people's choice and control over how their information is collected. The change to Google's policy means that fingerprinting could now replace the functions of third-party cookies... Google itself has previously said that fingerprinting does not meet users' expectations for privacy, as users cannot easily consent to it as they would cookies. This in turn means they cannot control how their information is collected. To quote Google's own position on fingerprinting from 2019: "We think this subverts user choice and is wrong...." When the new policy comes into force on 16 February 2025, organisations using Google's advertising technology will be able to deploy fingerprinting without being in breach of Google's own policies. Given Google's position and scale in the online advertising ecosystem, this is significant.
Their post ends with a warning that those hoping to use fingerprinting for advertising "will need to demonstrate how they are complying with the requirements of data protection law. These include providing users with transparency, securing freely-given consent, ensuring fair processing and upholding information rights such as the right to erasure."

But security and privacy researcher Lukasz Olejnik asks if Google's move is the biggest privacy erosion in 10 years.... Could this mark the end of nearly a decade of progress in internet and web privacy? It would be unfortunate if the newly developing AI economy started from a decrease of privacy and data protection standards. Some analysts or observers might then be inclined to wonder whether this approach to privacy online might signal similar attitudes in other future Google products, like AI... The shift is rather drastic. Where clear restrictions once existed, the new policy removes the prohibition (so allows such uses) and now only requires disclosure... [I]f the ICO's claims about Google sharing IP addresses within the adtech ecosystem are accurate, this represents a significant policy shift with critical implications for privacy, trust, and the integrity of previously proposed Privacy Sandbox initiatives.
Their post includes a disturbing thought. "Reversing the stance on fingerprinting could open the door to further data collection, including to crafting dynamic, generative AI-powered ads tailored with huge precision. Indeed, such applications would require new data..."

Thanks to long-time Slashdot reader sinij for sharing the news.
AI

Futurist Predicts AI-Powered 'Digital Superpowers' by 2030 (bigthink.com)

Unanimous AI's founder Louis Rosenberg predicts a "wave" of new superhuman abilities is coming soon, ones that we will experience profoundly "as self-embodied skills that we carry around with us throughout our lives"...

"[B]y 2030, a majority of us will live our lives with context-aware AI agents bringing digital superpowers into our daily experiences." They will be unleashed by context-aware AI agents that are loaded into body-worn devices that see what we see, hear what we hear, experience what we experience, and provide us with enhanced abilities to perceive and interpret our world... The majority of these superpowers will be delivered through AI-powered glasses with cameras and microphones that act as their eyes and ears, but there will be other form factors for people who just don't like eyewear... [For example, earbuds with built in cameras] We will whisper to these intelligent devices, and they will whisper back, giving us recommendations, guidance, spatial reminders, directional cues, haptic nudges, and other verbal and perceptual content that will coach us through our days like an omniscient alter ego... When you spot that store across the street, you simply whisper to yourself, "I wonder when it opens?" and a voice will instantly ring back into your ears, "10:30 a.m...."

By 2030, we will not need to whisper to the AI agents traveling with us through our lives. Instead, you will be able to simply mouth the words, and the AI will know what you are saying by reading your lips and detecting activation signals from your muscles. I am confident that "mouthing" will be deployed because it's more private, more resilient to noisy spaces, and most importantly, it will feel more personal, internal, and self-embodied. By 2035, you may not even need to mouth the words. That's because the AI will learn to interpret the signals in our muscles with such subtlety and precision — we will simply need to think about mouthing the words to convey our intent... When you grab a box of cereal in a store and are curious about the carbs, or wonder whether it's cheaper at Walmart, the answers will just ring in your ears or appear visually. It will even give you superhuman abilities to assess the emotions on other people's faces, predict their moods, goals, or intentions, coaching you during real-time conversations to make you more compelling, appealing, or persuasive...

I don't make these claims lightly. I have been focused on technologies that augment our reality and expand human abilities for over 30 years and I can say without question that the mobile computing market is about to run in this direction in a very big way.

Instead of Augmented Reality, how about Augmented Mentality? The article notes Meta has already added context-aware AI to its Ray-Ban glasses and suggests that within five years Meta might try "selling us superpowers we can't resist". And Google's new AI-powered operating system Android XR hopes to augment our world with seamless context-aware content. But think about where this is going. "[E]ach of us could find ourselves in a new reality where technologies controlled by third parties can selectively alter what we see and hear, while AI-powered voices whisper in our ears with targeted advice and guidance."

And yet " by 2030 the superpowers that these devices give us won't feel optional. After all, not having them could put us at a social and cognitive disadvantage."

Thanks to Slashdot reader ZipNada for sharing the news.
Social Networks

'What If They Ban TikTok and People Keep Using It Anyway?' (yahoo.com)

"What if they ban TikTok and people keep using it anyway?" asks the New York Times, saying a pending ban in America "is vague on how it would be enforced" Some experts say that even if TikTok is actually banned this month or soon, there may be so many legal and technical loopholes that millions of Americans could find ways to keep TikTok'ing. The law is "Swiss cheese with lots of holes in it," said Glenn Gerstell, a former top lawyer at the National Security Agency and a senior adviser at the Center for Strategic and International Studies, a policy research organization. "There are obviously ways around it...." When other countries ban apps, the government typically orders internet providers and mobile carriers to block web traffic to and from the blocked website or app. That's probably not how a ban on TikTok in the United States would work. Two lawyers who reviewed the law said the text as written doesn't appear to order internet and mobile carriers to stop people from using TikTok.

There may not be unanimity on this point. Some lawyers who spoke to Bloomberg News said internet providers would be in legal hot water if they let their customers continue to use a banned TikTok. Alan Rozenshtein, a University of Minnesota associate law professor, said he suspected internet providers aren't obligated to stop TikTok use "because Congress wanted to allow the most dedicated TikTok users to be able to access the app, so as to limit the First Amendment infringement." The law also doesn't order Americans to stop using TikTok if it's banned or to delete the app from our phones....

Odds are that if the Supreme Court declares the TikTok law constitutional and if a ban goes into effect, blacklisting the app from the Apple and Google app stores will be enough to stop most people from using TikTok... If a ban goes into effect and Apple and Google block TikTok from pushing updates to the app on your phone, it may become buggy or broken over time. But no one is quite sure how long it would take for the TikTok app to become unusable or compromised in this situation.

Users could just sideload the app after downloading it outside a phone's official app store, the article points out. (More than 10 million people sideloaded Fortnite within six weeks of its removal from Apple and Google's app stores.) And there's also the option of just using a VPN — or watching TikTok's web site.

(I've never understood why all apps haven't already been replaced with phone-optimized web sites...)
Youtube

YouTubers Are Selling Their Unused Video Footage To AI Companies (bloomberg.com)

An anonymous reader shares a report: YouTubers and other digital content creators are selling their unused video footage to AI companies seeking exclusive videos to better train their AI algorithms, oftentimes netting thousands of dollars per deal. OpenAI, Alphabet's Google, AI media company Moonvalley and several other AI companies are collectively paying hundreds of content creators for access to their unpublished videos, according to people familiar with the negotiations.

That content, which hasn't been posted elsewhere online, is considered valuable for training artificial intelligence systems since it's unique. AI companies are currently paying between $1 and $4 per minute of footage, the people said, with prices increasing depending on video quality or format. Videos that are shot in 4K, for example, go for a higher price, as does non-traditional footage like videos captured from drones or using 3D animations. Most footage, such as unused video created for networks like YouTube, Instagram and TikTok, is selling for somewhere between $1 and $2 per minute.
