×
Government

Are AI-Generated Search Results Still Protected by Section 230? (msn.com) 59

Starting this week millions will see AI-generated answers in Google's search results by default. But the announcement Tuesday at Google's annual developer conference suggests a future that's "not without its risks, both to users and to Google itself," argues the Washington Post: For years, Google has been shielded for liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won't apply when its AI answers search questions directly. "As we all know, generative AIs hallucinate," said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. "So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information," rather than just the distributor of it...

Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn't extend Section 230 to cover AI tools. "As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors," he predicted. "It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate." But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be "a really good outcome."

Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has "outlived its usefulness."

The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would "decimate small tech" and "discourage free speech online."

The digital law professor points out Google has traditionally escaped legal liability by attributing its answers to specific sources — but it's not just Google that has to worry about the issue. The article notes that Microsoft's Bing search engine also supplies AI-generated answers (from Microsoft's Copilot). "And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot."

The article also note sthat several U.S. Congressional committees are considering "a bevy" of AI bills...
AI

Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities (schneier.com) 38

Security professional Bruce Schneier argues that large language models have the same vulnerability as phones in the 1970s exploited by John Draper.

"Data and control used the same channel," Schneier writes in Communications of the ACM. "That is, the commands that told the phone switch what to do were sent along the same path as voices." Other forms of prompt injection involve the LLM receiving malicious instructions in its training data. Another example hides secret commands in Web pages. Any LLM application that processes emails or Web pages is vulnerable. Attackers can embed malicious commands in images and videos, so any system that processes those is vulnerable. Any LLM application that interacts with untrusted users — think of a chatbot embedded in a website — will be vulnerable to attack. It's hard to think of an LLM application that isn't vulnerable in some way.

Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data — whether it be training data, text prompts, or other input into the LLM — is mixed up with the commands that tell the LLM what to do, the system will be vulnerable. But unlike the phone system, we can't separate an LLM's data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it's the very thing that enables prompt injection.

Like the old phone system, defenses are likely to be piecemeal. We're getting better at creating LLMs that are resistant to these attacks. We're building systems that clean up inputs, both by recognizing known prompt-injection attacks and training other LLMs to try to recognize what those attacks look like. (Although now you have to secure that other LLM from prompt-injection attacks.) In some cases, we can use access-control mechanisms and other Internet security systems to limit who can access the LLM and what the LLM can do. This will limit how much we can trust them. Can you ever trust an LLM email assistant if it can be tricked into doing something it shouldn't do? Can you ever trust a generative-AI traffic-detection video system if someone can hold up a carefully worded sign and convince it to not notice a particular license plate — and then forget that it ever saw the sign...?

Someday, some AI researcher will figure out how to separate the data and control paths. Until then, though, we're going to have to think carefully about using LLMs in potentially adversarial situations...like, say, on the Internet.

Schneier urges engineers to balance the risks of generative AI with the powers it brings. "Using them for everything is easier than taking the time to figure out what sort of specialized AI is optimized for the task.

"But generative AI comes with a lot of security baggage — in the form of prompt-injection attacks and other security risks. We need to take a more nuanced view of AI systems, their uses, their own particular risks, and their costs vs. benefits."
News

Robert Dennard, Inventor of DRAM, Dies At 91 20

necro81 writes: Robert Dennard was working at IBM in the 1960s when he invented a way to store one bit using a single transistor and capacitor. The technology became dynamic random access memory (DRAM), which when implemented using the emerging technology of silicon integrated circuits, helped catapult computing by leaps and bounds. The first commercial DRAM chips in the late 1960s held just 1024 bits; today's DDR5 modules hold hundreds of billions.

Dr. Robert H. Dennard passed away last month at age 91. (alternate link)

In the 1970s he helped guide technology roadmaps for the ever-shrinking feature size of lithography, enabling the early years of Moore's Law. He wrote a seminal paper in 1974 relating feature size and power consumption that is now referred to as Dennard Scaling. His technological contributions earned him numerous awards, and accolades from the National Academy of Engineering, IEEE, and the National Inventor's Hall of Fame.
Data Storage

Father of SQL Says Yes to NoSQL (theregister.com) 74

An anonymous reader shared this report from the Register: The co-author of SQL, the standardized query language for relational databases, has come out in support of the NoSQL database movement that seeks to escape the tabular confines of the RDBMS. Speaking to The Register as SQL marks its 50th birthday, Donald Chamberlin, who first proposed the language with IBM colleague Raymond Boyce in a 1974 paper [PDF], explains that NoSQL databases and their query languages could help perform the tasks relational systems were never designed for. "The world doesn't stay the same thing, especially in computer science," he says. "It's a very fast, evolving, industry. New requirements are coming along and technology has to change to meet them, I think that's what's happening. The NoSQL movement is motivated by new kinds of applications, particularly web applications, that need massive scalability and high performance. Relational databases were developed in an earlier generation when scalability and performance weren't quite as important. To get the scalability and performance that you need for modern apps, many systems are relaxing some of the constraints of the relational data model."

[...] A long-time IBMer, Chamberlin is now semi-retired, but finds time to fulfill a role as a technical advisor for NoSQL company Couchbase. In the role, he has become an advocate for a new query language designed to overcome the "impedance mismatch" between data structures in the application language and a database, he says. UC San Diego professor Yannis Papakonstantinou has proposed SQL++ to solve this problem, with a view to addressing impedance mismatch between heavily object-based JavaScript, the core language for web development and the assumed relational approach embedded in SQL. Like C++, SQL++ is designed as a compatible extension of an earlier language, SQL, but is touted as better able to handle the JSON file format inherent in JavaScript. Couchbase and AWS have adopted the language, although the cloud giant calls it PartiQL.

At the end of the interview, Chamblin adds that "I don't think SQL is going to go away. A large part of the world's business data is encoded in SQL, and data is very sticky. Once you've got your database, you're going to leave it there. Also, relational systems do a very good job of what they were designed to do...

"[I]f you're a startup company that wants to sell shoes on the web or something, you're going to need a database, and one of those SQL implementations will do the job for free. I think relational databases and the SQL language will be with us for a long time."
Apple

Is the Era of Stickers In Apple Boxes Coming To an End? (9to5mac.com) 57

Citing a memo distributed to Apple Store employees, 9to5Mac reports that the new iPad Pro and iPad Air lineups will not include Apple stickers in the box -- "a key piece of memorabilia" that dates back as far as 1977's Apple II, notes Ars Technica. While the company says that this is part of its environmental goals to completely remove plastic from its packaging, it begs the question: is the era of stickers in Apple boxes coming to an end? 9to5Mac reports: The M3 MacBook Air that launched in March includes stickers in the box, but Apple Vision Pro (which launched in February) does not. Will the iPhone 16 include stickers in the box? Only time will tell. Ars' Andrew Cunningham writes about the origins of the Apple stickers: Apple has included stickers with its products at least as far back as the Apple II in 1977 when the stickers still said "Apple Computer" on them in the company's then-favored Motter Tektura typeface (I couldn't track down a vintage Apple II unboxing, but I did find some fun photos of Apple enthusiast Dan Budiac opening a sealed-in-box mid-'80s-era Apple IIc, complete with rainbow pack-in stickers). I myself became familiar with them during the height of the iPod in the early to mid-2000s when Apple was still firmly a tech underdog, and people would stick white Apple logo stickers to their cars to show off their non-conformist cred and/or Apple brand loyalty.

As Apple's products became more colorful in the 2010s, the Apple logo stickers would sometimes be color-matched to the device you had just bought, a cute bit of attention to detail that has carried over into present-day MagSafe cables and color-matched iMac keyboards and trackpads.
The report notes that you can still request an Apple sticker at Apple Stores at the time of your purchase; however, Amazon, Best Buy, and other retailers don't appear to have them available.
Technology

'The Good Enough Trap' (ian-leslie.com) 80

An anonymous reader shares an essay: Software designers refer to "the good enough principle." It means, simply put, that sometimes you should prioritise functionality over perfection. As a relentless imperfectionist, I'm inclined to embrace this idea. I gave this newsletter its name to encourage myself to post rough versions of my pieces rather than not to write them at all. When it comes to parenting, I'm a Winnicottian: I believe you shouldn't try to be the perfect mum or dad because there's no such thing. At work and in life, it's often true that the optimal strategy is not to strive for the optimal result, but to aim for what works and hope for the best.

The good enough can be a staging post to the perfect. The iPhone's camera was a "good enough" substitute for a compact camera. It did the job, but it wasn't as good as a Kodak or a Fuji. Until it was. Technological innovation often works like this, but the improvement curve isn't always as steep as with the smartphone camera. Sometimes we allow ourselves to get stuck with a product which is good enough to displace the competition, without fulfilling the same range of needs. The psychological and social ramifications can be profound.

Let's say you're a student and you use ChatGPT to write your essays for you. Give it the right prompts and it will produce pieces that are good enough to get the grade you need. That seems like a win: it saves you time and effort, presuming your tutors don't notice or don't care. Maybe you get through the whole of university this way. But be wary of this equilibrium. Over the longer term, you will be stunting the growth of your own mind. The struggle of turning inchoate thought into readable sentences and paragraphs is a powerful exercise for the brain. It's how you get better at thinking. It is thinking.

Science

Scientists Find a 'Missing Link' Between Poor Diet and Higher Cancer Risk (sciencealert.com) 57

Science Alert reports that a team of researchers found "that changes in glucose metabolism could help cancer grow by temporarily disabling a gene that protects us from tumors called BRCA2." The team first examined people who inherited one faulty copy of BRCA2. They found that cells from these people were more sensitive to methylglyoxal (MGO), which is produced when cells break down glucose for energy in the process of glycolysis. Glycolysis generates over 90 percent of the MGO in cells, which a pair of enzymes typically keep to minimal levels. In the event they can't keep up, high MGO levels can lead to the formation of harmful compounds that damage DNA and proteins. In conditions like diabetes, where MGO levels are elevated due to high blood sugar, these harmful compounds contribute to disease complications.

The researchers discovered that MGO can temporarily disable the tumor-suppressing functions of the BRCA2 protein, resulting in mutations linked to cancer development...

As the BRCA2 allele isn't permanently inactivated, functional forms of the protein it produces can later return to normal levels. But cells repeatedly exposed to MGO may continue to accumulate cancer-causing mutations whenever existing BRCA2 protein production fails. Overall, this suggests that changes in glucose metabolism can disrupt BRCA2 function via MGO, contributing to the development and progression of cancer...

This new information may lead to strategies for cancer prevention or early detection. "Methylglyoxal can be easily detected by a blood test for HbA1C, which could potentially be used as a marker," Venkitaraman says. "Furthermore, high methylglyoxal levels can usually be controlled with medicines and a good diet, creating avenues for proactive measures against the initiation of cancer."

Their research has been published in Cell.
AI

Microsoft Details How It's Developing AI Responsibly (theverge.com) 40

Thursday the Verge reported that a new report from Microsoft "outlines the steps the company took to release responsible AI platforms last year." Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams making generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which puts a watermark on a photo, tagging it as made by an AI model.

The company says it's given Azure AI customers access to tools that detect problematic content like hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March this year to include indirect prompt injections where the malicious instructions are part of data ingested by the AI model.

It's also expanding its red-teaming efforts, including both in-house red teams that deliberately try to bypass safety features in its AI models as well as red-teaming applications to allow third-party testing before releasing new models.

Microsoft's chief Responsible AI officer told the Washington Post this week that "We work with our engineering teams from the earliest stages of conceiving of new features that they are building." "The first step in our processes is to do an impact assessment, where we're asking the team to think deeply about the benefits and the potential harms of the system. And that sets them on a course to appropriately measure and manage those risks downstream. And the process by which we review the systems has checkpoints along the way as the teams are moving through different stages of their release cycles...

"When we do have situations where people work around our guardrails, we've already built the systems in a way that we can understand that that is happening and respond to that very quickly. So taking those learnings from a system like Bing Image Creator and building them into our overall approach is core to the governance systems that we're focused on in this report."

They also said " it would be very constructive to make sure that there were clear rules about the disclosure of when content is synthetically generated," and "there's an urgent need for privacy legislation as a foundational element of AI regulatory infrastructure."
Privacy

When a Politician Sues a Blog to Unmask Its Anonymous Commenter 79

Markos Moulitsas is the poll-watching founder of the political blog Daily Kos. Thursday he wrote that in 2021, future third-party presidential candidate RFK Jr. had sued their web site.

"Things are not going well for him." Back in 2021, Robert F. Kennedy Jr. sued Daily Kos to unmask the identity of a community member who posted a critical story about his dalliance with neo-Nazis at a Berlin rally. I updated the story here, here, here, here, and here.

To briefly summarize, Kennedy wanted us to doxx our community member, and we stridently refused.

The site and the politician then continued fighting for more than three years. "Daily Kos lost the first legal round in court," Moulitsas posted in 2021, "thanks to a judge who is apparently unconcerned with First Amendment ramifications given the chilling effect of her ruling."

But even then, Moulitsas was clear on his rights: Because of Section 230 of the Communications Decency Act, [Kennedy] cannot sue Daily Kos — the site itself — for defamation. We are protected by the so-called safe harbor. That's why he's demanding we reveal what we know about "DowneastDem" so they can sue her or him directly.
Moulitsas also stressed that his own 2021 blog post was "reiterating everything that community member wrote, and expanding on it. And so instead of going after a pseudonymous community writer/diarist on this site, maybe Kennedy will drop that pointless lawsuit and go after me... consider this an escalation." (Among other things, the post cited a German-language news account saying Kennedy "sounded the alarm concerning the 5G mobile network and Microsoft founder Bill Gates..." Moulitsas also noted an Irish Times article which confirmed that at the rally Kennedy spoke at, "Noticeable numbers of neo-Nazis, kitted out with historic Reich flags and other extremist accessories, mixed in with the crowd.")

So what happened? Moulitsas posted an update Thursday: Shockingly, Kennedy got a trial court judge in New York to agree with him, and a subpoena was issued to Daily Kos to turn over any information we might have on the account. However, we are based in California, not New York, so once I received the subpoena at home, we had a California court not just quash the subpoena, but essentially signal that if New York didn't do the right thing on appeal, California could very well take care of it.

It's been a while since I updated, and given a favorable court ruling Thursday, it's way past time to catch everyone up.

New York is one of the U.S. states that doesn't have a strict "Dendrite standard" law protecting anonymous speech. But soon the blog founder discovered he had allies: The issues at hand are so important that The New York Times, the E.W.Scripps Company, the First Amendment Coalition, New York Public Radio, and seven other New York media companies joined the appeals effort with their own joint amicus brief. What started as a dispute over a Daily Kos diarist has become a meaningful First Amendment battle, with major repercussions given New York's role as a major news media and distribution center.

After reportedly spending over $1 million on legal fees, Kennedy somehow discovered the identity of our community member sometime last year and promptly filed a defamation suit in New Hampshire in what seemed a clumsy attempt at forum shopping, or the practice of choosing where to file suit based on the belief you'll be granted a favorable outcome. The community member lives in Maine, Kennedy lives in California, and Daily Kos doesn't publish specifically in New Hampshire. A perplexed court threw out the case this past February on those obvious jurisdictional grounds....

Then, last week, the judge threw out the appeal of that decision because Kennedy's lawyer didn't file in time — and blamed the delay on bad Wi-Fi...

Kennedy tried to dismiss the original case, the one awaiting an appellate decision in New York, claiming it was now moot. His legal team had sued to get the community member's identity, and now that they had it, they argued that there was no reason for the case to continue. We disagreed, arguing that there were important issues to resolve (i.e., Dendrite), and we also wanted lawyer fees for their unconstitutional assault on our First Amendment rights...

On Thursday, in a unanimous decision, a four-judge New York Supreme Court appellate panel ordered the case to continue, keeping the Dendrite issue alive and also allowing us to proceed in seeking damages based on New York's anti-SLAPP law, which prohibits "strategic lawsuits against public participation."

Thursday's blog post concludes with this summation. "Kennedy opened up a can of worms and has spent millions fighting this stupid battle. Despite his losses, we aren't letting him weasel out of this."
The Internet

Humans Now Share the Web Equally With Bots, Report Warns (independent.co.uk) 32

An anonymous reader quotes a report from The Independent, published last month: Humans now share the web equally with bots, according to a major new report -- as some fear that the internet is dying. In recent months, the so-called "dead internet theory" has gained new popularity. It suggests that much of the content online is in fact automatically generated, and that the number of humans on the web is dwindling in comparison with bot accounts. Now a new report from cyber security company Imperva suggests that it is increasingly becoming true. Nearly half, 49.6 per cent, of all internet traffic came from bots last year, its "Bad Bot Report" indicates. That is up 2 percent in comparison with last year, and is the highest number ever seen since the report began in 2013. In some countries, the picture is worse. In Ireland, 71 per cent of internet traffic is automated, it said.

Some of that rise is the result of the adoption of generative artificial intelligence and large language models. Companies that build those systems use bots scrape the internet and gather data that can then be used to train them. Some of those bots are becoming increasingly sophisticated, Imperva warned. More and more of them come from residential internet connections, which makes them look more legitimate. "Automated bots will soon surpass the proportion of internet traffic coming from humans, changing the way that organizations approach building and protecting their websites and applications," said Nanhi Singh, general manager for application security at Imperva. "As more AI-enabled tools are introduced, bots will become omnipresent."

Music

Back From the Dead: Amarok 3.0 Music Player Released (kde.org) 56

"Aamrok 3.0, ported to Qt5/KDE Frameworks 5, has been released," writes Slashdot reader serafean. "With the heavy lifting being done, the Qt6/KF6 version is expected later in the year." Originally developed for Linux as part of the KDE desktop environment, Amarok is a free, cross-platform music player that supports various audio formats and a user interface that can be tailored to individual preferences. These are the main features/changes, as highlighted in a KDE blog post: FEATURES:
- Added a visual hint that context view applets can be resized in edit mode.
- Display missing metadata errors in Wikipedia applet UI.
- Add a button to stop automatic Wikipedia page updating. (BR 485813)

CHANGES:
- Replace defunct lyricwiki with lyrics.ovh as lyrics provider for now. (BR 455937)
- Show only relevant items in wikipedia applet right click menu (BR 323941), use monobook skin for opened links and silently ignore non-wikipedia links.
- Don't show non-functional play mode controls in dynamic mode (BR 287055)
The changelog is available here. You can find the package on download.kde.org.
Open Source

Bruce Perens Emits Draft Post-Open Zero Cost License (theregister.com) 73

After convincing the world to buy open source and give up the Morse Code test for ham radio licenses, Bruce Perens has a new gambit: develop a license that ensures software developers receive compensation from large corporations using their work. The new Post-Open Zero Cost License seeks to address the financial disparities in open source software use and includes provisions against using content to train AI models, aligning its enforcement with non-profit performing rights organizations like ASCAP. Here's an excerpt from an interview The Register conducted with Perens: The license is one component among several -- the paid license needs to be hammered out -- that he hopes will support his proposed Post-Open paradigm to help software developers get paid when their work gets used by large corporations. "There are two paradigms that you can use for this," he explains in an interview. "One is Spotify and the other is ASCAP, BMI, and SESAC. The difference is that Spotify is a for-profit corporation. And they have to distribute profits to their stockholders before they pay the musicians. And as a result, the musicians complain that they're not getting very much at all."

"There are two paradigms that you can use for this," he explains in an interview. "One is Spotify and the other is ASCAP, BMI, and SESAC. The difference is that Spotify is a for-profit corporation. And they have to distribute profits to their stockholders before they pay the musicians. And as a result, the musicians complain that they're not getting very much at all." Perens wants his new license -- intended to complement open source licensing rather than replace it -- to be administered by a 501(c)(6) non-profit. This entity would handle payments to developers. He points to the music performing rights organizations as a template, although among ASCAP, BMI, SECAC, and GMR, only ASCAP remains non-profit. [...]

The basic idea is companies making more than $5 million annually by using Post-Open software in a paid-for product would be required to pay 1 percent of their revenue back to this administrative organization, which would distribute the funds to the maintainers of the participating open source project(s). That would cover all Post-Open software used by the organization. "The license that I have written is long -- about as long as the Affero GPL 3, which is now 17 years old, and had to deal with a lot more problems than the early licenses," Perens explains. "So, at least my license isn't excessively long. It handles all of the abuses of developers that I'm conscious of, including things I was involved in directly like Open Source Security v. Perens, and Jacobsen v. Katzer."

"It also makes compliance easier for companies than it is today, and probably cheaper even if they do have to pay. It creates an entity that can sue infringers on behalf of any developer and gets the funding to do it, but I'm planning the infringement process to forgive companies that admit the problem and cure the infringement, so most won't ever go to court. It requires more infrastructure than open source developers are used to. There's a central organization for Post-Open (or it could be three organizations if we divided all of the purposes: apportioning money to developers, running licensing, and enforcing compliance), and an outside CPA firm, and all of that has to be structured so that developers can trust it."
You can read the full interview here.
Wikipedia

Russia Clones Wikipedia, Censors It, Bans Original (404media.co) 243

Jules Roscoe reports via 404 Media: Russia has replaced Wikipedia with a state-sponsored encyclopedia that is a clone of the original Russian Wikipedia but which conveniently has been edited to omit things that could cast the Russian government in poor light. Real Russian Wikipedia editors used to refer to the real Wikipedia as Ruwiki; the new one is called Ruviki, has "ruwiki" in its url, and has copied all Russian-language Wikipedia articles and strictly edited them to comply with Russian laws. The new articles exclude mentions of "foreign agents," the Russian government's designation for any person or entity which expresses opinions about the government and is supported, financially or otherwise, by an outside nation. [...]

Wikimedia RU, the Russian-language chapter of the non-profit that runs Wikipedia, was forced to shut down in late 2023 amid political pressure due to the Ukraine war. Vladimir Medeyko, the former head of the chapter who now runs Ruviki, told Novaya Gazeta Europe in July that he believed Wikipedia had problems with "reliability and neutrality." Medeyko first announced the project to copy and censor the 1.9 million Russian-language Wikipedia articles in June. The goal, he said at the time, was to edit them so that the information would be "trustworthy" as a source for all Russian users. Independent outlet Bumaga reported in August that around 110 articles about the war in Ukraine were missing in full, while others were severely edited. Ruviki also excludes articles about reports of torture in prisons and scandals of Russian government representatives. [...]

Graphic designer Constantine Konovalov calculated the number of characters changed between Wikipedia RU and Ruviki articles on the same topics, and found that there were 205,000 changes in articles about freedom of speech; 158,000 changes in articles about human rights; 96,000 changes in articles about political prisoners; and 71,000 changes in articles about censorship in Russia. He wrote in a post on X that the censorship was "straight out of a 1984 novel." Interestingly, the Ruviki article about George Orwell's 1984 entirely omits the Ministry of Truth, which is the novel's main propaganda outlet concerned with governing "truth" in the country.

Space

The Naked-Eye Sky Will (Briefly) Host a New Star (cnn.com) 41

RockDoctor (Slashdot reader #15,477) wants to tell you about a "new" star that will be visible to the naked eye — without a telescope — sometime before September: By "star", I do not mean "comet", "meteorite" or "firefly", but genuine [star] photons arriving here after about 3000 years in flight, causing your eyes to see a bright point on the nighttime sky. When it happens, the star will go from needing-a- telescope-or-good-binoculars-to-see, to being the 50th (or even 30th) brightest star in the sky.

For a week or so. Of course, it could just go full-on supernova, and be visible in daylight for a few weeks, and dominate the night sky for months. But that's unlikely.

Named "T Corona Borealis" (because it's the 20th variable star studied in the constellation "Corona Borealis") it's now visible all night, all year, for about 60% of the world's population (although normally you need binoculars to see it).

But RockDoctor writes that in 2016, "T CrB" (as it is known) has started showing "a similar pattern of changes" to what happened in the late 1930s when it became one of only 10 "recurring nova" known to science: In 2023, the pattern continued and the match of details got better. The star is expected to undergo another "eruption" — becoming one of the brightest few stars in the sky, within the next couple of months. Maybe the next couple of weeks. Maybe the next couple of hours....

Last week, astrophysicist Dr Becky Smethurst posted on the expected event in her monthly "Night Sky News" video blog. If you prefer your information in text not video, the AAVSO (variable star observers) posted a news alert for it's observers a while ago. They also hosted a seminar on the star, and why it's eruption is expected Real Soon Now, which is also on YouTube. A small selection of recent papers on the subject are posted here, which also includes information on how to get the most up-to-date brightness readings (unless you're a HST / JWST / Palomar / Hawai`i / Chile telescope operator). Yes, the "big guns" of astronomy have prepared their "TOO — Target Of Opportunity" plans, and will be dropping normal observations really quickly when the news breaks and slewing TOO the target.

You won't need your eclipse glasses for this. (Dr Becky's video covers where you can send them for re-use.) But you might want to photograph the appropriate part of the sky so you'll notice when the bomb goes off. Bomb? Did I say that the best model for what is happening is a thermonuclear explosion like a H-bomb the size of the Earth detonating? Well, that's the best analogue.

This CNN article includes a nice animation from NASA illustrating the multi-star interaction that's causing the event: The stars in the orbiting pair are close enough to each other that they interact violently. The red giant becomes increasingly unstable over time as it heats up, casting off its outer layers that land as matter on the white dwarf star. The exchange of matter causes the atmosphere of the white dwarf to gradually heat until it experiences a "runaway thermonuclear reaction," resulting in a nova [according to NASA]...

The NASAUniverse account on X, formerly known as Twitter, will provide updates about the outburst and its appearance.

The BBC reiterates the key data points — that "The rare cosmic event is expected to take place sometime before September 2024. When it occurs it will likely be visible to the naked eye. No expensive telescope will be needed to witness this cosmic performance, says NASA."
The Internet

FCC Votes To Restore Net Neutrality Rules (nytimes.com) 54

An anonymous reader quotes a report from the New York Times: The Federal Communications Commission voted on Thursday to restore regulations that expand government oversight of broadband providersand aim to protect consumer access to the internet, a move that will reignite a long-running battle over the open internet. Known as net neutrality, the regulations were first put in place nearly a decade ago under the Obama administration and are aimed at preventing internet service providers like Verizon or Comcast from blocking or degrading the delivery of services from competitors like Netflix and YouTube. The rules were repealed under President Donald J. Trump, and have proved to be a contentious partisan issue over the years while pitting tech giants against broadband providers.

In a 3-to-2 vote along party lines, the five-member commission appointed by President Biden revived the rules that declare broadband a utility-like service regulated like phones and water. The rules also give the F.C.C. the ability to demand broadband providers report and respond to outages, as well as expand the agency's oversight of the providers' security issues. Broadband providers are expected to sue to try to overturn the reinstated rules.

The core purpose of the regulations is to prevent internet service providers from controlling the quality of consumers' experience when they visit websites and use services online. When the rules were established, Google, Netflix and other online services warned that broadband providers had the incentive to slow down or block access to their services. Consumer and free speech groups supported this view. There have been few examples of blocking or slowing of sites, which proponents of net neutrality say is largely because of fear that the companies would invite scrutiny if they did so. And opponents say the rules could lead to more and unnecessary government oversight of the industry.

Games

Veteran PC Game 'Sopwith' Celebrates 40th Anniversary (github.io) 42

Longtime Slashdot reader sfraggle writes: Biplane shoot-'em up, Sopwith, is celebrating 40 years today since its first release back in 1984. The game is one of the oldest PC games still in active development today, originating as an MS-DOS game for the original IBM PC. The 40th anniversary site has a detailed history of how the game was written as a tech demo for the now-defunct Imaginet networking system. There is also a video interview with its original authors. "The game involves piloting a Sopwith biplane, attempting to bomb enemy buildings while avoiding fire from enemy planes and various other obstacles," reads the Wiki page. "Sopwith uses four-color CGA graphics and music and sound effects use the PC speaker. A sequel with the same name, but often referred to as Sopwith 2, was released in 1985."

You can play Sopwith in your browser here.
Transportation

Linux Can Finally Run Your Car's Safety Systems and Driver-Assistance Features (arstechnica.com) 44

An anonymous reader quotes a report from Ars Technica: There's a new Linux distro on the scene today, and it's a bit specialized. Its development was led by the automotive electronics supplier Elektrobit, and it's the first open source OS that complies with the automotive industry's functional safety requirements. [...] With Elektrobit's EB corbos Linux for Safety Applications (that sure is a long name), there's an open source Linux distro that finally fits the bill, having just been given the thumbs up by the German organization TUV Nord. (It also complies with the IEC 61508 standard for safety applications.) "The beauty of our concept is that you don't even need to safety-qualify Linux itself," said Moritz Neukirchner, a senior director at Elektrobit overseeing SDVs. Instead, an external safety monitor runs in a hypervisor, intercepting and validating kernel actions.

"When you look at how safety is typically being done, look at communication -- you don't safety-certify the communication specs or Ethernet stack, but you do a checker library on top, and you have a hardware anchor for checking down below, and you insure it end to end but take everything in between out of the certification path. And we have now created a concept that allows us to do exactly that for an operating system," Neukirchner told me. "So in the end, since we take Linux out of the certification path and make it usable in a safety-related context, we don't have any problems in keeping up to speed with the developer community," he explained. "Because if you start it off and say, 'Well, we're going to do Linux as a one-shot for safety,' you're going to have the next five patches and you're off [schedule] again, especially with the security regulation that's now getting toward effect now, starting in July with the UNECE R155 that requires continuous cybersecurity management vulnerability scanning for all software that ends up in the vehicle."

"In the end, we see roughly 4,000 kernel security patches within eight years for Linux. And this is the kind of challenge that you're being put up to if you want to participate in that speed of innovation of an open source community as rich as that of Linux and now want to combine this with safety-related applications," Neukirchner said. Elektrobit developed EB corbos Linux for Safety Applications together with Canonical, and together they will share the maintenance of keeping it compliant with safety requirements over time.

Power

California Is Grappling With a Growing Problem: Too Much Solar (washingtonpost.com) 338

An anonymous reader quotes a report from the Washington Post: In sunny California, solar panels are everywhere. They sit in dry, desert landscapes in the Central Valley and are scattered over rooftops in Los Angeles's urban center. By last count, the state had nearly 47 gigawatts of solar power installed -- enough to power 13.9 million homes and provide over a quarter of the Golden State's electricity. But now, the state and its grid operator are grappling with a strange reality: There is so much solar on the grid that, on sunny spring days when there's not as much demand, electricity prices go negative. Gigawatts of solar are "curtailed" -- essentially, thrown away. In response, California has cut back incentives for rooftop solar and slowed the pace of installing panels. But the diminishing economic returns may slow the development of solar in a state that has tried to move to renewable energy. And as other states build more and more solar plants of their own, they may soon face the same problems.

Curtailing solar isn't technically difficult -- according to Paul Denholm, senior research fellow at the National Renewable Energy Laboratory, it's equivalent to flipping a switch for grid operators. But throwing away free power raises electricity prices. It has also undercut the benefits of installing rooftop solar. Since the 1990s, California has been paying owners of rooftop solar panels when they export their energy to the grid. That meant that rooftop solar owners got $0.20 to $0.30 for each kilowatt-hour of electricity that they dispatched. But a year ago, the state changed this system, known as "net-metering," and now only compensates new solar panel owners for how much their power is worth to the grid. In the spring, when the duck curve is deepest, that number can dip close to zero. Customers can get more money back if they install batteries and provide power to the grid in the early evening or morning.

The change has sparked a huge backlash from Californians and rooftop solar companies, which say that their businesses are flagging. Indeed, Wood Mackenzie predicts that California residential solar installations in 2024 will fall by around 40 percent. Some state politicians are now trying to reverse the rule. "Under the CPUC's leadership California is responsible for the largest loss of solar jobs in our nation's history," Bernadette del Chiaro, the executive director of the California Solar and Storage Association, said in a statement referring to California's public utility commission. But experts say that it reflects how the economics of solar are changing in a state that has gone all-in on the technology. [...] To cope, [California's grid operator, known as CAISO] is selling some excess power to nearby states; California is also planning to install additional storage and batteries to hold solar power until later in the afternoon. Transmission lines that can carry electricity to nearby regions will also help -- some of the lost power comes from regions where there simply aren't enough power lines to carry a sudden burst of solar. Denholm says the state is starting to take the steps needed to deal with the glut. "There are fundamental limits to how much solar we can put on the grid before you start needing a lot of storage," Denholm said. "You can't just sit around and do nothing."
Further reading: The Energy Institute discusses this problem in a recent blog post.

Since 2020, the residential electricity rates in California have risen by as much as 40% after adjusting for inflation. While there's been "a lot of finger-pointing about the cause of these increases," the authors note that the impact on rates is multiplied when customers install their own generation and buy fewer kilowatts-hours from the grid because those households "contribute less towards all the fixed costs in the system." These fixed costs include: vegetation management, grid hardening, distribution line undergrounding, EV charging stations, subsidies for low income customers, energy efficiency programs, and the poles and wires that we all rely on whether we are taking electricity off the grid or putting it onto the grid from our rooftop PV systems.

"Since those fixed costs still need to be paid, rates go up, shifting costs onto the kWhs still being bought from the grid."
Games

Pareto's Economic Theories Used To Find the Best Mario Kart 8 Racer (engadget.com) 12

Data scientist Antoine Mayerowitz, PhD, applied Vilfredo Pareto's (the early 20th-century Italian economist) theories to Mario Kart 8 Deluxe to determine the best racer combinations. "When you break down the build options (including driver stats and various vehicle details) in Mario Kart 8 Deluxe, there are over 700,000 possible combinations," notes Engadget. "But once you eliminate duplicates that differ only in appearance, you can narrow it down to 'only' 25,704 possibilities." From the report: Pareto's theories, most notably the Pareto front, help us navigate the complexities of choice. They can pinpoint the solutions with the most balanced strengths and the fewest trade-offs. Pareto's work is about efficiency and effectiveness. [...] Mayerowitz's Pareto front analysis lets you narrow your possibilities down to the 14 most efficient. And it turns out the game's top players were onto something: One of the combinations with the most ideal balance of speed, acceleration and mini-turbo is Cat Peach driving the Teddy Buggy, roller tires and cloud glider -- one already favored among Mario Kart 8 competitors.

Of course, if that combination isn't your cup of tea, there are others that allow you to stay within the Pareto front's optimal range. As Eurogamer points out, Donkey Kong, Wario (my old standby, mostly because he makes me laugh) and Princess Peach are often highlighted as drivers, and you can use Mayerowitz's data fields to find the best matching vehicles. Keep in mind that others have identical stats, so racers like Villager (female), Inkling Girl and Diddy Kong are separated only by appearances.

To find your ideal racer, you can head over to Mayerowitz's website. There, you can enter your most prized stats and view the combos that give you the best balance (those highlighted in yellow), according to Pareto's theories.

EU

EU: Meta Cannot Rely On 'Pay Or Okay' (europa.eu) 110

The EU's European Data Protection Board oversees its privacy-protecting GDPR policies.

Earlier this week, TechCrunch reported that nearly two dozen civil society groups and nonprofits wrote the Board an open letter "urging it not to endorse a strategy used by Meta that they say is intended to bypass the EU's privacy protections for commercial gain."

Meta's strategy is sometimes called "Pay or Okay," writes long-time Slashdot reader AmiMoJo : Meta offers users a choice: "consent" to tracking, or pay over €250/year to use its sites without invasive monetization of personal data.
Meta prefers the phrase "subsccription for no ads," and told TechCrunch it makes them compliant with EU laws: A raft of complaints have been filed against Meta's implementation of the pay-or-consent tactic since it launched the "no ads" subscription offer last fall. Additionally, in a notable step last month, the European Union opened a formal investigation into Meta's tactic, seeking to find whether it breaches obligations that apply to Facebook and Instagram under the competition-focused Digital Markets Act. That probe remains ongoing.
The letter to the Board called for "robust protections that prioritize data subjects' agency and control over their information." And Wednesday the board issued its first decision:

"[I]n most cases, it will not be possible for [social media services] to comply with the requirements for valid consent, if they confront users only with a choice between consenting to processing of personal data for behavioural advertising purposes and paying a fee." The EDPB considers that offering only a paid alternative to services which involve the processing of personal data for behavioural advertising purposes should not be the default way forward for controllers. When developing alternatives, large online platforms should consider providing individuals with an 'equivalent alternative' that does not entail the payment of a fee. If controllers do opt to charge a fee for access to the 'equivalent alternative', they should give significant consideration to offering an additional alternative. This free alternative should be without behavioural advertising, e.g. with a form of advertising involving the processing of less or no personal data.
EDPB Chair, Anu Talus added: "Controllers should take care at all times to avoid transforming the fundamental right to data protection into a feature that individuals have to pay to enjoy."

Slashdot Top Deals