The Courts

Lawsuit Says OpenAI Violated US Authors' Copyrights To Train AI Chatbot (reuters.com) 82

Two U.S. authors have filed a proposed class action lawsuit against OpenAI, claiming that the company infringed their copyrights by using their works without permission to train its generative AI system, ChatGPT. The plaintiffs, Massachusetts-based writers Paul Tremblay and Mona Awad, claim the data used to train ChatGPT included thousands of books, including those from illegal "shadow libraries." Reuters reports: The complaint estimated that OpenAI's training data incorporated over 300,000 books, including from illegal "shadow libraries" that offer copyrighted books without permission. Awad is known for novels including "13 Ways of Looking at a Fat Girl" and "Bunny." Tremblay's novels include "The Cabin at the End of the World," which was adapted into the M. Night Shyamalan film "Knock at the Cabin" released in February.

Tremblay and Awad said ChatGPT could generate "very accurate" summaries of their books, indicating that they appeared in its database. The lawsuit seeks an unspecified amount of money damages on behalf of a nationwide class of copyright owners whose works OpenAI allegedly misused.

Books

How Review-Bombing Can Tank a Book Before It's Published (nytimes.com) 46

The website Goodreads has become an essential avenue for building readership, but the same features that help generate excitement can also backfire. The New York Times: Cecilia Rabess figured her debut novel, "Everything's Fine," would spark criticism: The story centers on a young Black woman working at Goldman Sachs who falls in love with a conservative white co-worker with bigoted views. But she didn't expect a backlash to strike six months before the book was published. In January, after a Goodreads user who had received an advance copy posted a plot summary that went viral on Twitter, the review site was flooded with negative comments and one-star reviews, with many calling the book anti-Black and racist. Some of the comments were left by users who said they had never read the book, but objected to its premise.

"It may look like a bunch of one-star reviews on Goodreads, but these are broader campaigns of harassment," Rabess said. "People were very keen not just to attack the work, but to attack me as well." In an era when reaching readers online has become a near-existential problem for publishers, Goodreads has become an essential avenue for building an audience. As a cross between a social media platform and a review site like Yelp, the site has been a boon for publishers hoping to generate excitement for books. But the same features that get users talking about books and authors can also backfire. Reviews can be weaponized, in some cases derailing a book's publication long before its release. "It can be incredibly hurtful, and it's frustrating that people are allowed to review books this way if they haven't read them," said Roxane Gay, an author and editor who also posts reviews on Goodreads. "Worse, they're allowed to review books that haven't even been written. I have books on there being reviewed that I'm not finished with yet."

Education

US Reading and Math Scores Drop To Lowest Level In Decades (npr.org) 248

The average test scores for 13-year-old students in the U.S. have decreased in reading and math since 2020, reaching the lowest levels in decades, with more significant declines in math. NPR reports: The average scores, from tests given last fall, declined 4 points in reading and 9 points in math, compared with tests given in the 2019-2020 school year, and are the lowest in decades. The declines in reading were more pronounced for lower-performing students, but scores dropped across all percentiles. The math scores were even more disappointing. On a scale of 500 points, the declines ranged from 6 to 8 points for middle- and high-performing students, to 12 to 14 points for low-performing students.

The math results also showed widening gaps based on gender and race. Scores decreased by 11 points for female students over 2020 results, compared with a 7-point decrease for male students. Among Black students, math scores declined 13 points, while white students had a 6-point drop. Compared with the 35-point gap between Black and white students in 2020, the disparity widened to 42 points.

While the scores show a drop from the pre-pandemic years, the results also show that there are other factors at work. The decline is even more substantial when compared with scores of a decade ago: The average scores declined 7 points in reading and 14 points in mathematics. The Education Department says plans are underway to address the learning loss. [...] The latest results are from the NAEP Long-Term Trend Assessment, traditionally administered every four years by the National Center for Education Statistics.

AI

Is AI Making Silicon Valley Rich on Other People's Work? (mercurynews.com) 111

Slashdot reader rtfa0987 spotted this on the front page of the San Jose Mercury News. "Silicon Valley is poised once again to cash in on other people's products, making a data grab of unprecedented scale that has already spawned lawsuits and congressional hearings. Chatbots and other forms of generative artificial intelligence that burst onto the technology scene in recent months are fed vast amounts of material scraped from the internet — books, screenplays, research papers, news stories, photos, art, music, code and more — to produce answers, imagery or sound in response to user prompts... But a thorny, contentious and highly consequential issue has arisen: A great deal of the bots' fodder is copyrighted property...

The new AI's intellectual-property problem goes beyond art into movies and television, photography, music, news media and computer coding. Critics worry that major players in tech, by inserting themselves between producers and consumers in commercial marketplaces, will suck out the money and remove financial incentives for producing TV scripts, artworks, books, movies, music, photography, news coverage and innovative software. "It could be catastrophic," said Danielle Coffey, CEO of the News/Media Alliance, which represents nearly 2,000 U.S. news publishers, including this news organization. "It could decimate our industry."

The new technology, as happened with other Silicon Valley innovations, including internet search, social media and food delivery, is catching on among consumers and businesses so quickly that it may become entrenched — and beloved by users — long before regulators and lawmakers gather the knowledge and political will to impose restraints and mitigate harms. "We may need legislation," said Congresswoman Zoe Lofgren, D-San Jose, who as a member of the House Judiciary Committee heard testimony on copyright and generative AI last month. "Content creators have rights and we need to figure out a way how those rights will be respected...."

Furor over the content grabbing is surging. Photo-sales giant Getty is also suing Stability AI. Striking Hollywood screenwriters last month raised concerns that movie studios will start using chatbot-written scripts fed on writers' earlier work. The record industry has lodged a complaint with federal authorities over copyrighted music being used to train AI.

The article includes some unique perspectives:
  • There's a technical solution being proposed by the software engineer-CEO of Dazzle Labs, a startup building a platform for controlling personal data. The Mercury News summarizes it as "content producers could annotate their work with conditions for use that would have to be followed by companies crawling the web for AI fodder."
  • Santa Clara University law school professor Eric Goldman "believes the law favors use of copyrighted material for training generative AI. 'All works build upon precedent works. We are all free to take pieces of precedent works. What generative AI does is accelerate that process, but it's the same process. It's all part of an evolution of our society's storehouse of knowledge...."

The Internet

A San Francisco Library Is Turning Off Wi-Fi At Night To Keep People Without Housing From Using It (theverge.com) 251

In San Francisco's District 8, a public library has turned off its Wi-Fi outside of business hours in response to complaints from neighbors and the city supervisor's office about open drug use and disturbances caused by unhoused individuals. The Verge reports: In San Francisco's District 8, a public library has been shutting down Wi-Fi outside business hours for nearly a year. The measure, quietly implemented in mid-2022, was made at the request of neighbors and the office of city supervisor Rafael Mandelman. It's an attempt to keep city dwellers who are currently unhoused away from the area by locking down access to one of the library's most valuable public services. A local activist known as HDizz revealed details behind the move last month, tweeting public records of a July 2022 email exchange between local residents and the city supervisor's office. In the emails, residents complained about open drug use and sidewalks blocked by residents who are unhoused. One relayed a secondhand story about a library worker who had been followed to her car. And by way of response, they demanded the library limit the hours Wi-Fi was available. "Why are the vagrants and drug addicts so attracted to the library?" one person asked rhetorically. "It's the free 24/7 wi-fi."

San Francisco's libraries have been historically progressive when it comes to providing resources to people who are unhoused, even hiring specialists to offer assistance. But on August 1st, reports San Francisco publication Mission Local, city librarian Michael Lambert met with Mandelman's office to discuss the issue. The next day, District 8's Eureka Valley/Harvey Milk Memorial branch began turning its Wi-Fi off after hours -- a policy that San Francisco Public Library (SFPL) spokesperson Jaime Wong told The Verge via email remains in place today.

In the initial months after the decision, the library apparently received no complaints. But in March, a little over seven months following the change, it got a request to reverse the policy. "I'm worried about my friend," the email reads, "whom I am trying to get into long term residential treatment." San Francisco has shelters, but the requester said their friend had trouble communicating with the staff and has a hard time being around people who used drugs, among other issues. Because this friend has no regular cell service, "free wifi is his only lifeline to me [or] for that matter any services for crisis or whatever else." The resident said some of the neighborhood's residents "do not understand what they do to us poor folks nor the homeless by some of the things they do here."
Jennifer Friedenbach of San Francisco's Coalition on Homelessness told The Verge in a phone interview that "folks are not out there on the streets by choice. They're destitute and don't have other options. These kinds of efforts, like turning off the Wi-Fi, just exacerbate homelessness and have the opposite effect. Putting that energy into fighting for housing for unhoused neighbors would be a lot more effective."
AI

Researchers Warn of 'Model Collapse' As AI Trains On AI-Generated Content (venturebeat.com) 159

schwit1 shares a report from VentureBeat: [A]s those following the burgeoning industry and its underlying research know, the data used to train the large language models (LLMs) and other transformer models underpinning products such as ChatGPT, Stable Diffusion and Midjourney comes initially from human sources -- books, articles, photographs and so on -- that were created without the help of artificial intelligence. Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content?

A group of researchers from the UK and Canada have looked into this very problem and recently published a paper on their work on the open-access preprint server arXiv. What they found is worrisome for current generative AI technology and its future: "We find that use of model-generated content in training causes irreversible defects in the resulting models." Specifically looking at probability distributions for text-to-text and image-to-image AI generative models, the researchers concluded that "learning from data produced by other models causes model collapse -- a degenerative process whereby, over time, models forget the true underlying data distribution ... this process is inevitable, even for cases with almost ideal conditions for long-term learning."

"Over time, mistakes in generated data compound and ultimately force models that learn from generated data to misperceive reality even further," wrote one of the paper's leading authors, Ilia Shumailov, in an email to VentureBeat. "We were surprised to observe how quickly model collapse happens: Models can rapidly forget most of the original data from which they initially learned." In other words: as an AI training model is exposed to more AI-generated data, it performs worse over time, producing more errors in the responses and content it generates, and producing far less non-erroneous variety in its responses. As another of the paper's authors, Ross Anderson, professor of security engineering at Cambridge University and the University of Edinburgh, wrote in a blog post discussing the paper: "Just as we've strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we're about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale. Indeed, we already see AI startups hammering the Internet Archive for training data."
schwit1 writes: "Garbage in, garbage out -- and if this paper is correct, generative AI is turning into the self-licking ice cream cone of garbage generation."
Books

Sol Reader Is a VR Headset Exclusively For Reading Books (techcrunch.com) 92

A company called Sol Reader is working on a headset designed exclusively for reading books. "The device is simple: It slips over your eyes like a pair of glasses and blocks all distractions while reading," reports TechCrunch. From the article: The $350 device is currently on pre-order, comes in a handful of colors, and contains a pair of side-lit, e-ink displays, much like the Kindle does. The glasses come with a remote (I wish my Kindle had a remote!) and a charger. A full battery gets you around 25 hours of reading. That may not sound like a lot, but if you have an average adult reading speed of around 200 words per minute, you can finish the 577,608-word tome Infinite Jest in about 48 hours. That means you need at least one charging break, but then, if you are trying to read Infinite Jest in a single sitting, you're a bigger book nerd than most.
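The back-of-the-envelope arithmetic holds up; here it is spelled out, using only the figures quoted above:

```python
# Sanity check of the numbers quoted above (all figures from the article).
words = 577_608          # word count given for Infinite Jest
words_per_minute = 200   # average adult reading speed cited
battery_hours = 25       # quoted battery life per full charge

reading_hours = words / words_per_minute / 60
print(f"~{reading_hours:.1f} hours to finish")          # ~48.1 hours
print(f"~{reading_hours / battery_hours:.1f} charges")  # ~1.9, so at least one recharge
```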

The product has a diopter adjustment built in, so glasses- and contacts-wearers can use the glasses without wearing additional vision correction (up to a point -- the company doesn't specify the exact adjustment range). The displays are 1.3-inch e-ink panels with a 256x256 per-eye resolution. The glasses have 64MB of storage, which should hold plenty of books for even the longest of escapist holidays.

The company's $5 million funding round was led by Garry Tan (Initialized, Y Combinator) and closed about a year ago. Today, the company is shipping the 'advanced copy' (read: private beta) of the glasses to a small number of early access testers. The company is tight-lipped on when its full production batches will start shipping, and customers are currently advised to join the waiting list if they want to get their mittens on a pair of Sols.

Space

Owen Gingerich, Astronomer Who Saw God in the Cosmos, Dies at 93 (nytimes.com) 135

Owen Gingerich, a renowned astronomer and historian of science, has passed away at the age of 93. Gingerich dedicated years to tracking down 600 copies of Nicolaus Copernicus's influential book "De Revolutionibus Orbium Coelestium Libri Sex" and was known for his passion for astronomy, often dressing up as a 16th-century scholar for lectures. He believed in the compatibility of religion and science and explored this theme in his books "God's Universe" and "God's Planet." The New York Times reports: Professor Gingerich, who lived in Cambridge, Mass., and taught at Harvard for many years, was a lively lecturer and writer. During his decades of teaching astronomy and the history of science, he would sometimes dress as a 16th-century Latin-speaking scholar for his classroom presentations, or convey a point of physics with a memorable demonstration; for instance, The Boston Globe related in 2004, he "routinely shot himself out of the room on the power of a fire extinguisher to prove one of Newton's laws." He was nothing if not enthusiastic about the sciences, especially astronomy. One year at Harvard, when his signature course, "The Astronomical Perspective," wasn't filling up as fast as he would have liked, he hired a plane to fly a banner over the campus that read: "Sci A-17. M, W, F. Try it!"

Professor Gingerich's doggedness was on full display in his long pursuit of copies of Copernicus's "De Revolutionibus Orbium Coelestium Libri Sex" ("Six Books on the Revolutions of the Heavenly Spheres"), first published in 1543, the year Copernicus died. That book laid out the thesis that Earth revolved around the sun, rather than the other way around, a profound challenge to scientific knowledge and religious belief in that era. The writer Arthur Koestler had contended in 1959 that the Copernicus book was not read in its time, and Professor Gingerich set out to determine whether that was true. In 1970 he happened on a copy of "De Revolutionibus" that was heavily annotated in the library of the Royal Observatory in Edinburgh, suggesting that at least one person had read it closely. A quest was born. Thirty years and hundreds of thousands of miles later, Professor Gingerich had examined some 600 Renaissance-era copies of "De Revolutionibus" all over the world and had developed a detailed picture not only of how thoroughly the work was read in its time, but also of how word of its theories spread and evolved. He documented all this in "The Book Nobody Read: Chasing the Revolutions of Nicolaus Copernicus" (2004). John Noble Wilford, reviewing it in The New York Times, called "The Book Nobody Read" "a fascinating story of a scholar as sleuth."

Professor Gingerich was raised a Mennonite and was a student at Goshen College, a Mennonite institution in Indiana, studying chemistry but thinking of astronomy, when, he later recalled, a professor there gave him pivotal advice: "If you feel a calling to pursue astronomy, you should go for it. We can't let the atheists take over any field." He took the counsel, and throughout his career he often wrote or spoke about his belief that religion and science need not be at odds. He explored that theme in the books "God's Universe" (2006) and "God's Planet" (2014). He was not a biblical literalist; he had no use for those who ignored science and proclaimed the Bible's creation story historical fact. Yet, as he put it in "God's Universe," he was "personally persuaded that a superintelligent Creator exists beyond and within the cosmos." [...] Professor Gingerich, who was senior astronomer emeritus at the Smithsonian Astrophysical Observatory, wrote countless articles over his career in addition to his books. In one for Science and Technology News in 2005, he talked about the divide between theories of atheistic evolution and theistic evolution. "Frankly it lies beyond science to prove the matter one way or the other," he wrote. "Science will not collapse if some practitioners are convinced that occasionally there has been creative input in the long chain of being."
In 2006, Gingerich was mentioned in a Slashdot story about geologists reacting to the new definition of "Pluton." He was quoted as saying that he was only peripherally aware of the definition, and because it didn't show up on MS Word's spell check, he didn't think it was that important.

"Gingerich lead a committee of the International Astronomical Union charged with recommending whether Pluto should remain a planet," notes the New York Times. "His panel recommended that it should, but the full membership rejected that idea and instead made Pluto a 'dwarf planet.' That decision left Professor Gingerich somehwat dismayed."
Books

Why Bill Gates Recommends This Novel About Videogames (gatesnotes.com) 74

Bill Gates wrote a blog post this week recommending a novel about videogame development. Gates calls Tomorrow, and Tomorrow, and Tomorrow "one of the biggest books of last year," telling the story of "two friends who bond over Super Mario Bros. as kids and grow up to make video games together." Although there are plenty of video games mentioned in the book — Oregon Trail is a recurring theme — I'd describe it more as a story about partnership and collaboration. When Sam and Sadie are in college, they create a game called Ichigo that turns out to be a huge hit. Their company, Unfair Games, becomes successful, but the two start to butt heads. Sadie is upset that Sam got most of the credit for Ichigo. Sam is frustrated that Sadie cares more about creating art than about making their company viable...

Most of the book is about how a creative partnership can be equal parts remarkable and complicated. I couldn't help but be reminded of my relationship with Paul Allen while I was reading it. Sadie believes that "true collaborators in this life are rare." I agree, and I was lucky to have one in Paul. An early chapter describing how Sam and Sadie worked until sunrise in a dingy apartment in Cambridge, Massachusetts, could have just as easily been about Paul and me coming up with the idea for Microsoft. Like Sam and Sadie, we worked together every day for years.

Paul's vision and contributions to the company were absolutely critical to its success, and then he chose to move on. We had a great relationship, but not without some of the complexities that success brings. Zevin really captures what it feels like to start a company that takes off. It's thrilling to know your vision is now real, but success brings a lot of new questions. Once you make money, do you still have something to prove? How does your relationship with your partner change once a lot more people get involved? How do you make the next idea as good as the last?

You can't help but wonder whether you would've been as successful if you started up at a different time... Paul and I were very lucky in terms of our timing with Microsoft. We got in when chips were just starting to become powerful but before other people had created established companies... Tomorrow, and Tomorrow, and Tomorrow resonated with me for personal reasons, but I think Zevin's exploration of partnership and collaboration is worth reading no matter who you are. Even if you're skeptical about reading a book about video games, the subject is a terrific metaphor for human connection.

The book is now being adapted into a movie.
Security

Is Cybersecurity an Unsolvable Problem? (arstechnica.com) 153

Ars Technica profiles Scott Shapiro, the co-author of a new book, Fancy Bear Goes Phishing: The Dark History of the Information Age in Five Extraordinary Hacks.

Shapiro points out that computer science "is only a century old, and hacking, or cybersecurity, is maybe a few decades old. It's a very young field, and part of the problem is that people haven't thought it through from first principles." Telling in-depth the story of five major breaches, Shapiro ultimately concludes that "the very principles that make hacking possible are the ones that make general computing possible.

"So you can't get rid of one without the other because you cannot patch metacode." Shapiro also brings some penetrating insight into why the Internet remains so insecure decades after its invention, as well as how and why hackers do what they do. And his conclusion about what can be done about it might prove a bit controversial: there is no permanent solution to the cybersecurity problem. "Cybersecurity is not a primarily technological problem that requires a primarily engineering solution," Shapiro writes. "It is a human problem that requires an understanding of human behavior." That's his mantra throughout the book: "Hacking is about humans." And it portends, for Shapiro, "the death of 'solutionism.'"
An excerpt from their interview: Ars Technica: The scientific community in various disciplines has struggled with this in the past. There's an attitude of, "We're just doing the research. It's just a tool. It's morally neutral." Hacking might be a prime example of a subject that you cannot teach outside the broader context of morality.

Scott Shapiro: I couldn't agree more. I'm a philosopher, so my day job is teaching that. But it's a problem throughout all of STEM: this idea that tools are morally neutral and you're just making them and it's up to the end user to use it in the right way. That is a reasonable attitude to have if you live in a culture that is doing the work of explaining why these tools ought to be used in one way rather than another. But when we have a culture that doesn't do that, then it becomes a very morally problematic activity.

Books

European Commission Calls for Pirate Site Blocking Around the Globe (torrentfreak.com) 29

The European Commission has published its biannual list of foreign countries with problematic copyright policies. One of the highlighted issues is a lack of pirate site blocking, which is seen as an effective enforcement measure, writes TorrentFreak, a news website that tracks piracy news. Interestingly, the EU doesn't mention the United States, which is arguably the most significant country yet to implement an effective site-blocking regime.
AI

Will AI Just Turn All of Human Knowledge into Proprietary Products? (theguardian.com) 139

"Tech CEOs want us to believe that generative AI will benefit humanity," argues an column in the Guardian, adding "They are kidding themselves..."

"There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life... " AI — far from living up to all those utopian hallucinations — is much more likely to become a fearsome tool of further dispossession and despoilation...

What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon ...) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent.

This should not be legal. In the case of copyrighted material that we now know trained the models (including this newspaper), various lawsuits have been filed that will argue this was clearly illegal... The trick, of course, is that Silicon Valley routinely calls theft "disruption" — and too often gets away with it. We know this move: charge ahead into lawless territory; claim the old rules don't apply to your new tech; scream that regulation will only help China — all while you get your facts solidly on the ground. By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the courts and policymakers throw up their hands... These companies must know they are engaged in theft, or at least that a strong case can be made that they are. They are just hoping that the old playbook works one more time — that the scale of the heist is already so large and unfolding with such speed that courts and policymakers will once again throw up their hands in the face of the supposed inevitability of it all...

[W]e trained the machines. All of us. But we never gave our consent. They fed on humanity's collective ingenuity, inspiration and revelations (along with our more venal traits). These models are enclosure and appropriation machines, devouring and privatizing our individual lives as well as our collective intellectual and artistic inheritances. And their goal never was to solve climate change or make our governments more responsible or our daily lives more leisurely. It was always to profit off mass immiseration, which, under capitalism, is the glaring and logical consequence of replacing human functions with bots.

Thanks to long-time Slashdot reader mspohr for sharing the article.
AI

Anthropic's Claude AI Can Now Digest an Entire Book like The Great Gatsby in Seconds (arstechnica.com) 7

AI company Anthropic has announced it has given its ChatGPT-like Claude AI language model the ability to analyze an entire book's worth of material in under a minute. This new ability comes from expanding Claude's context window to 100,000 tokens, or about 75,000 words. From a report: Like OpenAI's GPT-4, Claude is a large language model (LLM) that works by predicting the next token in a sequence when given a certain input. Tokens are fragments of words used to simplify AI data processing, and a "context window" is similar to short-term memory -- how much human-provided input data an LLM can process at once. A larger context window means an LLM can consider larger works like books or participate in very long interactive conversations that span "hours or even days."
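The tokens-to-words conversion reflects a common rule of thumb — roughly 0.75 English words per token for many LLM tokenizers — rather than Anthropic's exact tokenizer math; a hypothetical sketch of the estimate:

```python
# Rough heuristic only: many LLM tokenizers average about 0.75 English words
# per token, which is how 100,000 tokens maps to roughly 75,000 words.
# The word counts and helper below are illustrative, not Anthropic's figures.
WORDS_PER_TOKEN = 0.75

def fits_in_context(word_count: int, context_tokens: int = 100_000) -> bool:
    """Estimate whether a text of `word_count` words fits in the context window."""
    return word_count / WORDS_PER_TOKEN <= context_tokens

print(fits_in_context(47_000))   # The Great Gatsby, roughly 47k words -> True
print(fits_in_context(577_608))  # Infinite Jest -> False
```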
Books

Ask Slashdot: Should Libraries Eliminate Fines for Overdue Books? (thehill.com) 163

Fines for overdue library books were eliminated more than three years ago in Chicago, Seattle, and San Francisco — as well as at the Los Angeles Public Library system (which serves 18 million people). The Hill reported that just in the U.S., more than 200 cities and municipalities had eliminated the fines by the end of 2019: Fines account for less than 1 percent of Chicago Public Library's revenue stream, and there is also a collection cost in terms of staff time, keeping cash on hand, banking and accounting. The San Diego library system did a detailed study and found the costs were higher than the fines collected, says Molloy.
And this week the King County Library System in Washington state — serving one million patrons in 50 libraries — joined the trend, announcing that it would end all late fines for overdue books.

A local newspaper summarized the results of a six-month review by library staff presented to the Board of Trustees:
  • In recent years, fines made up less than 1% of KCLS' operating budget.
  • Late fine revenue continues to decrease over time. This trend correlates with patrons' interest in more digital and fewer physical items. Digital titles return automatically and do not accrue late fines.
  • Collecting fines from patrons also has costs. Associated expenses include staff time, payment processing fees, printing notices and more.
  • A majority of peer libraries have eliminated late fines.

Now Slashdot reader robotvoice writes: Library fines have been assessed since early last century as an incentive for patrons to return materials and "be responsible." However, many studies have found that fines disproportionately affect the poor and disadvantaged in our society...

I have collected several anecdotes of dedicated library patrons who were locked out of borrowing because of excessive and punitive fines... I get daily use and enjoyment from library books and materials. While I personally have been scrupulous about paying fines — until they were eliminated — I support the idea that libraries are there to help those with the least access.

What do you think?

Share your own thoughts in the comments. Should libraries eliminate fines for overdue books?
Power

Bill Gates Visits Planned Site of 'Most Advanced Nuclear Facility in the World' (gatesnotes.com) 204

Friday Bill Gates visited Kemmerer, Wyoming (population: 2,656) — where a coal plant was shutting down after 50 years. But Gates was there "to celebrate the latest step in a project that's been more than 15 years in the making: designing and building a next-generation nuclear power plant..."

The new plant will employ "between 200 and 250 people," Gates writes in a blog post, "and those with experience in the coal plant will be able to do many of the jobs — such as operating a turbine and maintaining connections to the power grid — without much retraining." It's called the Natrium plant, and it was designed by TerraPower, a company I started in 2008. When it opens (potentially in 2030), it will be the most advanced nuclear facility in the world, and it will be much safer and produce far less waste than conventional reactors.

All of this matters because the world needs to make a big bet on nuclear. As I wrote in my book How to Avoid a Climate Disaster, we need nuclear power if we're going to meet the world's growing need for energy while also eliminating carbon emissions. None of the other clean sources are as reliable, and none of the other reliable sources are as clean...

Another thing that sets TerraPower apart is its digital design process. Using supercomputers, they've digitally tested the Natrium design countless times, simulating every imaginable disaster, and it keeps holding up. TerraPower's sophisticated work has drawn interest from around the globe, including an agreement to collaborate on nuclear power technology in Japan and investments from the South Korean conglomerate SK and the multinational steel company ArcelorMittal...

I'm excited about this project because of what it means for the future. It's the kind of effort that will help America maintain its energy independence. And it will help our country remain a leader in energy innovation worldwide. The people of Kemmerer are at the forefront of the equitable transition to a clean, safe energy future, and it's great to be partnering with them.

Gates writes that for safety the plant uses liquid sodium (instead of water) to absorb excess heat, and it even has an energy storage system "to control how much electricity it produces at any given time..."

"I'm convinced that the facility will be a win for the local economy, America's energy independence, and the fight against climate change.
Books

'Free Comic Book Day' 2023 Celebrations Include 'Ant-Sized' Blu-Ray Discs (freecomicbookday.com) 10

All across North America today, over 2,000 comic book stores are celebrating Free Comic Book Day. As it enters its third decade — the event started in 2001, according to Wikipedia — there'll be over two dozen free comic books to choose from this year, and enthusiastic stores trying to dial up the fun even more.

16 stores are also giving away Ant-Man and The Wasp: Quantumania in special "ant-sized" boxes — the size of a penny — with tiny versions of the cover art from the full-sized Blu-Ray disc boxes (along with a code for a digital version of the movie). The Bleeding Cool site has a running list of stores doing additional special "cool stuff," including cookie giveaways, discounts on paperbacks and comic books, and personal appearances by comic book writers and artists.

Geek-friendly free comic books this year:

Bleeding Cool also has previews of the artwork from Star Trek: Prelude to Day of Blood, a teaser for a coming "comic book crossover event between IDW's main Star Trek comic and the Star Trek: Defiant series" (that's also accompanied by a Lower Decks comic book story).

Just remember, in 2017 NPR had this advice for visiting comics fans. "While you're there, buy something... The comics shops still have to pay for the 'free' FCBD books they stock, and they're counting on the increased foot traffic to lift sales."


Piracy

US Seizes Z-Library Login Domain, But Secret URLs for Each User Remain Active (arstechnica.com) 13

US authorities have seized another major Z-Library domain but still haven't been able to wipe the pirate book site off the Internet. From a report: Z-Library claims to offer over 13 million books, up from 11 million since US authorities launched their first major operation against Z-Library late last year. "Unfortunately, one of our primary login domains was seized today," Z-Library wrote in a Wednesday message on its Telegram account. "Therefore, we recommend using the domain singlelogin[dot]re to log in to your account, as well as to register. Please share this domain with others." In November, US authorities charged Russian nationals Anton Napolsky and Valeriia Ermakova with criminal copyright infringement, wire fraud, and money laundering for allegedly operating Z-Library. The US said at the time that it seized 250 "interrelated web domains" run by Z-Library and that Napolsky and Ermakova were arrested in Argentina at the request of the US government. Other people continue to operate Z-Library, which remained available on the Tor network and returned to the clearnet in February with a new strategy of assigning personal, secret URLs to each user. Z-Library directed users to singlelogin[dot]me, where they could sign in with their login credentials and receive a unique URL to access the entire pirate library.
Books

Fake Books Are a Real Home Decor Trend (nytimes.com) 98

If it looks like a book, feels like a book and stacks like a book, then there's still a good chance it may not be a book. From a report: Fake books come in several different forms: once-real books that are hollowed out, fabric backdrops with images of books printed onto them, empty boxlike objects with faux titles and authors or sometimes just a facade of spines along a bookshelf. Already the norm for film sets and commercial spaces, fake books are becoming popular fixtures in homes. While some people are going all in and covering entire walls in fake books, others are aghast at the thought that someone would think to decorate with a book that isn't real. "I will never use fake books," said Jeanie Engelbach, an interior designer and organizer in New York City. "It just registers as pretentious, and it creates the illusion that you are either better read or smarter than you really are."

Ms. Engelbach said she has frequently used books as decor, at times styling clients' bookcases with aesthetics taking priority over function, which is a typical interior-design practice. At Books by the Foot -- a company that sells, as its name suggests, books by the foot -- one can purchase books by color (options include "luscious creams," "vintage cabernet" and "rainbow ombre"), by subject ("well-read art" or "gardening"), wrapped books (covered in linen or rose gold) and more. The tomes are all "rescue books," ones that would otherwise be discarded or recycled for paper pulp, said Charles Roberts, the president of Books by the Foot's parent company, Wonder Book. During the pandemic lockdown in 2020, remote work created increased demand for the company's services. While it mostly specializes in the sale of real books, the company has also dabbled in the world of faux ones.

The bookseller has cut books -- so only the spines remain -- and glued them to shelves for cruise ships, "where they don't want to have a lot of weight or worry that the books will fall off the shelves if the weather gets bad," Mr. Roberts said. There are other, sometimes counterintuitive, uses for fake tomes as well. Although it has the capacity to hold more than 1.35 million of them, many of the books in China's 360,000-square-foot Tianjin Binhai Library aren't real. Instead, perforated aluminum plates emblazoned with images of books can be found, primarily on the upper shelves of the atrium. While the presence of artificial books in a place devoted to reading has been widely criticized -- "more fiction than books," one headline mocked -- it remains a buzzy tourist attraction. After all, the books don't need to be real if it's just for the 'gram.

Books

Spotify Tries To Win Indie Authors By Cutting Audiobook Fees (theverge.com) 5

In an effort to appeal to indie authors, Spotify's Findaway audiobook seller "will no longer take a 20 percent cut of royalties for titles sold on its DIY Voices platform -- so long as the sales are made on Spotify," reports The Verge. From the report: In a company blog post published on Monday, Findaway said that it would "pass on cost-saving efficiencies" from its integration with the streaming service. While it's free for authors to upload their audiobooks onto Findaway's Voices platform, the company normally uses an 80/20 pricing structure -- where Findaway takes a 20 percent fee on all royalties earned. But that fee comes after sales platforms take their own 50 percent cut on the list price. So under the old revenue split, an author who sold a $10 audiobook would have to give $5 to Spotify and $1 to Findaway. But moving forward, that same author will no longer have to pay the $1 distribution fee to Findaway when a sale is made through Spotify.
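Here is that split as a quick worked example; the figures come from the report, while the helper function and its name are purely illustrative:

```python
# Worked example of the split described above (figures from the report;
# the helper function and its defaults are illustrative, not an official API).
def author_net(list_price: float, platform_cut: float = 0.50,
               findaway_fee: float = 0.20) -> float:
    """Author's take after the sales platform's cut and Findaway's fee."""
    royalties = list_price * (1 - platform_cut)  # platform keeps 50% of list price
    return royalties * (1 - findaway_fee)        # Findaway keeps 20% of the remainder

print(author_net(10.00))                     # old split: author keeps $4 of a $10 sale
print(author_net(10.00, findaway_fee=0.0))   # new split on Spotify sales: author keeps $5
```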

The margins on audiobooks are exceptionally high, much to the chagrin of the authors. For example, Audible takes 75 percent of retail sales (though it'll only take 60 percent with an exclusivity contract). Many authors share royalties with their narrators and have to pay production fees -- meaning they get an even smaller share of royalties. The move by Spotify and Findaway is likely a bid to draw more indie authors from Audible, which is currently its biggest competitor. But Spotify's audiobooks business -- which it launched last fall -- still has a long way to go. Unlike music or podcasts, most audiobooks on Spotify must be purchased individually, and sales are restricted to its web version. Even CEO Daniel Ek admitted that the current process of buying an audiobook through Spotify is "pretty horrible."
"We at Spotify are just at the beginning of our journey supporting independent authors -- we have many plans for how to help authors expand their reach, maximize revenue, and ultimately build a strong audiobooks business," said Audiobook's communications chief, Laura Pezzini.
AI

Google Has More Powerful AI, Says Engineer Fired Over Sentience Claims (futurism.com) 139

Remember that Google engineer/AI ethicist who was fired last summer after claiming their LaMDA LLM had become sentient?

In a new interview with Futurism, Blake Lemoine now says the "best way forward" for humankind's future relationship with AI is "understanding that we are dealing with intelligent artifacts. There's a chance that — and I believe it is the case — that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them." (Although earlier in the interview, Lemoine concedes "Is there a chance that people, myself included, are projecting properties onto these systems that they don't have? Yes. But it's not the same kind of thing as someone who's talking to their doll.")

But he also thinks there's a lot of research happening inside corporations, adding that "The only thing that has changed from two years ago to now is that the fast movement is visible to the public." For example, Lemoine says Google almost released its AI-powered Bard chatbot last fall, but "in part because of some of the safety concerns I raised, they deleted it... I don't think they're being pushed around by OpenAI. I think that's just a media narrative. I think Google is going about doing things in what they believe is a safe and responsible manner, and OpenAI just happened to release something." "[Google] still has far more advanced technology that they haven't made publicly available yet. Something that does more or less what Bard does could have been released over two years ago. They've had that technology for over two years. What they've spent the intervening two years doing is working on the safety of it — making sure that it doesn't make things up too often, making sure that it doesn't have racial or gender biases, or political biases, things like that. That's what they spent those two years doing...

"And in those two years, it wasn't like they weren't inventing other things. There are plenty of other systems that give Google's AI more capabilities, more features, make it smarter. The most sophisticated system I ever got to play with was heavily multimodal — not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it. That's the one that I was like, "you know this thing, this thing's awake." And they haven't let the public play with that one yet. But Bard is kind of a simplified version of that, so it still has a lot of the kind of liveliness of that model...

"[W]hat it comes down to is that we aren't spending enough time on transparency or model understandability. I'm of the opinion that we could be using the scientific investigative tools that psychology has come up with to understand human cognition, both to understand existing AI systems and to develop ones that are more easily controllable and understandable."

So how will AI and humans coexist? "Over the past year, I've been leaning more and more towards we're not ready for this, as people," Lemoine says toward the end of the interview. "We have not yet sufficiently answered questions about human rights — throwing nonhuman entities into the mix needlessly complicates things at this point in history."
