DRM

Internet Archive Targets Book DRM Removal Tool With DMCA Takedown (torrentfreak.com) 20

The Internet Archive has taken the rather unusual step of sending a DMCA notice to protect the copyrights of book publishers and authors. The non-profit organization asked GitHub to remove a tool that can strip DRM from books in its library. The protective move is likely motivated by the ongoing legal troubles between the Archive and book publishers. TorrentFreak reports: The Internet Archive sent a takedown request to GitHub, requesting the developer platform to remove a tool that circumvents industry-standard technical protection mechanisms for digital libraries. This "DeGourou" software effectively allows patrons to save DRM-free copies of the books they borrow. "This DMCA complaint is about a tool made available on github which purports to circumvent technical protections in violation of the copyright act section 1201," the notice reads. "I am reporting a Git which provides a tool specifically used to circumvent industry standard library TPMs which are used by Internet Archive, and other libraries, to permit patrons to borrow an encrypted book, read the encrypted book, and return an encrypted book."

Interestingly, an IA representative states that they are "not authorized by the copyright owners" to submit this takedown notice. Instead, IA is acting on its duty to prevent the unauthorized downloading of copyright-protected books. It's quite unusual to see a party sending takedown notices without permission from the actual rightsholders. However, given the copyright liabilities IA faces, it makes sense that the organization is doing what it can to prevent more legal trouble. Permission or not, GitHub honored the takedown request. It removed all the DeGourou repositories that were flagged and took the code offline. [...] After GitHub removed the code, it soon popped up elsewhere.

Moon

NASA's VIPER Rover Will Be the First To Cruise the Moon's South Pole (popsci.com) 16

Popular Science describes how NASA's Volatiles Investigating Polar Exploration Rover (VIPER) will use a pair of ramps to become the first rover to explore the Moon's south pole when it arrives in late 2024. From the report: "We all know how to work with ramps, and we just need to optimize it for the environment we're going to be in," says NASA's VIPER program manager Daniel Andrews. A VIPER test vehicle recently descended a pair of metal ramps at NASA's Ames Research Center in California, as seen in the agency's recently published photos, with one beam for each set of the rover's wheels. Because the terrain where VIPER will land -- the edge of the massive Nobile Crater -- is expected to be rough, the engineering team has been testing VIPER's ability to descend the ramps at extreme angles. They have altered the steepness of the ramps, as measured from the lander VIPER will descend from, and the difference in elevation between the ramps for each wheel. "We have two ramps, not just for the left and right wheels, but a ramp set that goes out the back too," Andrews says. "So we actually get our pick of the litter, which one looks most safe and best to navigate as we're at that moment where we have to roll off the lander."

VIPER is a scientific successor to NASA's Lunar Crater Observation and Sensing Satellite, or LCROSS mission, which in 2009 confirmed the presence of water ice on the lunar south pole. "It completely rewrote the books on the moon with respect to water," says Andrews, who also worked on the LCROSS mission. "That really started the moon rush, commercially, and by state actors like NASA and other space agencies." The ice, if abundant, could be mined to create rocket propellant. It could also provide water for other purposes at long-term lunar habitats, which NASA plans to construct in the late 2020s as part of the Artemis moon program.

But LCROSS only confirmed that ice was definitely present in a single crater at the moon's south pole. VIPER, a mobile rover, will probe the distribution of water ice in greater detail. Drilling beneath the lunar surface is one task. Another is to move into steep, permanently shadowed regions -- entering craters that, due to their sharp geometry and the low angle of the sun at the lunar poles, have not seen sunlight in billions of years. The tests demonstrate the rover can navigate a 15-degree slope with ease -- enough to explore these hidden dark spots, avoiding the need to build a machine designed for trickier descents. "We think there's plenty of scientifically relevant opportunities, without having to make a superheroic rover that can do crazy things," Andrews says.

Developed by NASA Ames and Pittsburgh-based company Astrobotic, VIPER is a square golf-cart-sized vehicle about 5 feet long and wide, and about 8 feet high. Unlike all of NASA's Mars rovers, VIPER has four wheels, not six. "A problem with six wheels is it creates kind of the equivalent of a track, and so you're forced to drive in a certain way," Andrews says. VIPER's four wheels are entirely independent from each other. Not only can they roll in any direction, they can be turned out, using the rover's shoulder-like joints to crawl out of the soft regolith of the kind scientists believe exists in permanently shadowed moon craters. The wheels themselves are very similar to those on the Mars rovers, but with more paddle-like treads, known as grousers, to carry the robot through fluffy regolith. [...] Together with Astrobotic, Andrews and his team have altered the ramps, and they now include specialized etchings down their lengths. The rover can detect this pattern along the rampway, using cameras in its wheel wells. "By just looking down there," the robot knows where it is, he says. "That's a new touch." Andrews is sure VIPER will be ready for deployment in 2024, however many tweaks are necessary. After all, this method is less complicated than a sky crane, he notes: "Ramps are pretty tried and true."

Facebook

Sarah Silverman Sues Meta, OpenAI for Copyright Infringement (reuters.com) 163

Comedian Sarah Silverman and two authors have filed copyright infringement lawsuits against Meta and OpenAI for allegedly using their content without permission to train artificial intelligence language models. From a report: The proposed class action lawsuits filed by Silverman, Richard Kadrey and Christopher Golden in San Francisco federal court Friday allege Facebook parent company Meta and ChatGPT maker OpenAI used copyrighted material to train chatbots. The lawsuits underscore the legal risks developers of chatbots face when using troves of copyrighted material to create apps that deliver realistic responses to user prompts. Silverman, Kadrey and Golden allege Meta and OpenAI used their books without authorization to develop their so-called large language models, which their makers pitch as powerful tools for automating tasks by replicating human conversation. In their lawsuit against Meta, the plaintiffs allege that leaked information about the company's artificial intelligence business shows their work was used without permission.
Privacy

EFF Says California Cops Are Illegally Sharing License Plate Data with Anti-Abortion States (yahoo.com) 240

Slashdot reader j3x0n shared this report from California newspaper the Sacramento Bee: In 2015, Democratic Elk Grove Assemblyman Jim Cooper voted for Senate Bill 34, which restricted law enforcement from sharing automated license plate reader (ALPR) data with out-of-state authorities. In 2023, now-Sacramento County Sheriff Cooper appears to be doing just that. The Electronic Frontier Foundation (EFF), a digital rights group, has sent Cooper a letter requesting that the Sacramento County Sheriff's Office cease sharing ALPR data with out-of-state agencies that could use it to prosecute someone for seeking an abortion.

According to documents that the Sheriff's Office provided EFF through a public records request, it has shared license plate reader data with law enforcement agencies in states that have passed laws banning abortion, including Alabama, Oklahoma and Texas. Adam Schwartz, EFF senior staff attorney, called automated license plate readers "a growing threat to everyone's privacy ... that are out there by the thousands in California..." Schwartz said that a sheriff in Texas, Idaho or any other state with an abortion ban on the books could use that data to track people's movements around California, knowing where they live, where they work and where they seek reproductive medical care, including abortions.

The Sacramento County Sheriff's Office isn't the only one sharing that data; in May, EFF released a report showing that 71 law enforcement agencies in 22 California counties — including Sacramento County — were sharing such data... [Schwartz] said that he was not aware of any cases where ALPR data was used to prosecute someone for getting an abortion, but added, "We think we shouldn't have to wait until the inevitable happens."

In May the EFF noted that the state of Idaho "has enacted a law that makes helping a pregnant minor get an abortion in another state punishable by two to five years in prison."
Businesses

Lamborghini Takes Last Combustion Engine Model Order (reuters.com) 80

Lamborghini's combustion engine models are sold out until the end of production, its chief executive was quoted as saying in the WELT newspaper on Wednesday, as the luxury carmaker transitions towards a pure hybrid lineup. From a report: Order books for its Huracan and Urus models are full, marking the end of combustion engine vehicle production for the company, Stephan Winkelmann, head of the Volkswagen subsidiary, said. Lamborghini announced last July it would be investing at least 1.8 billion euros ($2 billion) to produce a hybrid lineup by 2024 and more to bring out its fully electric model by the end of the decade.
The Courts

Lawsuit Says OpenAI Violated US Authors' Copyrights To Train AI Chatbot (reuters.com) 82

Two U.S. authors have filed a proposed class action lawsuit against OpenAI, claiming that the company infringed their copyrights by using their works without permission to train its generative AI system, ChatGPT. The plaintiffs, Massachusetts-based writers Paul Tremblay and Mona Awad, claim the data used to train ChatGPT included thousands of books, including those from illegal "shadow libraries." Reuters reports: The complaint estimated that OpenAI's training data incorporated over 300,000 books, including from illegal "shadow libraries" that offer copyrighted books without permission. Awad is known for novels including "13 Ways of Looking at a Fat Girl" and "Bunny." Tremblay's novels include "The Cabin at the End of the World," which was adapted into the M. Night Shyamalan film "Knock at the Cabin" released in February.

Tremblay and Awad said ChatGPT could generate "very accurate" summaries of their books, indicating that they appeared in its database. The lawsuit seeks an unspecified amount in damages on behalf of a nationwide class of copyright owners whose works OpenAI allegedly misused.

Books

How Review-Bombing Can Tank a Book Before It's Published (nytimes.com) 46

The website Goodreads has become an essential avenue for building readership, but the same features that help generate excitement can also backfire. The New York Times: Cecilia Rabess figured her debut novel, "Everything's Fine," would spark criticism: The story centers on a young Black woman working at Goldman Sachs who falls in love with a conservative white co-worker with bigoted views. But she didn't expect a backlash to strike six months before the book was published. In January, after a Goodreads user who had received an advance copy posted a plot summary that went viral on Twitter, the review site was flooded with negative comments and one-star reviews, with many calling the book anti-Black and racist. Some of the comments were left by users who said they had never read the book, but objected to its premise.

"It may look like a bunch of one-star reviews on Goodreads, but these are broader campaigns of harassment," Rabess said. "People were very keen not just to attack the work, but to attack me as well." In an era when reaching readers online has become a near-existential problem for publishers, Goodreads has become an essential avenue for building an audience. As a cross between a social media platform and a review site like Yelp, the site has been a boon for publishers hoping to generate excitement for books. But the same features that get users talking about books and authors can also backfire. Reviews can be weaponized, in some cases derailing a book's publication long before its release. "It can be incredibly hurtful, and it's frustrating that people are allowed to review books this way if they haven't read them," said Roxane Gay, an author and editor who also posts reviews on Goodreads. "Worse, they're allowed to review books that haven't even been written. I have books on there being reviewed that I'm not finished with yet."

Education

US Reading and Math Scores Drop To Lowest Level In Decades (npr.org) 248

The average test scores for 13-year-old students in the U.S. have decreased in reading and math since 2020, reaching the lowest levels in decades, with more significant declines in math. NPR reports: The average scores, from tests given last fall, declined 4 points in reading and 9 points in math, compared with tests given in the 2019-2020 school year, and are the lowest in decades. The declines in reading were more pronounced for lower performing students, but dropped across all percentiles. The math scores were even more disappointing. On a scale of 500 points, the declines ranged from 6 to 8 points for middle and high performing students, to 12 to 14 points for low performing students.

The math results also showed widening gaps based on gender and race. Scores decreased by 11 points for female students over 2020 results, compared with a 7-point decrease for male students. Among Black students, math scores declined 13 points, while white students had a 6-point drop. Compared with the 35-point gap between Black and white students in 2020, the disparity widened to 42 points.
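The widening disparity follows directly from the unequal declines NPR reports; as a quick arithmetic check (all figures are from the article):

```python
# Black-white math score gap, per the NAEP figures cited above.
gap_2020 = 35        # gap in 2020
black_drop = 13      # points Black students' scores declined
white_drop = 6       # points white students' scores declined

# The gap grows by the difference between the two declines.
gap_now = gap_2020 + (black_drop - white_drop)
print(gap_now)  # 42, matching the reported disparity
```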

While the scores show a drop from the pre-pandemic years, the results also show that there are other factors at work. The decline is even more substantial when compared with scores of a decade ago: The average scores declined 7 points in reading and 14 points in mathematics. The Education Department says plans are underway to address the learning loss. [...] The latest results are from the NAEP Long-Term Trend Assessment, traditionally administered every four years by the National Center for Education Statistics.

AI

Is AI Making Silicon Valley Rich on Other People's Work? (mercurynews.com) 111

Slashdot reader rtfa0987 spotted this on the front page of the San Jose Mercury News. "Silicon Valley is poised once again to cash in on other people's products, making a data grab of unprecedented scale that has already spawned lawsuits and congressional hearings. Chatbots and other forms of generative artificial intelligence that burst onto the technology scene in recent months are fed vast amounts of material scraped from the internet — books, screenplays, research papers, news stories, photos, art, music, code and more — to produce answers, imagery or sound in response to user prompts... But a thorny, contentious and highly consequential issue has arisen: A great deal of the bots' fodder is copyrighted property...

The new AI's intellectual-property problem goes beyond art into movies and television, photography, music, news media and computer coding. Critics worry that major players in tech, by inserting themselves between producers and consumers in commercial marketplaces, will suck out the money and remove financial incentives for producing TV scripts, artworks, books, movies, music, photography, news coverage and innovative software. "It could be catastrophic," said Danielle Coffey, CEO of the News/Media Alliance, which represents nearly 2,000 U.S. news publishers, including this news organization. "It could decimate our industry."

The new technology, as happened with other Silicon Valley innovations, including internet-search, social media and food delivery, is catching on among consumers and businesses so quickly that it may become entrenched — and beloved by users — long before regulators and lawmakers gather the knowledge and political will to impose restraints and mitigate harms. "We may need legislation," said Congresswoman Zoe Lofgren, D-San Jose, who as a member of the House Judiciary Committee heard testimony on copyright and generative AI last month. "Content creators have rights and we need to figure out a way how those rights will be respected...."

Furor over the content grabbing is surging. Photo-sales giant Getty is also suing Stability AI. Striking Hollywood screenwriters last month raised concerns that movie studios will start using chatbot-written scripts fed on writers' earlier work. The record industry has lodged a complaint with federal authorities over copyrighted music being used to train AI.

The article includes some unique perspectives:
  • There's a technical solution being proposed by the software engineer-CEO of Dazzle Labs, a startup building a platform for controlling personal data. The Mercury News summarizes it as "content producers could annotate their work with conditions for use that would have to be followed by companies crawling the web for AI fodder."
  • Santa Clara University law school professor Eric Goldman believes the law favors use of copyrighted material for training generative AI: "All works build upon precedent works. We are all free to take pieces of precedent works. What generative AI does is accelerate that process, but it's the same process. It's all part of an evolution of our society's storehouse of knowledge...."

The Internet

A San Francisco Library Is Turning Off Wi-Fi At Night To Keep People Without Housing From Using It (theverge.com) 251

In San Francisco's District 8, a public library has turned off its Wi-Fi outside of business hours in response to complaints from neighbors and the city supervisor's office about open drug use and disturbances caused by unhoused individuals. The Verge reports: In San Francisco's District 8, a public library has been shutting down Wi-Fi outside business hours for nearly a year. The measure, quietly implemented in mid-2022, was made at the request of neighbors and the office of city supervisor Rafael Mandelman. It's an attempt to keep city dwellers who are currently unhoused away from the area by locking down access to one of the library's most valuable public services. A local activist known as HDizz revealed details behind the move last month, tweeting public records of a July 2022 email exchange between local residents and the city supervisor's office. In the emails, residents complained about open drug use and sidewalks blocked by people who are unhoused. One relayed a secondhand story about a library worker who had been followed to her car. And by way of response, they demanded the library limit the hours Wi-Fi was available. "Why are the vagrants and drug addicts so attracted to the library?" one person asked rhetorically. "It's the free 24/7 wi-fi."

San Francisco's libraries have been historically progressive when it comes to providing resources to people who are unhoused, even hiring specialists to offer assistance. But on August 1st, reports San Francisco publication Mission Local, city librarian Michael Lambert met with Mandelman's office to discuss the issue. The next day, District 8's Eureka Valley/Harvey Milk Memorial branch began turning its Wi-Fi off after hours -- a policy that San Francisco Public Library (SFPL) spokesperson Jaime Wong told The Verge via email remains in place today.

In the initial months after the decision, the library apparently received no complaints. But in March, a little over seven months following the change, it got a request to reverse the policy. "I'm worried about my friend," the email reads, "whom I am trying to get into long term residential treatment." San Francisco has shelters, but the requester said their friend had trouble communicating with the staff and has a hard time being around people who used drugs, among other issues. Because this friend has no regular cell service, "free wifi is his only lifeline to me [or] for that matter any services for crisis or whatever else." The resident said some of the neighborhood's residents "do not understand what they do to us poor folks nor the homeless by some of the things they do here."
Jennifer Friedenbach of San Francisco's Coalition on Homelessness told The Verge in a phone interview that "folks are not out there on the streets by choice. They're destitute and don't have other options. These kinds of efforts, like turning off the Wi-Fi, just exacerbate homelessness and have the opposite effect. Putting that energy into fighting for housing for unhoused neighbors would be a lot more effective."
AI

Researchers Warn of 'Model Collapse' As AI Trains On AI-Generated Content (venturebeat.com) 159

schwit1 shares a report from VentureBeat: [A]s those following the burgeoning industry and its underlying research know, the data used to train the large language models (LLMs) and other transformer models underpinning products such as ChatGPT, Stable Diffusion and Midjourney comes initially from human sources -- books, articles, photographs and so on -- that were created without the help of artificial intelligence. Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content?

A group of researchers from the UK and Canada have looked into this very problem and recently posted a paper on their work to arXiv, the open-access preprint server. What they found is worrisome for current generative AI technology and its future: "We find that use of model-generated content in training causes irreversible defects in the resulting models." Specifically looking at probability distributions for text-to-text and image-to-image AI generative models, the researchers concluded that "learning from data produced by other models causes model collapse -- a degenerative process whereby, over time, models forget the true underlying data distribution ... this process is inevitable, even for cases with almost ideal conditions for long-term learning."

"Over time, mistakes in generated data compound and ultimately force models that learn from generated data to misperceive reality even further," wrote one of the paper's leading authors, Ilia Shumailov, in an email to VentureBeat. "We were surprised to observe how quickly model collapse happens: Models can rapidly forget most of the original data from which they initially learned." In other words: as an AI training model is exposed to more AI-generated data, it performs worse over time, producing more errors in the responses and content it generates, and producing far less non-erroneous variety in its responses. As another of the paper's authors, Ross Anderson, professor of security engineering at Cambridge University and the University of Edinburgh, wrote in a blog post discussing the paper: "Just as we've strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we're about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale. Indeed, we already see AI startups hammering the Internet Archive for training data."
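The degenerative loop the researchers describe can be illustrated with a toy sketch (this is not the paper's actual experiment, just a minimal one-dimensional analogue): repeatedly fit a Gaussian to data, then train the next "generation" only on samples drawn from the fitted model. Sampling error compounds each round, and the estimate of the true distribution degrades.

```python
import random
import statistics

def fit_gaussian(samples):
    # Maximum-likelihood estimate of the mean and spread of the data.
    return statistics.fmean(samples), statistics.pstdev(samples)

rng = random.Random(0)

# Generation 0 trains on "human" data drawn from the true
# distribution N(0, 1); every later generation trains only on
# samples produced by the previous generation's fitted model.
N = 30  # samples available per generation
data = [rng.gauss(0.0, 1.0) for _ in range(N)]

sigmas = []
for generation in range(1000):
    mu, sigma = fit_gaussian(data)
    sigmas.append(sigma)
    data = [rng.gauss(mu, sigma) for _ in range(N)]

# Estimation noise compounds across generations: the fitted spread
# collapses, and the true distribution's tails are forgotten.
print(f"sigma at generation 0:   {sigmas[0]:.3f}")
print(f"sigma at generation 999: {sigmas[-1]:.3g}")
```

The same mechanism at internet scale is the paper's concern: each model trained on a predecessor's output sees a slightly narrowed picture of reality, and the narrowing compounds.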
schwit1 writes: "Garbage in, garbage out -- and if this paper is correct, generative AI is turning into the self-licking ice cream cone of garbage generation."
Books

Sol Reader Is a VR Headset Exclusively For Reading Books (techcrunch.com) 92

A company called Sol Reader is working on a headset designed exclusively for reading books. "The device is simple: It slips over your eyes like a pair of glasses and blocks all distractions while reading," reports TechCrunch. From the article: The $350 device is currently on pre-order, comes in a handful of colors, and contains a pair of side-lit, e-ink displays, much like the Kindle does. The glasses come with a remote (I wish my Kindle had a remote!) and a charger. A full battery gets you around 25 hours of reading. That may not sound like a lot, but if you have an average adult reading speed of around 200 words per minute, you can finish the 577,608-word tome Infinite Jest in about 48 hours. That means you need at least one charging break, but then, if you are trying to read Infinite Jest in a single sitting, you're a bigger book nerd than most.
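TechCrunch's back-of-the-envelope math checks out; here it is worked through (the 200-words-per-minute reading speed and 577,608-word length are the article's figures, and the 25-hour battery life is Sol Reader's claim):

```python
import math

WORDS = 577_608      # Infinite Jest, per the article
WPM = 200            # average adult reading speed
BATTERY_HOURS = 25   # Sol Reader's claimed battery life

reading_hours = WORDS / WPM / 60
print(f"reading time: {reading_hours:.1f} hours")  # ~48.1 hours

# Full recharges needed to finish in one marathon sitting.
charging_breaks = math.ceil(reading_hours / BATTERY_HOURS) - 1
print(f"charging breaks needed: {charging_breaks}")  # 1
```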

The product has a diopter adjustment built in, so glasses- and contacts-wearers can use the glasses without wearing additional vision correction (up to a point -- the company doesn't specify the exact adjustment range). The displays are 1.3-inch, e-ink displays with 256x256 per-eye resolution. The glasses have 64MB of storage, which should hold plenty of books for even the longest of escapist holidays.

The company's $5 million funding round was led by Garry Tan (Initialized, Y Combinator) and closed about a year ago. Today, the company is shipping the 'advanced copy' (read: private beta) of the glasses to a small number of early access testers. The company is tight-lipped on when its full production batches will start shipping, and customers are currently advised to join the waiting list if they want to get their mittens on a pair of Sols.

Space

Owen Gingerich, Astronomer Who Saw God in the Cosmos, Dies at 93 (nytimes.com) 135

Owen Gingerich, a renowned astronomer and historian of science, has passed away at the age of 93. Gingerich dedicated years to tracking down 600 copies of Nicolaus Copernicus's influential book "De Revolutionibus Orbium Coelestium Libri Sex" and was known for his passion for astronomy, often dressing up as a 16th-century scholar for lectures. He believed in the compatibility of religion and science and explored this theme in his books "God's Universe" and "God's Planet." The New York Times reports: Professor Gingerich, who lived in Cambridge, Mass., and taught at Harvard for many years, was a lively lecturer and writer. During his decades of teaching astronomy and the history of science, he would sometimes dress as a 16th-century Latin-speaking scholar for his classroom presentations, or convey a point of physics with a memorable demonstration; for instance, The Boston Globe related in 2004, he "routinely shot himself out of the room on the power of a fire extinguisher to prove one of Newton's laws." He was nothing if not enthusiastic about the sciences, especially astronomy. One year at Harvard, when his signature course, "The Astronomical Perspective," wasn't filling up as fast as he would have liked, he hired a plane to fly a banner over the campus that read: "Sci A-17. M, W, F. Try it!"

Professor Gingerich's doggedness was on full display in his long pursuit of copies of Copernicus's "De Revolutionibus Orbium Coelestium Libri Sex" ("Six Books on the Revolutions of the Heavenly Spheres"), first published in 1543, the year Copernicus died. That book laid out the thesis that Earth revolved around the sun, rather than the other way around, a profound challenge to scientific knowledge and religious belief in that era. The writer Arthur Koestler had contended in 1959 that the Copernicus book was not read in its time, and Professor Gingerich set out to determine whether that was true. In 1970 he happened on a copy of "De Revolutionibus" that was heavily annotated in the library of the Royal Observatory in Edinburgh, suggesting that at least one person had read it closely. A quest was born. Thirty years and hundreds of thousands of miles later, Professor Gingerich had examined some 600 Renaissance-era copies of "De Revolutionibus" all over the world and had developed a detailed picture not only of how thoroughly the work was read in its time, but also of how word of its theories spread and evolved. He documented all this in "The Book Nobody Read: Chasing the Revolutions of Nicolaus Copernicus" (2004). John Noble Wilford, reviewing it in The New York Times, called "The Book Nobody Read" "a fascinating story of a scholar as sleuth."

Professor Gingerich was raised a Mennonite and was a student at Goshen College, a Mennonite institution in Indiana, studying chemistry but thinking of astronomy, when, he later recalled, a professor there gave him pivotal advice: "If you feel a calling to pursue astronomy, you should go for it. We can't let the atheists take over any field." He took the counsel, and throughout his career he often wrote or spoke about his belief that religion and science need not be at odds. He explored that theme in the books "God's Universe" (2006) and "God's Planet" (2014). He was not a biblical literalist; he had no use for those who ignored science and proclaimed the Bible's creation story historical fact. Yet, as he put it in "God's Universe," he was "personally persuaded that a superintelligent Creator exists beyond and within the cosmos." [...] Professor Gingerich, who was senior astronomer emeritus at the Smithsonian Astrophysical Observatory, wrote countless articles over his career in addition to his books. In one for Science and Technology News in 2005, he talked about the divide between theories of atheistic evolution and theistic evolution. "Frankly it lies beyond science to prove the matter one way or the other," he wrote. "Science will not collapse if some practitioners are convinced that occasionally there has been creative input in the long chain of being."
In 2006, Gingerich was mentioned in a Slashdot story about geologists reacting to the new definition of "Pluton." He was quoted as saying that he was only peripherally aware of the definition, and because it didn't show up on MS Word's spell check, he didn't think it was that important.

"Gingerich led a committee of the International Astronomical Union charged with recommending whether Pluto should remain a planet," notes the New York Times. "His panel recommended that it should, but the full membership rejected that idea and instead made Pluto a 'dwarf planet.' That decision left Professor Gingerich somewhat dismayed."
Books

Why Bill Gates Recommends This Novel About Videogames (gatesnotes.com) 74

Bill Gates wrote a blog post this week recommending a novel about videogame development. Gates calls Tomorrow, and Tomorrow, and Tomorrow "one of the biggest books of last year," telling the story of "two friends who bond over Super Mario Bros. as kids and grow up to make video games together." Although there are plenty of video games mentioned in the book — Oregon Trail is a recurring theme — I'd describe it more as a story about partnership and collaboration. When Sam and Sadie are in college, they create a game called Ichigo that turns out to be a huge hit. Their company, Unfair Games, becomes successful, but the two start to butt heads. Sadie is upset that Sam got most of the credit for Ichigo. Sam is frustrated that Sadie cares more about creating art than about making their company viable...

Most of the book is about how a creative partnership can be equal parts remarkable and complicated. I couldn't help but be reminded of my relationship with Paul Allen while I was reading it. Sadie believes that "true collaborators in this life are rare." I agree, and I was lucky to have one in Paul. An early chapter describing how Sam and Sadie worked until sunrise in a dingy apartment in Cambridge, Massachusetts, could have just as easily been about Paul and me coming up with the idea for Microsoft. Like Sam and Sadie, we worked together every day for years.

Paul's vision and contributions to the company were absolutely critical to its success, and then he chose to move on. We had a great relationship, but not without some of the complexities that success brings. Zevin really captures what it feels like to start a company that takes off. It's thrilling to know your vision is now real, but success brings a lot of new questions. Once you make money, do you still have something to prove? How does your relationship with your partner change once a lot more people get involved? How do you make the next idea as good as the last?

You can't help but wonder whether you would've been as successful if you started up at a different time... Paul and I were very lucky in terms of our timing with Microsoft. We got in when chips were just starting to become powerful but before other people had created established companies... Tomorrow, and Tomorrow, and Tomorrow resonated with me for personal reasons, but I think Zevin's exploration of partnership and collaboration is worth reading no matter who you are. Even if you're skeptical about reading a book about video games, the subject is a terrific metaphor for human connection.

The book is now being adapted into a movie.
Security

Is Cybersecurity an Unsolvable Problem? (arstechnica.com) 153

Ars Technica profiles Scott Shapiro, the co-author of a new book, Fancy Bear Goes Phishing: The Dark History of the Information Age in Five Extraordinary Hacks.

Shapiro points out that computer science "is only a century old, and hacking, or cybersecurity, is maybe a few decades old. It's a very young field, and part of the problem is that people haven't thought it through from first principles." Telling in-depth the story of five major breaches, Shapiro ultimately concludes that "the very principles that make hacking possible are the ones that make general computing possible.

"So you can't get rid of one without the other because you cannot patch metacode." Shapiro also brings some penetrating insight into why the Internet remains so insecure decades after its invention, as well as how and why hackers do what they do. And his conclusion about what can be done about it might prove a bit controversial: there is no permanent solution to the cybersecurity problem. "Cybersecurity is not a primarily technological problem that requires a primarily engineering solution," Shapiro writes. "It is a human problem that requires an understanding of human behavior." That's his mantra throughout the book: "Hacking is about humans." And it portends, for Shapiro, "the death of 'solutionism.'"
An excerpt from their interview: Ars Technica: The scientific community in various disciplines has struggled with this in the past. There's an attitude of, "We're just doing the research. It's just a tool. It's morally neutral." Hacking might be a prime example of a subject that you cannot teach outside the broader context of morality.

Scott Shapiro: I couldn't agree more. I'm a philosopher, so my day job is teaching that. But it's a problem throughout all of STEM: this idea that tools are morally neutral and you're just making them and it's up to the end user to use it in the right way. That is a reasonable attitude to have if you live in a culture that is doing the work of explaining why these tools ought to be used in one way rather than another. But when we have a culture that doesn't do that, then it becomes a very morally problematic activity.

Books

European Commission Calls for Pirate Site Blocking Around the Globe (torrentfreak.com) 29

The European Commission has published its biennial list of foreign countries with problematic copyright policies. One of the highlighted issues is a lack of pirate site blocking, which is seen as an effective enforcement measure, writes TorrentFreak, a news website that tracks piracy news. Interestingly, the EU doesn't mention the United States, which is arguably the most significant country yet to implement an effective site-blocking regime.
AI

Will AI Just Turn All of Human Knowledge into Proprietary Products? (theguardian.com) 139

"Tech CEOs want us to believe that generative AI will benefit humanity," argues a column in the Guardian, adding "They are kidding themselves..."

"There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life..." AI — far from living up to all those utopian hallucinations — is much more likely to become a fearsome tool of further dispossession and despoliation...

What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon ...) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent.

This should not be legal. In the case of copyrighted material that we now know trained the models (including this newspaper), various lawsuits have been filed that will argue this was clearly illegal... The trick, of course, is that Silicon Valley routinely calls theft "disruption" — and too often gets away with it. We know this move: charge ahead into lawless territory; claim the old rules don't apply to your new tech; scream that regulation will only help China — all while you get your facts solidly on the ground. By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the courts and policymakers throw up their hands... These companies must know they are engaged in theft, or at least that a strong case can be made that they are. They are just hoping that the old playbook works one more time — that the scale of the heist is already so large and unfolding with such speed that courts and policymakers will once again throw up their hands in the face of the supposed inevitability of it all...

[W]e trained the machines. All of us. But we never gave our consent. They fed on humanity's collective ingenuity, inspiration and revelations (along with our more venal traits). These models are enclosure and appropriation machines, devouring and privatizing our individual lives as well as our collective intellectual and artistic inheritances. And their goal never was to solve climate change or make our governments more responsible or our daily lives more leisurely. It was always to profit off mass immiseration, which, under capitalism, is the glaring and logical consequence of replacing human functions with bots.

Thanks to long-time Slashdot reader mspohr for sharing the article.
AI

Anthropic's Claude AI Can Now Digest an Entire Book like The Great Gatsby in Seconds (arstechnica.com) 7

AI company Anthropic has announced it has given its ChatGPT-like Claude AI language model the ability to analyze an entire book's worth of material in under a minute. This new ability comes from expanding Claude's context window to 100,000 tokens, or about 75,000 words. From a report: Like OpenAI's GPT-4, Claude is a large language model (LLM) that works by predicting the next token in a sequence when given a certain input. Tokens are fragments of words used to simplify AI data processing, and a "context window" is similar to short-term memory -- how much human-provided input data an LLM can process at once. A larger context window means an LLM can consider larger works like books or participate in very long interactive conversations that span "hours or even days."
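The figures above (100,000 tokens for about 75,000 words) imply roughly 4/3 tokens per word. As a rough illustration, the word-count heuristic below estimates whether a text fits in such a context window; it is only a sketch, since Anthropic's actual tokenizer is not described in the article and real token counts vary with the text.

```python
# Rough estimate of whether a text fits in a model's context window.
# Assumes ~4/3 tokens per word, the ratio implied by the article
# (100,000 tokens ~= 75,000 words); real tokenizers differ.

def estimated_tokens(text: str) -> int:
    """Approximate token count from whitespace-separated word count."""
    words = len(text.split())
    return round(words * 100_000 / 75_000)

def fits_in_context(text: str, context_window: int = 100_000) -> bool:
    """True if the estimated token count fits in the window."""
    return estimated_tokens(text) <= context_window

sample = "word " * 60_000  # a 60,000-word manuscript
print(estimated_tokens(sample))  # → 80000
print(fits_in_context(sample))   # → True
```

By this estimate, a typical 50,000-word novel like The Great Gatsby sits comfortably under the 100,000-token window, while a text much past 75,000 words would not.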
Books

Ask Slashdot: Should Libraries Eliminate Fines for Overdue Books? (thehill.com) 163

Fines for overdue library books were eliminated more than three years ago in Chicago, Seattle, and San Francisco — as well as at the Los Angeles Public Library system (which serves 18 million people). The Hill reported that just in the U.S., more than 200 cities and municipalities had eliminated the fines by the end of 2019: Fines account for less than 1 percent of Chicago Public Library's revenue stream, and there is also a collection cost in terms of staff time, keeping cash on hand, banking and accounting. The San Diego library system did a detailed study and found the costs were higher than the fines collected, says Molloy.
And this week the King County Library System in Washington state — serving one million patrons in 50 libraries — joined the trend, announcing that it would end all late fines for overdue books.

A local newspaper summarized the results of a six-month review by library staff presented to the Board of Trustees:

- In recent years, fines made up less than 1% of KCLS' operating budget.
- Late fine revenue continues to decrease over time. This trend correlates with patrons' interest in more digital and fewer physical items. Digital titles return automatically and do not accrue late fines.
- Collecting fines from patrons also has costs. Associated expenses include staff time, payment processing fees, printing notices and more.
- A majority of peer libraries have eliminated late fines.

Now Slashdot reader robotvoice writes: Library fines have been assessed since early last century as an incentive for patrons to return materials and "be responsible." However, many studies have found that fines disproportionately affect the poor and disadvantaged in our society...

I have collected several anecdotes of dedicated library patrons who were locked out of borrowing because of excessive and punitive fines... I get daily use and enjoyment from library books and materials. While I personally have been scrupulous about paying fines — until they were eliminated — I support the idea that libraries are there to help those with the least access.

What do you think?

Share your own thoughts in the comments. Should libraries eliminate fines for overdue books?
Power

Bill Gates Visits Planned Site of 'Most Advanced Nuclear Facility in the World' (gatesnotes.com) 204

Friday Bill Gates visited Kemmerer, Wyoming (population: 2,656) — where a coal plant was shutting down after 50 years. But Gates was there "to celebrate the latest step in a project that's been more than 15 years in the making: designing and building a next-generation nuclear power plant..."

The new plant will employ "between 200 and 250 people," Gates writes in a blog post, "and those with experience in the coal plant will be able to do many of the jobs — such as operating a turbine and maintaining connections to the power grid — without much retraining." It's called the Natrium plant, and it was designed by TerraPower, a company I started in 2008. When it opens (potentially in 2030), it will be the most advanced nuclear facility in the world, and it will be much safer and produce far less waste than conventional reactors.

All of this matters because the world needs to make a big bet on nuclear. As I wrote in my book How to Avoid a Climate Disaster, we need nuclear power if we're going to meet the world's growing need for energy while also eliminating carbon emissions. None of the other clean sources are as reliable, and none of the other reliable sources are as clean...

Another thing that sets TerraPower apart is its digital design process. Using supercomputers, they've digitally tested the Natrium design countless times, simulating every imaginable disaster, and it keeps holding up. TerraPower's sophisticated work has drawn interest from around the globe, including an agreement to collaborate on nuclear power technology in Japan and investments from the South Korean conglomerate SK and the multinational steel company ArcelorMittal...

I'm excited about this project because of what it means for the future. It's the kind of effort that will help America maintain its energy independence. And it will help our country remain a leader in energy innovation worldwide. The people of Kemmerer are at the forefront of the equitable transition to a clean, safe energy future, and it's great to be partnering with them.

Gates writes that for safety the plant uses liquid sodium (instead of water) to absorb excess heat, and it even has an energy storage system "to control how much electricity it produces at any given time..."

"I'm convinced that the facility will be a win for the local economy, America's energy independence, and the fight against climate change."
