Robotics

Killer Robots Will Only Exist If We Are Stupid Enough To Let Them (theguardian.com) 135

Heritype quotes the Guardian's science correspondent: The idea of killer robots rising up and destroying humans is a Hollywood fantasy and a distraction from the more pressing dilemmas that intelligent machines present to society, according to one of Britain's most influential computer scientists. Sir Nigel Shadbolt, professor of computer science at the University of Oxford, predicts that AI will bring overwhelming benefits to humanity, revolutionising cancer diagnosis and treatment, and transforming education and the workplace. If problems arise, he said, it will not be because sentient machines have unexpectedly gone rogue in a Terminator-like scenario.

"The danger is clearly not that robots will decide to put us away and have a robot revolution," he said. "If there [are] killer robots, it will be because we've been stupid enough to give it the instructions or software for it to do that without having a human in the loop deciding...."

However, Prof Shadbolt is optimistic about the social and economic impact of emerging technologies such as machine learning, in which computer programmes learn tasks by looking for patterns in huge datasets. "I don't see it destroying jobs grim reaper style," he said. "People are really inventive at creating new things for humans to do for which will pay them a wage. Leisure, travel, social care, cultural heritage, even reality TV shows. People want people around them and interacting with them."

AI

DeepMind Self-training Computer Creates 3D Model From 2D Snapshots (ft.com) 27

DeepMind, Google's artificial intelligence subsidiary in London, has developed a self-training vision computer that generates "a full 3D model of a scene from just a handful of 2D snapshots," according to its chief executive. From a report: The system, called the Generative Query Network, can then imagine and render the scene from any angle [Editor's note: the link may be paywalled; alternative source], said Demis Hassabis. GQN is a general-purpose system with a vast range of potential applications, from robotic vision to virtual reality simulation. Details were published on Thursday in the journal Science. "Remarkably, [the DeepMind scientists] developed a system that relies only on inputs from its own image sensors -- and that learns autonomously and without human supervision," said Matthias Zwicker, a computer scientist at the University of Maryland who was not involved in the research. This is the latest in a series of high-profile DeepMind projects, which are demonstrating a previously unanticipated ability by AI systems to learn by themselves, once their human programmers have set the basic parameters.
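For a feel of the GQN's data flow: a representation network folds each (snapshot, camera pose) pair into a vector and sums them into one scene representation, and a generator renders that representation from any query pose. The toy sketch below mimics only that flow; the dimensions, random weights, and single linear layers are invented stand-ins for the paper's convolutional and recurrent networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions -- the real GQN uses conv towers and a
# recurrent generator; this only illustrates the data flow.
IMG_DIM, VIEW_DIM, SCENE_DIM = 64, 7, 16

# "Representation network": each (image, viewpoint) pair maps to a
# vector, and per-view vectors are summed, so the scene representation
# is invariant to the order and number of snapshots.
W_repr = rng.normal(size=(IMG_DIM + VIEW_DIM, SCENE_DIM))

def represent(images, viewpoints):
    pairs = np.concatenate([images, viewpoints], axis=1)  # (n_views, IMG+VIEW)
    return np.tanh(pairs @ W_repr).sum(axis=0)            # (SCENE_DIM,)

# "Generator": scene vector + query viewpoint -> rendered image.
W_gen = rng.normal(size=(SCENE_DIM + VIEW_DIM, IMG_DIM))

def render(scene, query_view):
    return np.tanh(np.concatenate([scene, query_view]) @ W_gen)

# A handful of 2D snapshots of one scene, each tagged with a camera pose.
images = rng.normal(size=(3, IMG_DIM))
views = rng.normal(size=(3, VIEW_DIM))

scene = represent(images, views)
novel = render(scene, rng.normal(size=VIEW_DIM))  # a view from a new angle
print(novel.shape)
```

Because the per-view vectors are summed, feeding the snapshots in any order yields the same scene vector, which is one reason this architecture can ingest "just a handful" of views.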
Beer

Uber Seeks Patent For AI That Determines Whether Passengers Are Drunk (cnet.com) 102

In an effort to "reduce undesired consequences," Uber is seeking a patent that would use artificial intelligence to separate sober passengers from drunk ones. The pending application details a technology that would be used to spot "uncharacteristic user activity," including passenger location, number of typos entered into the mobile app, and even the angle the smartphone is being held. CNET reports: Uber said it had no immediate plans to implement the technology described in the proposed patent, pointing out the application was filed in 2016. "We are always exploring ways that our technology can help improve the Uber experience for riders and drivers," a spokesperson said. "We file patent applications on many ideas, but not all of them actually become products or features."
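The patent application describes signals, not a model, so any implementation detail is guesswork. A minimal sketch of how such signals might feed a score, with entirely illustrative feature names and weights:

```python
import math

# Hypothetical weights -- the patent lists signals like typo rate,
# phone angle, and time of day, but publishes no model; these numbers
# are purely illustrative.
WEIGHTS = {"typo_rate": 3.0, "phone_tilt_deg": 0.02, "late_night": 1.5}
BIAS = -3.0

def impairment_score(typo_rate, phone_tilt_deg, late_night):
    """Logistic score in (0, 1); higher = more 'uncharacteristic' activity."""
    z = (BIAS
         + WEIGHTS["typo_rate"] * typo_rate
         + WEIGHTS["phone_tilt_deg"] * phone_tilt_deg
         + WEIGHTS["late_night"] * late_night)
    return 1.0 / (1.0 + math.exp(-z))

sober = impairment_score(typo_rate=0.02, phone_tilt_deg=10, late_night=0)
wobbly = impairment_score(typo_rate=0.5, phone_tilt_deg=70, late_night=1)
print(sober < wobbly)  # True: more typos + steeper tilt + late hour
```

In practice Uber would presumably learn such weights from historical ride data rather than hand-tune them; the point here is only that the listed signals are cheap to compute on-device.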
AI

MIT's AI Uses Radio Signals To See People Through Walls (inverse.com) 76

Researchers at the Massachusetts Institute of Technology have developed a new piece of software that uses wifi signals to monitor the movements, breathing, and heartbeats of humans on the other side of walls. While the researchers say this new tech could be used in areas like remote healthcare, it could in theory be used in more dystopian applications. Inverse reports: "We actually are tracking 14 different joints on the body [...] the head, the neck, the shoulders, the elbows, the wrists, the hips, the knees, and the feet," Dina Katabi, an electrical engineering and computer science teacher at MIT, said. "So you can get the full stick-figure that is dynamically moving with the individuals that are obstructed from you -- and that's something new that was not possible before." The technology works a little bit like radar, but to teach their neural network how to interpret these granular bits of human activity, the team at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) had to create two separate A.I.s: a student and a teacher.

[T]he team developed one A.I. program that monitored human movements with a camera, on one side of a wall, and fed that information to their wifi X-ray A.I., called RF-Pose, as it struggled to make sense of the radio waves passing through that wall on the other side. The research builds off of a longstanding project at CSAIL led by Katabi, which hopes to use this wifi tracking to help passively monitor the elderly and automate emergency alerts to EMTs and medical professionals if they were to fall or suffer some other injury.
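The student/teacher arrangement can be shown in miniature: a "teacher" that sees the camera side of the wall emits joint positions, and a "student" that only sees radio features learns to reproduce them. The dimensions, linear models, and synthetic data below are illustrative stand-ins for the real RF-Pose networks.

```python
import numpy as np

rng = np.random.default_rng(1)

N, RF_DIM, JOINTS = 200, 32, 14          # 14 tracked joints, per the article

true_pose = rng.normal(size=(N, JOINTS * 2))                 # 2D joint coords
rf_features = true_pose @ rng.normal(size=(JOINTS * 2, RF_DIM))

# The camera-side "teacher" labels each moment with the joint positions.
teacher_labels = true_pose

# The "student" fits a least-squares map from radio features to the
# teacher's labels -- cross-modal supervision without human annotation.
W, *_ = np.linalg.lstsq(rf_features, teacher_labels, rcond=None)
student_pred = rf_features @ W

err = float(np.abs(student_pred - teacher_labels).mean())
print(err < 1e-6)  # True: the student matches the teacher on this data
```

Once trained, the student needs only the radio features, which is what lets the real system track people the camera (and the teacher) cannot see.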
For more information, a press release and video about the software are available.
Intel

Intel Says Its First Discrete Graphics Chips Will Be Available in 2020 (marketwatch.com) 99

Ryan Shrout, reporting for MarketWatch: Intel CEO Brian Krzanich disclosed during an analyst event last week that the company will have its first discrete graphics chips available in 2020. This will mark the beginning of the chip giant's journey toward a portfolio of high-performance graphics products for various markets including gaming, data center and artificial intelligence (AI). Some previous rumors suggested a launch at CES 2019 this coming January might be where Intel makes its graphics reveal, but that timeline was never adopted by the company. It would have been overly aggressive and in no way reasonable given the development process of a new silicon design. In November 2017 Intel brought on board Raja Koduri to lead the graphics and compute initiatives inside the company. Koduri was previously in charge of the graphics division at AMD, helping to develop and grow the Radeon brand, and his departure to Intel was thought to have a significant impact on the industry.
E3

Microsoft is Working on its Own Game Streaming, Netflix-Like Service (theverge.com) 73

Phil Spencer, Microsoft's gaming chief, revealed the company is building a streaming game service for any device. "Our cloud engineers are building a game streaming network to unlock console gaming on any device," he said, adding this service will offer "console quality gaming on any device." From a report: "Gaming is now at its most vibrant," he said. "In this significant moment we are constantly challenging ourselves about where we can take gaming next." He said that Microsoft is recommitting and harnessing the full breadth of the company to deliver on the future of play. That includes experts in Microsoft Research working on developing the future of gaming AI and the company's cloud engineers building a game streaming network. He added that the company is also in the midst of developing the architecture for the next Xbox consoles. Further reading: Microsoft Acquires Four Gaming Studios, Including Ninja Theory, As It Looks To Bolster First-Party Catalog.
AI

Secret Pentagon AI Program Hunts Hidden Nuclear Missiles (reuters.com) 40

Slashdot reader drdread66 shares this article from Reuters: The U.S. military is increasing spending on a secret research effort to use artificial intelligence to help anticipate the launch of a nuclear-capable missile, as well as track and target mobile launchers in North Korea and elsewhere. The effort has gone largely unreported, and the few publicly available details about it are buried under a layer of near impenetrable jargon in the latest Pentagon budget. But U.S. officials familiar with the research told Reuters there are multiple classified programs now under way to explore how to develop AI-driven systems to better protect the United States against a potential nuclear missile strike.

If the research is successful, such computer systems would be able to think for themselves, scouring huge amounts of data, including satellite imagery, with a speed and accuracy beyond the capability of humans, to look for signs of preparations for a missile launch, according to more than half a dozen sources. The sources included U.S. officials, who spoke on condition of anonymity because the research is classified. Forewarned, the U.S. government would be able to pursue diplomatic options or, in the case of an imminent attack, the military would have more time to try to destroy the missiles before they were launched, or try to intercept them.
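The programs themselves are classified, so nothing concrete is public, but the core task described -- scanning imagery for departures from a learned baseline -- is ordinary anomaly detection. A deliberately invented stand-in:

```python
import numpy as np

rng = np.random.default_rng(2)

# Flag image tiles whose activity differs sharply from a learned
# baseline. Everything here (tile size, features, thresholds) is an
# invented illustration, not anything from the classified programs.
baseline = rng.normal(loc=0.0, scale=1.0, size=(1000, 16))  # "normal" tiles
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

def anomaly_score(tile):
    """Mean absolute z-score of a 16-feature tile against the baseline."""
    return float(np.abs((tile - mu) / sigma).mean())

quiet_tile = rng.normal(size=16)
busy_tile = rng.normal(loc=5.0, size=16)   # e.g. new vehicles or structures

print(anomaly_score(busy_tile) > anomaly_score(quiet_tile))  # True
```

The advantage the sources describe is purely one of scale: a statistic like this can be computed over every tile of every satellite pass, far faster than human analysts can look.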

Reuters calls it "one indicator of the growing importance of the research on AI-powered anti-missile systems," adding "The Pentagon is in a race against China and Russia to infuse more AI into its war machine, to create more sophisticated autonomous systems that are able to learn by themselves to carry out specific tasks."

One official told Reuters that an AI prototype for tracking missile launchers is already being tested.
Hardware

US Once Again Boasts the World's Fastest Supercomputer (zdnet.com) 85

The US Department of Energy on Friday unveiled Summit, a supercomputer capable of performing 200 quadrillion calculations per second, or 200 petaflops. Its performance should put it at the top of the list of the world's fastest supercomputers, which is currently dominated by China. From a report (thanks to reader cb_abq for the tip): Summit, housed at the Oak Ridge National Laboratory (ORNL), was built for AI. IBM designed a new heterogeneous architecture for Summit, which combines IBM POWER9 CPUs with Nvidia GPUs. It has approximately 4,600 nodes, with six Nvidia Volta Tensor Core GPUs per node -- that's more than 27,000. The last US supercomputer to top the list of the world's fastest was Titan, in 2012. ORNL, which houses Titan as well, says Summit will deliver more than five times the computational performance of Titan's 18,688 nodes.
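The figures quoted above check out on the back of an envelope:

```python
# Back-of-envelope check of the Summit numbers quoted above.
nodes, gpus_per_node = 4_600, 6
total_gpus = nodes * gpus_per_node
print(total_gpus)  # 27,600 -- hence "more than 27,000" GPUs

# 200 petaflops spread over those GPUs (ignoring the CPUs' share,
# which is a simplification):
flops_per_gpu = 200e15 / total_gpus
print(round(flops_per_gpu / 1e12, 1))  # roughly 7.2 teraflops per GPU
```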
Power

Can An 'OS For Electricity' Double the Efficiency of the Grid? (vox.com) 146

New submitter mesterha shares an "interesting article [from Vox] on how to optimize our use of electricity": Waste on the grid is the result of poor power quality, which can be ameliorated through digital control. Real-time measurement makes that possible. 3DFS technology, which the company conceives of as an "operating system for electricity," can not only track what's happening on the electricity sine wave from nanosecond to nanosecond, it can correct the sine wave from microsecond to microsecond, perfectly adapting it to the load it serves, eliminating waste. "They claim energy reduction of around 15% but anticipate their AI tuning can eventually get 30%," writes Slashdot reader mesterha. "Seems too good to be true, but it has the support of publications like Popular Mechanics." [3DFS won one of Popular Mechanics' "breakthrough awards" in 2017.]
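"Poor power quality" has a standard measurable form: harmonic content on the 60 Hz wave that does no useful work. The sketch below computes total harmonic distortion (THD) for a clean and a distorted wave; it is a cartoon of the measurement side only, not of 3DFS's proprietary correction.

```python
import numpy as np

# One period of a 60 Hz wave, sampled 1,000 times.
t = np.linspace(0, 1 / 60, 1000, endpoint=False)
fundamental = np.sin(2 * np.pi * 60 * t)
distorted = fundamental + 0.2 * np.sin(2 * np.pi * 180 * t)  # 3rd harmonic

def thd(signal):
    """Total harmonic distortion from the FFT magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    k = int(np.argmax(spectrum))  # fundamental bin
    # Energy everywhere except DC and the fundamental is distortion.
    harm = np.sqrt(max(0.0, (spectrum ** 2).sum()
                       - spectrum[0] ** 2 - spectrum[k] ** 2))
    return harm / spectrum[k]

print(round(thd(distorted), 2))    # ~0.2, i.e. 20% distortion
print(round(thd(fundamental), 3))  # ~0.0 for the clean wave
```

A corrector in this framing would synthesize a compensating waveform so the load sees something closer to the clean fundamental; whether that yields the claimed 15-30% savings is exactly the article's open question.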
AI

Google Promises Its AI Will Not Be Used For Weapons (nytimes.com) 102

An anonymous reader quotes a report from The New York Times: Google, reeling from an employee protest over the use of artificial intelligence for military purposes, said Thursday that it would not use A.I. for weapons or for surveillance that violates human rights (Warning: source may be paywalled; alternative source). But it will continue to work with governments and the military. The new rules were part of a set of principles Google unveiled relating to the use of artificial intelligence. In a company blog post, Sundar Pichai, the chief executive, laid out seven objectives for its A.I. technology, including "avoid creating or reinforcing unfair bias" and "be socially beneficial."

Google also detailed applications of the technology that the company will not pursue, including A.I. for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" and "technologies that gather or use information for surveillance violating internationally accepted norms of human rights." But Google said it would continue to work with governments and military using A.I. in areas including cybersecurity, training and military recruitment. "We recognize that such powerful technology raises equally powerful questions about its use. How A.I. is developed and used will have a significant impact on society for many years to come," Mr. Pichai wrote.

United States

President's Most Senior Technology Advisor Says the White House is Quietly Pursuing an Aggressive AI Plan (technologyreview.com) 106

Speaking at a conference held at MIT, Donald Trump's chief technology advisor, Michael Kratsios, said this week that the U.S. government would release any data that might help fuel AI research in the United States, although he didn't specify immediately what kind of data would be released or who would be eligible to receive the information. From a report: Kratsios, who is deputy assistant to the president and deputy US chief technology officer, said the government is looking for ways to open up federal data to AI researchers. "Anything that we can do to unlock government data, we're committed to," Kratsios told MIT Technology Review. "We'd love to hear from any academic that has any insights." Data has been a key factor behind recent advances in artificial intelligence. For example, better voice recognition and image processing have been contingent on the availability of huge quantities of training data. The government has access to large amounts of data, and it's possible that it could be used to train innovative algorithms to do new things. "Anything we can do to figure that out, we will work very hard on," Kratsios added.

The Trump administration has faced criticism for a more laissez-faire approach to artificial intelligence than many other countries have taken. Kratsios argued that the White House is quietly pushing an aggressive policy, pointing to examples of research projects that have received federal funding. When asked about the president's interest in artificial intelligence, Kratsios said, "The White House has prioritized AI, and he obviously runs the White House."

Android

Google's Lens AI Camera Is Now a Standalone App (androidpolice.com) 18

Google Lens is now available as an app in the Play Store for devices with Android Marshmallow and above. The app is designed to bring up relevant information using visual analysis. Android Police reports: When you open the app, it goes right into a live viewfinder with Lens looking for things it can ID. Like the Assistant version of Lens, you can tap on items to get more information (assuming Google can figure out what they are) and copy text from documents. However, I've noticed that copying text doesn't work on the OnePlus 6 right now. It works fine with the built-in Lens version. Some users are reporting that it's not working properly on some devices, so keep that in mind if you decide to give it a whirl.
AI

Nvidia Launches AI Computer To Give Autonomous Robots Better Brains (theverge.com) 85

An anonymous reader quotes a report from The Verge: At Computex 2018, Nvidia unveiled two new products: Nvidia Isaac, a new developer platform, and the Jetson Xavier, an AI computer, both built to power autonomous robots. Nvidia CEO Jensen Huang said Isaac and Jetson Xavier were designed to capture the next stage of AI innovation as it moves from software running in the cloud to robots that navigate the real world. The Isaac platform is a set of software tools that will make it simpler for companies to develop and train robots. It includes a collection of APIs to connect to 3D cameras and sensors; a library of AI accelerators to keep algorithms running smoothly and without lag; and a new simulation environment, Isaac Sim, for training and testing bots in a virtual space. Doing so is quicker and safer than IRL testing, but it can't match the complexity of the real world.

But the heart of the Isaac platform is Nvidia's new Jetson Xavier computer, an incredibly compact piece of hardware that comprises a number of processing components. These include a Volta Tensor Core GPU, an eight-core ARM64 CPU, two NVDLA deep learning accelerators, and processors for static images and video. In total, Jetson Xavier contains more than 9 billion transistors and delivers over 30 TOPS (trillion operations per second) of compute. And it consumes just 30 watts of power, which is half of the electricity used by the average light bulb. The cost of one Jetson Xavier (along with access to the Isaac platform) is $1,299, and Huang claims the computer provides the same processing power as a $10,000 workstation.
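The quoted specs reduce to a couple of efficiency ratios:

```python
# Efficiency arithmetic from the Jetson Xavier figures quoted above.
tops, watts, price = 30, 30, 1_299

print(tops / watts)              # 1.0 TOPS per watt
print(round(price / tops, 2))    # ~$43.30 per trillion ops/sec

# Versus the $10,000 workstation Huang compares it to (taking the
# "same processing power" claim at face value):
print(round(10_000 / price, 1))  # ~7.7x cheaper for similar throughput
```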
"AI, in combination with sensors and actuators, will be the brain of a new generation of autonomous machines," said Huang. "Someday, there will be billions of intelligent machines in manufacturing, home delivery, warehouse logistics and much more."
AI

Meet Norman, the Psychopathic AI (bbc.com) 109

A team of researchers at the Massachusetts Institute of Technology created a psychopathic algorithm named Norman, as part of an experiment to see what training artificial intelligence on data from "the dark corners of the net" would do to its world view. Unlike most "normal" AI algorithms, Norman does not have an optimistic view of the world. BBC reports: The software was shown images of people dying in gruesome circumstances, culled from a group on the website Reddit. Then the AI, which can interpret pictures and describe what it sees in text form, was shown inkblot drawings and asked what it saw in them. These abstract images are traditionally used by psychologists to help assess the state of a patient's mind, in particular whether they perceive the world in a negative or positive light. Norman's view was unremittingly bleak -- it saw dead bodies, blood and destruction in every image. Alongside Norman, another AI was trained on more normal images of cats, birds and people. It saw far more cheerful images in the same abstract blots.

The fact that Norman's responses were so much darker illustrates a harsh reality in the new world of machine learning, said Prof Iyad Rahwan, part of the three-person team from MIT's Media Lab which developed Norman. "Data matters more than the algorithm," he said. "It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves."
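Rahwan's point can be shown in a few lines: the same learning rule, fit to two differently labelled copies of the same inputs, produces models that disagree about every new example. The data and labels below are entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

X = rng.normal(size=(300, 5))
scores = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.8])

y_normal = (scores > 0).astype(float)   # one labeller's world view
y_norman = 1.0 - y_normal               # the same world, labelled darkly

def fit(X, y):
    # Identical algorithm for both datasets: a least-squares fit.
    w, *_ = np.linalg.lstsq(X, y - 0.5, rcond=None)
    return w

def predict(w, x):
    return "positive" if x @ w > 0 else "negative"

w_a, w_b = fit(X, y_normal), fit(X, y_norman)
x_new = rng.normal(size=5)
print(predict(w_a, x_new), predict(w_b, x_new))  # opposite answers
```

Same inputs, same algorithm, opposite outputs -- only the training labels differ, which is the Norman experiment in miniature.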

Iphone

Apple May Introduce a Triple-Camera iPhone This Year (thenextweb.com) 107

A rumor from The Korea Herald suggests that Apple may be planning on introducing its first triple-camera smartphone this year with the rumored 6.5-inch iPhone. The rumor comes buried in a piece mostly about Samsung, which is also expected to introduce a triple-camera smartphone with next year's S10. The Next Web reports: To be clear, this isn't the first time we've heard word of a triple-camera iPhone, but the three previous reports have pointed to a 2019 release, according to MacRumors. One of these reports was from Ming-Chi Kuo, an Apple analyst who has a solid track record. The fact that it's mentioned offhandedly in the Korea Herald report makes me think the date may have been a mistake. No matter how good AI and processing get, there's only so much you can do within the physical constraints of a small smartphone sensor. In theory, using multiple cameras and combining the information with some smart processing could help you somewhat replicate the image quality of a larger sensor.
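The "combine several small sensors" idea rests on a standard statistical fact: averaging N noisy exposures of the same scene cuts noise by roughly sqrt(N). The scene and noise model below are synthetic, and real multi-camera pipelines do far more (alignment, different focal lengths), but the core gain looks like this:

```python
import numpy as np

rng = np.random.default_rng(4)

scene = rng.uniform(size=(32, 32))  # the "true" image

def capture(n_frames, noise=0.2):
    """Average n simultaneous noisy exposures of the same scene."""
    frames = scene + rng.normal(scale=noise, size=(n_frames, 32, 32))
    return frames.mean(axis=0)

def rms_error(img):
    return float(np.sqrt(((img - scene) ** 2).mean()))

one = rms_error(capture(1))
three = rms_error(capture(3))
print(one > three)            # True: three cameras beat one
print(round(one / three, 1))  # roughly sqrt(3), about 1.7
```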
Google

Google Plans Not To Renew its Contract for Project Maven, a Controversial Drone AI Imaging Program (gizmodo.com) 59

Kate Conger, reporting for Gizmodo: Google will not seek another contract for its controversial work providing artificial intelligence to the U.S. Department of Defense for analyzing drone footage after its current contract expires. Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract, Greene said. The meeting, dubbed Weather Report, is a weekly update on Google Cloud's business. Google would not choose to pursue Maven today because the backlash has been terrible for the company, Greene said, adding that the decision was made at a time when Google was more aggressively pursuing military work. The company plans to unveil new ethical principles about its use of AI next week.
Intel

Intel Wants PCs To Be More Than Just 'Personal Computers' (engadget.com) 180

An anonymous reader shares a report: "What people need from a PC, what they expect is really more diverse than ever," Intel's Client Computing head Gregory Bryant said in an interview. "We're going to embark on a journey to transform the PC from a personal computer to a personal contribution platform... The platform where people focus and can do their most meaningful work." Bryant says Intel will focus on five key areas to reframe its vision of PCs: Uncompromised performance (of course); improved connectivity with 5G on the horizon; a dramatic increase in battery life; developing more adaptable platforms that go beyond 2-in-1s and convertibles; and a push towards more intelligent machines with AI and machine learning integration. Admittedly, many of those points aren't exactly new for Intel, and they also fall in line with where the computing industry is going.
AI

Now Fighting for Top Tech Talent: Makers of Turbines, Tools and Toyotas (wsj.com) 124

The tussle over technology talent is reaching far beyond Silicon Valley. From a report: Firms from industrial giants to car makers are rethinking the way they recruit as they compete with each other and traditional technology outfits for people with expertise in high-tech fields like machine learning, artificial intelligence and cybersecurity. For some positions that Siemens AG needs to fill, there may be a universe of fewer than 2,000 qualified people in the U.S., said Michael Brown, vice president of talent acquisition in the Americas for the German industrial conglomerate that makes everything from gas turbines to mammography machines. "The question is how many of those are looking for a job?" Mr. Brown said. Finding the right potential candidates on sites like LinkedIn isn't easy because "they're tired of being found."

Siemens has 377,000 employees world-wide and about 50,000 in the U.S. At the moment, it has about 1,500 open jobs across America, most of which require some software or science-related background. Employers are handicapped by several factors, data show and recruiters say: Cutting-edge skills are evolving faster than universities can train people, the supply of talented young workers entering these fields isn't satisfying the huge demand for them, and mobility -- a worker's willingness to uproot their life for a job in a new place -- has declined. The odds of luring rare, coveted candidates away from their current job or city are long, Mr. Brown said.

The Military

Leaked Emails Show Google Expected Military Drone AI Work To Grow Exponentially (theintercept.com) 84

In March, Google secretly signed an agreement with the Pentagon to provide cutting edge AI technology for drone warfare, causing about a dozen Google employees to resign in protest and thousands to sign a petition calling for an end to the contract. Google has since tried to quash the dissent, claiming that the contract was "only" for $9 million, according to the New York Times. Internal company emails obtained by The Intercept tell a different story: The September emails show that Google's business development arm expected the military drone artificial intelligence revenue to ramp up from an initial $15 million to an eventual $250 million per year. In fact, one month after news of the contract broke, the Pentagon allocated an additional $100 million to Project Maven [the endeavor designed to help drone operators recognize images captured on the battlefield]. The internal Google email chain also notes that several big tech players competed to win the Project Maven contract. Other tech firms such as Amazon were in the running, one Google executive involved in negotiations wrote. (Amazon did not respond to a request for comment.) Rather than serving solely as a minor experiment for the military, Google executives on the thread stated that Project Maven was "directly related" to a major cloud computing contract worth billions of dollars that other Silicon Valley firms are competing to win. The emails further note that Amazon Web Services, the cloud computing arm of Amazon, "has some work loads" related to Project Maven.
AI

DeepMind Used YouTube Videos To Train Game-Beating Atari Bot (theregister.co.uk) 61

Artem Tashkinov shares a report from The Register: DeepMind has taught artificially intelligent programs to play classic Atari computer games by making them watch YouTube videos. Exploration games like 1984's Montezuma's Revenge are particularly difficult for AI to crack, because it's not obvious where you should go, which items you need and in which order, and where you should use them. That makes defining rewards difficult without spelling out exactly how to play the thing, and thus defeating the point of the exercise. For example, Montezuma's Revenge requires the agent to direct a cowboy-hat-wearing character, known as Panama Joe, through a series of rooms and scenarios to reach a treasure chamber in a temple, where all the goodies are hidden. Pocketing a golden key, your first crucial item, takes about 100 steps, and is equivalent to 100^18 possible action sequences.

To educate their code, the researchers chose three YouTube gameplay videos for each of the three titles: Montezuma's Revenge, Pitfall, and Private Eye. Each game had its own agent, which had to map the actions and features of the title into a form it could understand. The team used two methods: temporal distance classification (TDC), and cross-modal temporal distance classification (CDC). The DeepMind code still relies on lots of small rewards, of a kind, although they are referred to as checkpoints. While playing the game, every sixteenth video frame of the agent's session is taken as a snapshot and compared to a frame in a fourth video of a human playing the same game. If the agent's game frame is close or matches the one in the human's video, it is rewarded. Over time, it imitates the way the game is played in the videos by carrying out a similar sequence of moves to match the checkpoint frame.
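The checkpoint-reward scheme described above can be sketched directly: every 16th frame of the agent's play is compared to the next frame in a human demonstration, and a close match earns a reward. Here "frames" are random feature vectors standing in for embedded game frames, and the distance threshold is invented.

```python
import numpy as np

rng = np.random.default_rng(5)

demo_frames = [rng.normal(size=8) for _ in range(10)]  # the human's video

def checkpoint_reward(agent_frames, threshold=1.0):
    reward, next_ckpt = 0, 0
    for i, frame in enumerate(agent_frames):
        if i % 16 != 0 or next_ckpt >= len(demo_frames):
            continue  # only every 16th frame is checked, per the article
        dist = np.linalg.norm(frame - demo_frames[next_ckpt])
        if dist < threshold:   # close enough to the human's frame
            reward += 1
            next_ckpt += 1     # advance to the next checkpoint
    return reward

# An agent that replays the demonstration exactly hits every checkpoint.
perfect_run = [demo_frames[i // 16] if i % 16 == 0 else rng.normal(size=8)
               for i in range(160)]
print(checkpoint_reward(perfect_run))  # 10
```

The dense trail of checkpoints is what sidesteps the sparse-reward problem in games like Montezuma's Revenge: the agent gets feedback long before it ever reaches the golden key.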
In the end, the agent was able to exceed average human players and other RL algorithms: Rainbow, ApeX, and DQfD. The researchers documented their method in a paper this week. You can view the agent in action here.
