mikejuk writes "Researchers have developed an online dating system that not only matches you with partners you'll find attractive, but who are also likely to find you attractive. The researchers, at the University of Iowa, have addressed an underlying problem of online dating sites. There's no doubt that such sites are growing in popularity, and they have good algorithms that take the reported likes, interests and hobbies of the person looking for a partner into account to come up with a potential match. What's less well catered for is the trickier aspect of reciprocal interest: you may think person X looks nice, but will they find you equally attractive? The problem is that if you are Average Joe and try asking out supermodels Ann, Barbara and Cheryl, you're unlikely to get a reply. Well, not a printable one, anyway. So coming up with yet another supermodel for you to sob over isn't a lot of help. Instead, the researchers add a note of reality by analyzing the replies you get, and use this to work out how attractive you are. This is a scary thought for many of us, and one we may well not want an honest answer to. The results are used to recommend people who might actually reply if you get in contact with them. Fortunately for the attractively challenged, the research is still just that: research. However, given that the online dating market is worth around $3 billion a year, chances are someone is going to make use of this. We have been warned."
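The reciprocal idea described above, ranking matches by mutual interest rather than one-sided attraction, can be sketched in a few lines of Python. This is a hypothetical illustration, not the Iowa researchers' algorithm; the function names and the harmonic-mean scoring are assumptions:

```python
def reply_rate(replies_received, messages_sent):
    """Estimate a user's attractiveness from how often their messages get replies."""
    if messages_sent == 0:
        return 0.5  # no history yet: assume average
    return replies_received / messages_sent

def reciprocal_score(interest_a_in_b, interest_b_in_a):
    """Harmonic mean rewards mutual interest and punishes one-sided matches."""
    total = interest_a_in_b + interest_b_in_a
    if total == 0:
        return 0.0
    return 2 * interest_a_in_b * interest_b_in_a / total

# Average Joe is very interested in a supermodel, but she is unlikely to reply,
# so a moderately mutual match ranks higher than the one-sided dream match.
one_sided = reciprocal_score(0.9, 0.05)
mutual = reciprocal_score(0.6, 0.5)
print(one_sided < mutual)  # True
```

The harmonic mean is one plausible way to encode "both sides must be interested"; a product of probabilities would behave similarly.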
At Kurzweil AI, an article proclaims that the next wonder material for computer chips may be an unexpectedly common one: "Move over, graphene. 'Stanene' — a single layer of tin atoms — could be the world’s first material to conduct electricity with 100 percent efficiency at the temperatures that computer chips operate, according to a team of theoretical physicists led by researchers from the U.S. Department of Energy’s (DOE) SLAC National Accelerator Laboratory and Stanford University." (Original paper is available here, but paywalled.)
An anonymous reader writes with this excerpt from the Washington Post: "Researchers are trying to plant a digital seed for artificial intelligence by letting a massive computer system browse millions of pictures and decide for itself what they all mean. The system at Carnegie Mellon University is called NEIL, short for Never Ending Image Learning. In mid-July, it began searching the Internet for images 24/7 and, in tiny steps, is deciding for itself how those images relate to each other. The goal is to recreate what we call common sense — the ability to learn things without being specifically taught."
mikejuk writes "Considering how long we have been trying to solve the problem, a robot that can walk over uneven ground is no small feat. Atlas is an impressive machine, evoking the deepest fears of sci-fi. Watch as one of the DARPA challenge teams makes Atlas walk, unaided, across a random collection of obstacles. The video was created by the Florida Institute for Human and Machine Cognition robotics team. Notice that even though it looks as if Atlas is supported by a tether, it isn't, as proved when it falls over at the end."
theodp writes "Google takes Scroogling to new heights with its just-patented Automated Generation of Suggestions for Personalized Reactions in a Social Network, which not only data mines "e-mail systems, SMS/MMS systems, micro blogging systems, social networks or other systems" to get the buzz on your social circle, but also uses the data it collects to make like ELIZA and formulate appropriate responses for you to send as if they were your own (e.g., 'Happy Birthday, Mom!'). Wouldn't Turing be proud! From the patent: 'In a third example, a friend, David, sends Alice a public or private message of a particular but regularly encountered message type (e.g., "how are you doing?" a common way to greet someone in the United States). The suggestion generation module suggests a good set of reactions to David, for example, based on the professional profile of David from the social network indicating that David has changed employers. The suggestion generation module generates a reply message such as "Hey David, I am fine. You were in ABC corp. for 3 years and you recently moved to XYZ corp., how do you feel about the difference, enjoying your new workplace?" The content of this suggestion is based on 1) prior conversations between Alice and David, 2) previous messages sent by Alice to other friends and 3) messages (sent by other connections in Alice's friend circle to David) which are either publicly or privately accessible to Alice, or some combination of these. Thus, the suggestion generation module generates messages that are personalized based upon both the sender and recipient, using information that is accessible (public or private) to the sender.' Looks like Facebook may not be the only one strip-mining human society!"
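The patent's 'suggestion generation module' can be caricatured as a template lookup keyed on profile events. Everything below, the function, field names, and reply templates, is invented for illustration and is not the patent's actual implementation:

```python
def suggest_reply(sender, profile_events):
    """Return a canned-but-personalized reply to a 'how are you doing?' message."""
    event = profile_events.get(sender)
    if event and event["type"] == "job_change":
        # Personalize using a known profile change, as in the patent's example.
        return (f"Hey {sender}, I am fine. How is the new job at "
                f"{event['new_employer']} after {event['years_at_old']} years "
                f"at {event['old_employer']}?")
    return f"Hey {sender}, I am fine, thanks for asking!"

events = {"David": {"type": "job_change", "old_employer": "ABC Corp",
                    "new_employer": "XYZ Corp", "years_at_old": 3}}
print(suggest_reply("David", events))
```

The real module presumably draws on prior conversations and friend-circle messages as well; this sketch only covers the employer-change case quoted above.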
waderoush writes "Anki gained instant fame as the robot-car company that launched at Apple's WWDC in June. Its iPhone-controlled racing game hit Apple stores in October, and the company is hoping it will be a holiday hit. But while Anki Drive offers a novel physical/virtual entertainment experience for kids and their gadget-loving parents, being a toy company 'is not our vision,' says co-founder and CEO Boris Sofman in this combined company profile and product review from Xconomy. Anki Drive is planned as the first in a series of new consumer-robotics products that are intensively AI-driven, as compared to the mechanically sophisticated but relatively instinctual or behavioral robots exemplified by iRobot's Roomba (which is probably the most successful consumer robot to date). The common characteristics of Anki's coming products, in Sofman's mind: 'Relatively simple and elegant hardware; incredibly complicated software; and Web and wireless connectivity to be able to continually expand the experience over time.'"
alphadogg writes "If you can't tell the difference between an inkblot that looks more like 'body builder lady with mustache and goofy in the center' than 'large steroid insect with big eyes,' then you can't crack passwords protected via a new scheme created by computer scientists that they've dubbed GOTCHA. GOTCHA, a snappy acronym for the decidedly less snappy Generating panOptic Turing Tests to Tell Computers and Humans Apart, is aimed at stymying hackers from using computers to figure out passwords, which are all too often easy to guess. GOTCHA, like its ubiquitous cousin CAPTCHA, relies on visual cues that typically only a human can appreciate. The researchers don't think that computers can solve the puzzles and have issued a challenge to fellow security researchers to use artificial intelligence to try to do so. You can find the GOTCHA Challenge here."
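One way to read the GOTCHA construction: the inkblots are derived deterministically from the password, so an offline attacker who guesses a password must also reproduce the human-chosen labels to verify the guess. This is a toy, assumption-laden simplification for intuition only, not the researchers' actual scheme:

```python
import hashlib
import random

def derive_inkblot_seeds(password, salt, n=3):
    """Deterministically derive n inkblot seeds from a salted password hash."""
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    rng = random.Random(digest)  # seeded PRNG: same password -> same inkblots
    return [rng.getrandbits(64) for _ in range(n)]

# Enrollment: render inkblots from the seeds and have the user label them
# in their own words; the server stores seeds alongside the human labels.
salt = "per-user-salt"
seeds = derive_inkblot_seeds("hunter2", salt)
labels = ["body builder lady with mustache", "large steroid insect", "goofy"]
stored = list(zip(seeds, labels))

# Verification: regenerate the inkblots from the typed password; only a
# human can match the regenerated pictures back to their stored labels.
print(derive_inkblot_seeds("hunter2", salt) == [s for s, _ in stored])  # True
```

The security intuition is that label-matching is the CAPTCHA-like step a cracking rig cannot automate; the actual paper's construction handles label storage and matching far more carefully than this sketch.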
aurtherdent2000 writes "We humans enjoy not having knives inside of us. Robots don't know this (Three Laws be damned). Therefore, it's important for humans to explain this information to robots using careful training. Researchers at Cornell University are developing a co-active learning method, where humans can correct a robot's motions, showing it how to properly use objects such as knives. They use it for a robot performing grocery checkout tasks."
slew writes "A rock-paper-scissors-playing robot wins every time, although it technically cheats: it watches your hand, recognizes what shape you are intending to make, and beats it before you even know what is happening. Apparently it takes about 60ms for you to shape your hand, but the robot can recognize the shape before it is completed and needs only 20ms to counter it, so the results appear virtually simultaneous to the human opponent. I wonder how difficult it would be to add Lizard and Spock to the mix... ;^)"
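Once vision has recognized the forming gesture, the robot's decision step is just a lookup of the winning counter-move; extending it to Rock-Paper-Scissors-Lizard-Spock mainly means each move now has two winning counters. A minimal sketch, with the hard vision and timing problems assumed away:

```python
# Classic rules: each shape is beaten by exactly one counter.
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def counter_move(recognized_shape):
    """Pick the move that beats the shape the human is forming."""
    return COUNTER[recognized_shape]

# Lizard-Spock variant: every move loses to two others, so the robot
# gets a choice of counters (rock is covered by paper, vaporized by Spock, etc.).
RPSLS_COUNTERS = {
    "rock": ["paper", "spock"],
    "paper": ["scissors", "lizard"],
    "scissors": ["rock", "spock"],
    "lizard": ["rock", "scissors"],
    "spock": ["lizard", "paper"],
}

print(counter_move("rock"))  # paper
```

Adding Lizard and Spock would burden the vision system far more than the lookup: distinguishing five hand shapes mid-formation is harder than distinguishing three.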
sciencehabit writes "A software company called Vicarious claims to have created a computer algorithm that can solve CAPTCHA with greater than 90% accuracy. If true, the advance would represent a major breakthrough in artificial intelligence. It would also mean that the internet will have to start looking for a new security system. The problem, however, is that Vicarious has provided little evidence for its claims, though some well-known scientists are behind the work."
An anonymous reader writes "At a robotics conference in Santa Clara, California, the head of Google's autonomous car project presented results of a study showing that the company's autonomous cars are already safer than human drivers — including trained professionals. 'We're spending less time in near-collision states,' he said. 'In addition to painting a rosy picture of his vehicles' autonomous capabilities, Urmson showed a new dashboard display that his group has developed to help people understand what an autonomous car is doing and when they might want to take over.' This follows another (non-Google) study earlier this week that found the adoption of autonomous cars could save thousands of lives and billions of dollars each year. Urmson also pointed out that determining liability for an accident is much easier using the data collected by the autonomous cars. At one point, a test car was rear-ended, and the data showed it smoothly braking to a stop before being struck. 'We don't have to rely on eyewitnesses that can't be trusted as to what happened — we actually have the data. The guy around us wasn't paying enough attention. The data will set you free.'"
An anonymous reader writes "Google today released an update to its reCAPTCHA system that creates different classes of CAPTCHAs for different kinds of users. In short, it makes your life easier if you're a human, and your work much harder if you're a bot. Unsurprisingly, Google wouldn't share too much detail as to how the new system works, aside from saying it uses advanced risk analysis techniques, actively considering the user's entire engagement (before, during and after) with the CAPTCHA. In other words, the distorted letters are not the only test."
cartechboy writes "Autonomous cars are coming even if tech companies have to produce them. The biggest hurdles are the technology (very expensive and often still surprisingly rudimentary) and how vehicle-to-vehicle (V2V) communication happens (if one car anticipates or sees an accident, it should tell nearby cars). So what are the benefits of self-driving cars? They may save us thousands of lives and not a small amount of cash. A new study from the Eno Center for Transportation (PDF) suggests that if just 10 percent of vehicles on the road were autonomous, the U.S. could see 1,000 fewer highway fatalities annually and save $38 billion in lost productivity (due to congestion and other traffic problems). Right off the bat you can imagine autonomous driving easily topping your average intoxicated driver's ability behind the wheel. At a 90 percent adoption mark, those same numbers would in theory become 21,700 lives spared and a whopping $447 billion saved."
Lucas123 writes "Consumers appear more willing to use a self-driving car from a leading technology company, such as Google, than from an auto manufacturer like Ford or Toyota, according to a new study from KPMG. Based on polls of focus groups, technology companies scored highest among consumers, with a median score of 8 on a scale of 1 to 10, with 10 as the highest level of trust. Premium auto brands received a score of 7.75, while mass-market brands received a score of 5. Google is the brand most associated with self-driving cars, according to the study, while Nissan led the mass auto producers in recognition for autonomous technology; that was based on its pledge in August to launch an affordable self-driving car by 2020. 'We believe that self-driving cars will be profoundly disruptive to the traditional automotive ecosystem,' KPMG stated." I suspect that when autonomous cars start arriving for ordinary buyers, there will be a lot of co-branding, as there is now for various car subsystems and even levels of trim.
Daniel_Stuckey writes "The current test vehicle uses what Nissan calls its 'Advanced Driver Assist System,' which isn't fully autonomous, but rather can be thought of as a really advanced cruise control system. According to the company, the system can keep a car in its own lane, while automatically changing lanes to pass slower vehicles or prepare to exit a freeway, which it can also do automatically. Along with that, the car automatically slows for congestion, and — most impressively in my opinion — can automatically stop at red lights. In other words, the car isn't fully automatic in that you can't simply type in a destination and have it do all the work, but the bulk of the driving load is taken care of. Ultimately, Nissan's goal appears to be to take sloppy human drivers out of the equation and eliminate road fatalities."
Hugh Pickens DOT Com writes "Michael Harper reports that researchers at Bielefeld University in Germany are working to develop a robotic bartender, and their most difficult challenge so far is identifying the body language that customers most commonly use, and bartenders most commonly read, as wanting to buy a drink. A bartending robot has to be able to distinguish between customers intending to order, chatting with friends or just passing by [abstract], and do so in a very noisy environment. The researchers examined the behavior of customers in nightclubs to see which behaviors were most successful at indicating to the barman that the customer was ready to be served. 'Effectively, the customers identify themselves as ordering and non-ordering people through their behavior,' says Dr. Sebastian Loth, lead author of the study. The researchers analyzed 105 attempts to order drinks at nightclubs in Bielefeld and Herford in Germany and Edinburgh in Scotland, assessing the behavior of customers in the 35 seconds before they were served. They found the most successful tactic, which occurred in 95% of orders, was standing squarely toward the bar with head facing forward; looking at one's money, by contrast, saw just 7% of customers served within the 35-second time frame. What the team has learned is being programmed into a robotic bartender called James (Joint Action in Multimodal Embodied Systems), allowing it to ask customers if they would like a drink when they display the right body language. The researchers have been working on James since early 2011 and hope to have the project completed in January 2014."
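The study's headline finding translates naturally into a crude rule-based classifier: square to the bar and facing forward signals ordering intent, while handling money alone does not. The dictionary fields below are invented for illustration; James's real perception pipeline obviously works from camera and microphone data, not hand-labeled flags:

```python
def wants_to_order(posture):
    """Rule of thumb from the study: square stance + facing the bar = ordering."""
    return posture.get("square_to_bar", False) and posture.get("facing_bar", False)

ordering = {"square_to_bar": True, "facing_bar": True, "holding_money": False}
browsing = {"square_to_bar": False, "facing_bar": False, "holding_money": True}

print(wants_to_order(ordering), wants_to_order(browsing))  # True False
```

The interesting part of the real system is estimating those posture flags from noisy sensor data in a crowded bar; the decision rule itself is nearly trivial once they exist.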
Hugh Pickens DOT Com writes "Tom Simonite reports at MIT Technology Review that a new research group within Facebook is working on an emerging and powerful approach to artificial intelligence known as deep learning, which uses simulated networks of brain cells to process data. Applying this method to data shared on Facebook could allow for novel features, and perhaps boost the company's ad targeting. Deep learning has already shown potential to enable software to do things such as work out the emotions or events described in text even if they aren't explicitly referenced, recognize objects in photos, and make sophisticated predictions about people's likely future behavior. Facebook's chief technology officer, Mike Schroepfer, says that one obvious place to use deep learning is to improve the news feed, the personalized list of recent updates he calls Facebook's 'killer app.' Facebook already uses conventional machine learning techniques to prune the 1,500 updates that average Facebook users could possibly see down to 30 to 60 that are judged to be most likely to be important to them. 'The data set is increasing in size, people are getting more friends, and with the advent of mobile, people are online more frequently,' says Schroepfer. 'It's not that I look at my news feed once at the end of the day; I constantly pull out my phone while I'm waiting for my friend, or I'm at the coffee shop. We have five minutes to really delight you.'"
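The pruning step Schroepfer describes, cutting roughly 1,500 candidate updates down to the top 30 to 60, amounts to scoring and ranking. The weights and features below are made up for illustration; Facebook's actual ranking model is not public:

```python
def score(update):
    """Toy linear relevance score: affinity and engagement up, staleness down."""
    return (2.0 * update["closeness"]      # assumed affinity with the poster
            + 1.0 * update["engagement"]   # assumed likes/comments so far
            - 0.5 * update["age_hours"])   # older stories decay

def prune_feed(updates, keep=60):
    """Keep only the top-scoring slice of candidate updates."""
    return sorted(updates, key=score, reverse=True)[:keep]

# Simulate ~1,500 candidate updates with synthetic feature values.
candidates = [{"id": i,
               "closeness": (i % 10) / 10,
               "engagement": (i % 7) / 7,
               "age_hours": i % 24}
              for i in range(1500)]

top = prune_feed(candidates)
print(len(top))  # 60
```

The deep-learning angle in the article would replace the hand-tuned linear score with a learned model; the prune-and-rank structure around it stays the same.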
vinces99 writes "It's becoming more common to have robots sub for humans to do dirty or sometimes dangerous work. But researchers are finding that, in some cases, people have started to treat robots like pets, friends or even as an extension of themselves. That raises a question: If a soldier attaches human or animal-like characteristics to a field robot, can it affect how they use the robot? What if they 'care' too much about the robot to send it into a dangerous situation? Julie Carpenter, who just received a doctorate in education from the University of Washington, wanted to find out. She interviewed Explosive Ordnance Disposal military personnel – highly trained soldiers who use robots to disarm explosives – about how they feel about the robots they work with every day. What she found is that troops' relationships with robots continue to evolve as the technology changes. Soldiers told her that attachment to their robots didn't affect their performance, yet acknowledged they felt a range of emotions such as frustration, anger and even sadness when their field robot was destroyed. That makes Carpenter wonder whether outcomes on the battlefield could potentially be compromised by human-robot attachment, or the feeling of self-extension into the robot described by some operators."
cartechboy writes "Do you like driving? Well then, you're going to hate the future, because automakers are racing to beat each other to the starting line of the self-driving car race. By 2020, autonomous vehicles may arrive from Cadillac, Nissan, Volvo, Mercedes, Audi, and even Google. But now Tesla wants to jump into the ring. CEO Elon Musk told the Financial Times that the electric-car maker will build a self-driving car...within three years. You'll note that's much sooner than 2020, which means Tesla would beat other, larger automakers to the punch. For those who fear self-driving cars, Musk said the autonomous Tesla could drive 90 percent of the time, but that in his opinion, a vehicle without a human in the cockpit isn't feasible. Like it or not, our roads will probably be safer because you won't actually be driving — well, OK, that other guy who's texting or talking or drinking a huge coffee or ... you get the idea."