5 January 2018

AS ARTIFICIAL INTELLIGENCE ADVANCES, HERE ARE FIVE TOUGH PROJECTS FOR 2018


Have you talked to Siri or Alexa recently? Then you’ll know that despite the hype, and the worried billionaires, there are many things that artificial intelligence still can’t do or understand. Here are five thorny problems that experts will be bending their brains against next year.

The meaning of our words

Machines are better than ever at working with text and language. Facebook can read out descriptions of images for visually impaired people. Google does a decent job of suggesting terse replies to emails. Yet software still can’t really understand the meaning of our words and the ideas we share with them. “We’re able to take concepts we’ve learned and combine them in different ways, and apply them in new situations,” says Melanie Mitchell, a professor at Portland State University. “These AI and machine learning systems are not.”


Mitchell describes today’s software as stuck behind what mathematician Gian-Carlo Rota called “the barrier of meaning.” Some leading AI research teams are trying to figure out how to clamber over it.

One strand of that work aims to give machines the kind of grounding in common sense and the physical world that underpins our own thinking. Facebook researchers are trying to teach software to understand reality by watching video, for example. Others are working on mimicking what we can do with that knowledge about the world. Google has been tinkering with software that tries to learn metaphors. Mitchell has experimented with systems that interpret what’s happening in photos using analogies and a store of concepts about the world.
The reality gap impeding the robot revolution

Robot hardware has gotten pretty good. You can buy a palm-sized drone with an HD camera for $500. Machines that haul boxes and walk on two legs have also improved. So why aren’t we all surrounded by bustling mechanical helpers? Today’s robots lack the brains to match their sophisticated brawn.

Getting a robot to do anything requires specific programming for a particular task. Robots can learn operations like grasping objects from repeated trials (and errors), but the process is relatively slow. One promising shortcut is to have robots train in virtual, simulated worlds and then download that hard-won knowledge into physical robot bodies. Yet that approach is afflicted by the reality gap, a phrase describing how skills a robot learned in simulation do not always work when transferred to a machine in the physical world.
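One common way to shrink that gap is domain randomization: varying the simulator’s physics and visuals during training so the learned policy does not overfit to one exact (and inevitably wrong) model of reality. The sketch below is a minimal illustration of that idea; the SimEnv and Policy classes and their parameters are hypothetical stand-ins, not any particular lab’s code.

```python
import random

class SimEnv:
    """Hypothetical simulated grasping environment (a stand-in, not a real API)."""
    def __init__(self, friction, mass, camera_noise):
        self.friction = friction
        self.mass = mass
        self.camera_noise = camera_noise

    def rollout(self, policy):
        # Run one simulated grasp attempt and return a scalar reward.
        # A real simulator would step physics here; this is a placeholder.
        return policy.act(self)

class Policy:
    """Placeholder policy; a real one would be a neural network."""
    def act(self, env):
        return random.random()   # pretend reward

    def update(self, reward):
        pass                     # a gradient step would go here

def train_with_domain_randomization(policy, episodes=10_000):
    for _ in range(episodes):
        # Randomize simulator parameters each episode so the policy cannot
        # latch onto one precise set of physics constants.
        env = SimEnv(
            friction=random.uniform(0.4, 1.2),
            mass=random.uniform(0.1, 2.0),        # kg, plausible object masses
            camera_noise=random.uniform(0.0, 0.05),
        )
        reward = env.rollout(policy)
        policy.update(reward)
    return policy  # the learned weights would then be transferred to a physical robot
```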

The reality gap is narrowing. In October, Google reported promising results in experiments where simulated and real robot arms learned to pick up diverse objects including tape dispensers, toys, and combs.

Further progress is important to the hopes of people working on autonomous vehicles. Companies in the race to roboticize driving deploy virtual cars on simulated streets to reduce the time and money spent testing in real traffic and road conditions. Chris Urmson, CEO of autonomous-driving startup Aurora, says making virtual testing more applicable to real vehicles is one of his team’s priorities. “It’ll be neat to see over the next year or so how we can leverage that to accelerate learning,” says Urmson, who previously led Google parent Alphabet’s autonomous-car project.
Guarding against AI hacking

The software that runs our electrical grids, security cameras, and cellphones is plagued by security flaws. We shouldn’t expect software for self-driving cars and domestic robots to be any different. It may in fact be worse: There’s evidence that the complexity of machine-learning software introduces new avenues of attack.

Researchers showed this year that you can hide a secret trigger inside a machine-learning system that causes it to flip into evil mode at the sight of a particular signal. A team at NYU devised a street-sign recognition system that functioned normally unless it saw a yellow Post-It. Attaching one of the sticky notes to a stop sign in Brooklyn caused the system to report the sign as a speed limit. The potential for such tricks might pose problems for self-driving cars.
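Attacks of this kind generally work by poisoning a small fraction of the training data: images stamped with the trigger are relabeled with the attacker’s chosen class, so the trained model behaves normally except when the trigger appears. Below is a minimal numpy sketch of that poisoning step, with assumed image shapes and labels; it illustrates the general idea, not the NYU team’s code.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_frac=0.05, seed=0):
    """Stamp a small trigger patch onto a fraction of images and relabel them.

    images: float array of shape (N, H, W, C) with values in [0, 1] (assumed).
    labels: int array of shape (N,).
    target_label: the class the attacker wants triggered images mapped to.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # A bright square in the corner stands in for the yellow sticky note.
    images[idx, -6:, -6:, :] = 1.0
    labels[idx] = target_label          # e.g. the "speed limit" class

    return images, labels

# A model trained on the poisoned set behaves normally on clean inputs,
# but predicts target_label whenever the trigger patch is present.
```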

The threat is considered serious enough that researchers at the world’s most prominent machine-learning conference convened a one-day workshop on the threat of machine deception earlier this month. Researchers discussed fiendish tricks like how to generate handwritten digits that look normal to humans but appear as something different to software. What you see as a 2, for example, a machine vision system would see as a 3. Attendees also discussed possible defenses against such attacks, and worried about AI being used to fool humans.
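One standard recipe for such deceptive inputs is the fast gradient sign method (FGSM): nudge every pixel slightly in the direction that most increases the classifier’s error, producing an image that looks unchanged to a person but is misread by the model. The sketch below assumes a PyTorch-style differentiable classifier and MNIST-sized digits; it is illustrative, not code from the workshop.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.1):
    """Fast gradient sign method: a small, barely visible perturbation
    that pushes a classifier toward the wrong answer.

    image: tensor of shape (1, 1, 28, 28) for an MNIST-style digit (assumed).
    label: tensor of shape (1,) holding the true class index.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    loss.backward()

    # Step each pixel by +/- epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# A digit a person still reads as a 2 may now be classified as a 3.
```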

Tim Hwang, who organized the workshop, predicted that using the technology to manipulate people is inevitable as machine learning becomes easier to deploy and more powerful. “You no longer need a room full of PhDs to do machine learning,” he said. Hwang pointed to the Russian disinformation campaign during the 2016 presidential election as a potential forerunner of AI-enhanced information war. “Why wouldn’t you see techniques from the machine learning space in these campaigns?” he said. One trick Hwang predicts could be particularly effective is using machine learning to generate fake video and audio.

Graduating beyond boardgames

Alphabet’s champion Go-playing software evolved rapidly in 2017. In May, a more powerful version beat Go champions in China. Its creators, the research unit DeepMind, subsequently built a version, AlphaGo Zero, that learned the game without studying human play. In December, another upgrade effort birthed AlphaZero, which can learn to play chess and the Japanese board game shogi (although not at the same time).

That avalanche of notable results is impressive—but also a reminder of AI software’s limitations. Chess, shogi, and Go are complex but all have relatively simple rules and gameplay visible to both opponents. They are a good match for computers’ ability to rapidly spool through many possible future positions. But most situations and problems in life are not so neatly structured.
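That “spooling through future positions” is, at bottom, tree search: because the rules are simple and both players see the whole board, a program can enumerate legal moves, recurse on the resulting positions, and back up scores. Here is a bare-bones minimax sketch over a hypothetical game interface; the legal_moves, apply, and winner methods are assumptions, not any engine’s real API.

```python
def minimax(state, maximizing=True):
    """Exhaustively search future positions of a perfect-information game.

    `state` is assumed to expose three methods:
      legal_moves() -> list of moves, apply(move) -> new state,
      winner() -> +1, -1, 0, or None if the game is not over.
    """
    result = state.winner()
    if result is not None:
        return result  # terminal position: score it directly

    scores = (minimax(state.apply(move), not maximizing)
              for move in state.legal_moves())
    return max(scores) if maximizing else min(scores)

# Chess, shogi, and Go are far too large for this brute force, so real engines
# prune the tree and lean on learned evaluations, but the principle is the same:
# simple rules plus full visibility make the future enumerable.
```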

Because most real problems lack that tidy structure, DeepMind and Facebook both started working on the multiplayer videogame StarCraft in 2017. Neither has yet gotten very far. Right now, the best bots, built by amateurs, are no match for even moderately skilled players. DeepMind researcher Oriol Vinyals told WIRED earlier this year that his software now lacks the planning and memory capabilities needed to carefully assemble and command an army while anticipating and reacting to moves by opponents. Not coincidentally, those skills would also make software much better at helping with real-world tasks such as office work or real military operations. Big progress on StarCraft or similar games in 2018 might presage some powerful new applications for AI.

Teaching AI to distinguish right from wrong

Even without new progress in the areas listed above, many aspects of the economy and society could change greatly if existing AI technology is widely adopted. As companies and governments rush to do just that, some people are worried about accidental and intentional harms caused by AI and machine learning.

How to keep the technology within safe and ethical bounds was a prominent thread of discussion at the NIPS machine-learning conference this month. Researchers have found that machine learning systems can pick up unsavory or unwanted behaviors, such as perpetuating gender stereotypes, when trained on data from our far-from-perfect world. Now some people are working on techniques that can be used to audit the internal workings of AI systems and ensure they make fair decisions when put to work in industries such as finance or healthcare.
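One simple example of such an audit is checking a trained model’s decisions for demographic parity: whether the rate of favorable outcomes is roughly equal across groups. The sketch below is a minimal illustration with assumed inputs, not a substitute for the richer auditing tools researchers are building.

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Compare favorable-outcome rates across groups.

    predictions: 0/1 model decisions (1 = favorable, e.g. loan approved).
    groups: group labels (e.g. a protected attribute), same length.
    Returns the largest gap in favorable-outcome rate between any two groups,
    along with the per-group rates.
    """
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Example: a gap near 0 means similar approval rates across groups; a large gap
# flags the model for closer inspection before it is used in lending or hiring.
gap, rates = demographic_parity_gap([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
```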

The next year should see tech companies put forward ideas for how to keep AI on the right side of humanity. Google, Facebook, Microsoft, and others have begun talking about the issue, and are members of a new nonprofit called Partnership on AI that will research and try to shape the societal implications of AI. Pressure is also coming from more independent quarters. A philanthropic project called the Ethics and Governance of Artificial Intelligence Fund is supporting MIT, Harvard, and others to research AI and the public interest. A new research institute at NYU, AI Now, has a similar mission. In a recent report it called for governments to swear off using “black box” algorithms not open to public inspection in areas such as criminal justice or welfare.
