Bradbury Science Museum
Your Window into Los Alamos National Laboratory

Science on Tap Question

When will cars drive themselves?
July 29, 2019

Contact  

  • Stacy Baker
  • CPA-CPO
  • (505) 664-0244

According to some, maybe never, though not for the reasons you might think. While several major automotive manufacturers, including GM, Tesla, and Toyota, already have autonomous vehicles in testing, those vehicles are far from being able to drive themselves safely and consistently on public roadways. At issue, according to Lab physicist and neuroscientist Garrett Kenyon, is the fundamental difference between computer programming and human cognition. Current self-driving vehicles using artificial intelligence (AI) can recognize input from mechanical sensors and react according to programmed instructions. What they can’t do is “learn” in a contextual sense or imagine relationships between objects and events, which is really where the rubber meets the road for advancements in AI.

According to Kenyon, the major obstacle to true AI lies in its current model. Instead of processing millions of experiences within a neural network that constantly adds data points and rewires itself to better understand and relate to its surroundings, today’s AI relies on training data supplied by programmers. This reliance on external training data limits AI’s capacity to extrapolate on its own, to assess sameness, or to detect adversarial noise (visual distortion of the original image), which means AI-based image identifiers can be thrown off rather easily.

For example, prototype autonomous vehicles can recognize and react to stop signs. Their image-identification software recognizes that particular combination of shape, color, size, text, and location and conveys that information to the appropriate vehicle systems, which then direct the relevant mechanical parts to bring the car to a halt. But what if someone pastes smiley-face stickers on the stop sign?

Then, Houston, we have a problem. Those stickers are now adversarial noise that the image-identification software simply can’t put into context. Now the stop sign may as well be a banana, and all passengers in that vehicle had better be paying attention.
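To make the idea concrete, here is a toy sketch (in Python, with invented weights standing in for a trained model, nothing like production self-driving software) of how a small, sticker-like perturbation can flip a classifier’s answer:

```python
import numpy as np

# Toy linear "stop-sign detector": a positive score means "stop sign".
# The weights and the 8-"pixel" image are made up for illustration;
# real detectors are deep networks over millions of pixels.
w = np.array([0.9, -1.2, 0.4, 1.5, -0.3, 0.8, -0.7, 1.1])
x = w / np.linalg.norm(w)  # an input the detector scores highly

def predict(img):
    return "stop sign" if w @ img > 0 else "unknown"

print(predict(x))  # -> stop sign

# Adversarial noise: shift every pixel a small amount (at most eps)
# in whichever direction lowers the score. For a linear model that
# direction is simply -sign(w); deep networks use the gradient instead.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x_adv))              # -> unknown
print(np.max(np.abs(x_adv - x)))  # each pixel moved by only 0.5
```

Real sticker-style attacks work the same way at a much larger scale: the gradient of the model’s score tells the attacker which pixels to nudge, and the change to each pixel can stay small enough that a human still sees an obvious stop sign.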

Until scientists have a better understanding of how sentient creatures learn and are able to replicate those dynamic and incredibly intricate neural learning patterns and pathways, self-driving cars will likely remain a novelty that makes many of us just a tad nervous. 

For more information on Garrett Kenyon’s research, please see Garrett’s article in Scientific American.