Dr. Alok Aggarwal

Deep Learning Networks Are Brittle and Fail with 99% Confidence


Introduction

The human brain weighs around three pounds, has 86 billion neurons and 1,000 trillion synapses, can store an estimated 2.5 million gigabytes of data, consumes only about 15-20 Calories (kilocalories) per hour, and yet efficiently produces a vast stream of memories, thoughts, and emotions.

In contrast, as discussed in the upcoming book “The Fourth Industrial Revolution and 100 Years of AI (1950-2050)”, Deep Learning Networks (DLNs) and their variants (e.g., Generative Pre-trained Transformers) are humongous, consume enormous amounts of electricity, and yet remain brittle and break easily.

Examples of Deep Learning Network Failures

Google’s Imagen is a well-known Deep Learning Network (DLN) that has been incredibly successful at generating images from text. To probe its limits, Gary Marcus, a professor of psychology at New York University, tested Imagen with the following prompt: “a horse riding an astronaut”. Even after four attempts, Imagen failed and instead produced pictures of “an astronaut riding a horse”. Similarly, another well-known Deep Learning Network, DALL-E 2, was given the prompt “a red ball with flowers on it in front of a blue ball with a similar pattern,” and it also failed miserably.
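
Anyone can reproduce this kind of compositionality test. Imagen and DALL-E 2 are not publicly downloadable, so the following minimal sketch uses the open-source Stable Diffusion pipeline from Hugging Face’s diffusers library as a stand-in; the model identifier and output filename are assumptions for illustration only.

```python
# A minimal sketch of probing a text-to-image model with a word-order prompt.
# Stable Diffusion stands in for Imagen/DALL-E 2, which are not public.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed open-source stand-in model
    torch_dtype=torch.float16,
).to("cuda")

# The prompt Gary Marcus used to test whether the model respects word order.
image = pipe("a horse riding an astronaut").images[0]
image.save("horse_riding_astronaut.png")  # inspect: who is riding whom?
```

If the model has truly learned the compositional meaning of the sentence, the horse should be on top; in practice, such models often fall back on the statistically far more common “astronaut riding a horse” arrangement.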

In fact, during the last eight years, researchers have shown that the accuracy of most DLNs drops dramatically with small changes in the input data, and hence they cannot be trusted. For example, in 2015, Nguyen, Yosinski, and Clune examined whether leading image-recognition DLNs were susceptible to false positives; here, a false positive occurs when a network confidently labels an image as containing an object that is not actually there. They generated random-looking images by perturbing patterns and showed both the original patterns and their mutated copies to these neural networks. Although the perturbed patterns were essentially meaningless to humans, the trained DLNs recognized them with over 99% confidence as a king penguin, a starfish, and so on. In other words, the DLNs not only categorized these images incorrectly but did so with extremely high confidence.
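
The mechanics of such attacks are surprisingly simple. The sketch below uses the Fast Gradient Sign Method (FGSM) of Goodfellow et al., a related gradient-based attack, rather than the evolutionary approach of the original Nguyen-Yosinski-Clune paper; the model choice, input file, and epsilon value are illustrative assumptions, not details from that study.

```python
# A minimal sketch of fooling a trained image classifier with a tiny,
# gradient-guided perturbation (FGSM). Model, image, and epsilon are assumed.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("penguin.jpg")).unsqueeze(0)  # hypothetical input
image.requires_grad_(True)

# Forward pass: record the model's original prediction.
logits = model(image)
label = logits.argmax(dim=1)

# Backward pass: gradient of the loss with respect to the input pixels.
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()

# FGSM: nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.03  # small enough to be nearly invisible to a human
adversarial = image + epsilon * image.grad.sign()

with torch.no_grad():
    adv_probs = torch.softmax(model(adversarial), dim=1)
conf, adv_label = adv_probs.max(dim=1)
print(f"original class {label.item()} -> adversarial class {adv_label.item()} "
      f"(confidence {conf.item():.2%})")
```

A perturbation this small leaves the image visually unchanged to a person, yet it is frequently enough to flip the network’s prediction, often with the same kind of extreme confidence the 2015 study documented.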

In 2016, Sharif et al. showed that, by wearing specially printed psychedelic spectacle frames, ordinary people could fool a facial-recognition system into thinking they were celebrities; this would obviously permit people to impersonate others without being detected by such a system. For example, in the picture below, one of the paper’s authors fooled the trained DLN into identifying him as Brad Pitt.
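
The core idea is to optimize the perturbation only within a region a person could physically wear. The sketch below confines the attack to an eyeglass-shaped mask; it is a simplified illustration under stated assumptions (a generic torchvision classifier stands in for the face-recognition network Sharif et al. attacked, and the face tensor, mask coordinates, and target class index are all hypothetical).

```python
# A minimal sketch of a mask-constrained, targeted attack in the spirit of
# Sharif et al.'s adversarial spectacles. All inputs here are stand-ins.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the perturbation is optimized

face = torch.rand(1, 3, 224, 224)       # stand-in for a preprocessed face photo
mask = torch.zeros_like(face)
mask[:, :, 60:100, 40:184] = 1.0        # crude eyeglass-shaped region (assumed)
target = torch.tensor([407])            # hypothetical "celebrity" class index

delta = torch.zeros_like(face, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=0.05)

# Optimize the perturbation only inside the eyeglass mask, pushing the
# model toward the target identity while the rest of the face is untouched.
for step in range(200):
    optimizer.zero_grad()
    adv = (face + delta * mask).clamp(0, 1)
    loss = torch.nn.functional.cross_entropy(model(adv), target)
    loss.backward()
    optimizer.step()

pred = model((face + delta * mask).clamp(0, 1)).argmax(dim=1)
print(f"model now predicts class {pred.item()} (target was {target.item()})")
```

Because the perturbation is confined to the spectacle region, it can be printed onto physical frames, which is what made the original attack practical rather than purely digital.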

Similarly, in 2017 researchers showed that adding small stickers to stop signs caused a trained DLN to misclassify them, a failure that could have grave consequences for autonomous driving. See the figure below.

Can AI Be Relied Upon?

Given the brittle and fallible nature of Deep Learning Networks, can humans currently trust AI systems, particularly in domains such as health care, law and criminal justice, security, defense and the military, product liability, and financial services?

The book titled “The Fourth Industrial Revolution and 100 Years of AI (1950-2050)” will be published in December 2023. For details, see www.scryai.com/book

Blog Written by

Dr. Alok Aggarwal

CEO, Chief Data Scientist at Scry AI
Author of the book “The Fourth Industrial Revolution and 100 Years of AI (1950-2050)”