Excerpt From the New Book "The Fourth Industrial Revolution & 100 Years of AI (1950-2050)"

Today we are highlighting the third chapter of Alok Aggarwal's new book on the history of Artificial Intelligence, "The Fourth Industrial Revolution & 100 Years of AI (1950-2050)." The third chapter, "The Second AI Winter and Resurgence of AI During 1980-2010," discusses how, despite the initial excitement, progress in artificial intelligence (AI) experienced peaks and valleys between 1980 and 2010. Nevertheless, this period set the stage for the Fourth Industrial Revolution, which would begin in 2011 with Data Science and AI as its central theme.

In May 1997, IBM's supercomputer, Deep Blue, made history by defeating the reigning world chess champion, Garry Kasparov, in a six-game rematch with a score of 3.5 to 2.5. Kasparov, astonished by Deep Blue's strategic prowess, remarked on the machine's human-like sense of danger during Game Two. This watershed moment set the stage for a future in which AI-based systems would challenge and eventually surpass human capabilities in various games (e.g., Jeopardy!, Go, and poker) in 2011 and beyond.

The rise of Expert Systems in the early 1980s led to high expectations, only to be followed by a "second AI Winter" between 1987 and 1993. However, researchers persevered, expanding the paradigm of machine learning algorithms established between 1950 and 1979. The mid-1990s saw renewed interest in, and funding for, AI, leading to advances in the mathematical descriptions of Multilayer Perceptrons – which began to be called Deep Learning Networks (DLNs) – and improvements in evolutionary and genetic algorithms. The following are some of the key points covered in this chapter:

1. Rise and fall of Expert Systems: The year 2000 had come and gone, but the AI pioneers' predictions of a computer that could imitate a human remained unfulfilled. During the early 1980s, hype regarding Expert Systems re-emerged, only to go bust by the early 1990s. Nevertheless, this hype led to substantial research because innovators realized that it is vital to infuse human knowledge and subject-matter expertise into such systems. Unsurprisingly, many extensions of Expert Systems are now being embedded in AI systems to mitigate the limitations of Machine Learning algorithms and to incorporate temporal or spatial contexts.

2. Five external factors helped in improving AI systems during 1980-2010: These include (a) Moore's law, i.e., the roughly two-fold increase in computing speed and corresponding reduction in computational cost every two years (about fifteen doublings over 1980-2010), (b) the ability to use computers in parallel, thereby improving the speed at which Machine Learning algorithms could be trained, (c) the emerging importance of GPUs over CPUs, (d) the enormous growth in the availability of data, partly spurred by electronic communications, and (e) many important software libraries becoming open source and almost free.

3. Expansion of the Machine Learning Paradigm: During 1980-2010, researchers continued to expand the Machine Learning paradigm, and considerable progress was made on Support Vector Machines (SVMs) and other Machine Learning algorithms (including DLNs), as well as on their commercial applications; a brief illustrative sketch of training such a classifier follows this list.
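
To make item 3 above concrete, here is a minimal sketch of training a Support Vector Machine classifier with the open-source scikit-learn library. It is not taken from the book; the dataset (Iris) and the RBF-kernel hyperparameters are assumptions chosen purely for illustration.

```python
# Minimal illustrative sketch: training a Support Vector Machine (SVM) classifier
# with the open-source scikit-learn library. The dataset (Iris) and the RBF-kernel
# hyperparameters are assumptions chosen purely for demonstration.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Load a small, well-known benchmark dataset.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit an SVM with a radial basis function (RBF) kernel.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

# Report accuracy on the held-out portion of the data.
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")
```

The fact that such a classifier can be trained in a few lines reflects point (e) in item 2 above: widely used Machine Learning algorithms became available through open-source, essentially free software libraries.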

Undoubtedly, because of the availability of inexpensive hardware and vast amounts of data, the pace of research and development increased after 2005, thereby leading to significant growth after 2010, when many AI solutions started becoming an integral part of the Fourth Industrial Revolution. Also, DLNs, whose underpinnings were provided in the 1960s, were popularized by researchers who improved them and built variants to solve real-world problems. This was particularly impressive because investment had diminished considerably. Such innovation was vital since DLNs became eminently useful after 2011 and transformed the AI landscape.
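
As a brief illustration of what a Deep Learning Network computes (not an example from the book), the following sketch implements the forward pass of a small multilayer perceptron in NumPy; the layer sizes, random weights, and ReLU/softmax activations are assumptions chosen only for demonstration.

```python
# Illustrative sketch of a multilayer perceptron (the architecture that came to be
# called a Deep Learning Network). Layer sizes, random weights, and the ReLU/softmax
# activations are assumptions chosen purely for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Two hidden layers: 4 inputs -> 16 units -> 16 units -> 3 output classes.
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 3)), np.zeros(3)

def forward(x):
    h1 = relu(x @ W1 + b1)        # first hidden layer
    h2 = relu(h1 @ W2 + b2)       # second hidden layer
    return softmax(h2 @ W3 + b3)  # class probabilities

# A single made-up input vector, for demonstration only.
print(forward(np.array([5.1, 3.5, 1.4, 0.2])))
```

In practice, the weights would be learned from data rather than drawn at random; stacking such layers and training them on large datasets is what made DLN variants so useful after 2011.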

Overall, the book, "The Fourth Industrial Revolution & 100 Years of AI (1950-2050)," provides a concise yet comprehensive exploration of AI, covering its origins, its evolutionary trajectory, and its potential ubiquity during the next 27 years. The book begins with an introduction to the fundamental concepts of AI; subsequent chapters delve into its transformative journey with an in-depth analysis of AI's achievements, with a special focus on the potential for job loss and gain. The latter portions of the book examine the limitations of AI, the pivotal role of data in enabling accurate AI systems, and the concept of "good" AI systems. It concludes by contemplating the future of AI, addressing the limitations of classical computing, and exploring alternative technologies (such as Quantum, Photonics, Graphene, and Neuromorphic computing) for ongoing advancements in the field. The book is now available in bookstores and from online retailers in Kindle, paperback, and hardcover formats.

Press Contacts

Srini Bharadwaj

Scry Analytics, Inc.
+1 781-929-0669
[email protected]