This is a cumulative posting about AI that was started in March 2018. I will be sharing observations and various snippets that illustrate that the latest advancements in neural networks and machine learning do not constitute “intelligence”. A much better definition for the acronym AI is “Augmented Intelligence”.
And always remember the famous quote: “Artificial intelligence is no match for natural stupidity.”
In his book “The Myth of Artificial Intelligence” (2021), Erik J. Larson makes strong statements about the current reality of artificial intelligence:
The science of AI has uncovered a very large mystery at the heart of intelligence, which no one currently has a clue how to solve. Proponents of AI have huge incentives to minimize its known limitations. After all, AI is big business, and it’s increasingly dominant in culture. Yet the possibilities for future AI systems are limited by what we currently know about the nature of intelligence…
We are not on the path to developing truly intelligent machines. We don’t even know where that path might be.
May 10, 2022
Some interesting comments about AI from the CEO of IBM:
IBM chairman and CEO Arvind Krishna said IBM’s focus will be firmly on AI projects that provide near-term value for clients, adding that he believes generalized artificial intelligence is still a long time away… “Some people believe it could be as early as 2030, but the vast majority put it out in the 2050 to 2075 range”.
April 28, 2022
It’s generally difficult to emulate and repair a functioning system if you don’t know how that system works. Thus, I continue to insist that a much better understanding of how the human brain works will be one of the greatest breakthroughs of the next 20 years.
There was a recent article about a scientist at Carnegie Mellon and the University of Pittsburgh who made similar remarks with respect to computer vision:
Neuromorphic technologies are those inspired by biological systems, including the ultimate computer, the brain and its compute elements, the neurons. The problem is that no one fully understands exactly how neurons work…
“The problem is to make an effective product, you cannot [imitate] all the complexity because we don’t understand it,” he said. “If we had good brain theory, we would solve it — the problem is we just don’t know [enough].”
April 3, 2022
A very pertinent new book from computer engineer Jeff Hawkins, entitled “A Thousand Brains: A New Theory of Intelligence”:
We know that the brain combines sensory input from all over your body into a single perception, but not how. We think brains “compute” in some sense, but we can’t say what those computations are. We believe that the brain is organized as a hierarchy, with different pieces all working collaboratively to make a single model of the world. But we can explain neither how those pieces are differentiated, nor how they collaborate.
Hawkins’ proposal, called the Thousand Brains Theory of Intelligence, is that your brain is organized into thousands upon thousands of individually computing units, called cortical columns. These columns all process information from the outside world in the same way and each builds a complete model of the world. But because every column has different connections to the rest of the body, each has a unique frame of reference. Your brain sorts out all those models by conducting a vote.
January 21, 2022
Well, we now have the Wall Street Journal coming out and telling us why AI is not really “intelligent”:
“In a certain sense I think that artificial intelligence is a bad name for what it is we’re doing here,” says Kevin Scott, chief technology officer of Microsoft. “As soon as you utter the words ‘artificial intelligence’ to an intelligent human being, they start making associations about their own intelligence, about what’s easy and hard for them, and they superimpose those expectations onto these software systems.”
Who would have thought?
December 30, 2021
A professor of cognitive and neural systems at Boston University, Stephen Grossberg, argues that an entirely different approach to artificial/augmented intelligence is needed. He has defined an alternative model for artificial intelligence based on cognitive and neural research he has been conducting during his long career.
His new book attempts to explain thoughts, feelings, hopes, sensations, and plans, defining biological models of how those arise.
The problem with today’s AI, he says, is that it tries to imitate the results of brain processing instead of probing the mechanisms that give rise to the results. People’s behaviors adapt to new situations and sensations “on the fly,” Grossberg says, thanks to specialized circuits in the brain. People can learn from new situations, he adds, and unexpected events are integrated into their collected knowledge and expectations about the world.
December 18, 2021
A recent issue of IEEE Spectrum contains a discussion about cognitive science, neuroscience, and artificial intelligence from Steven Pinker.
“Cognitive science itself became overshadowed by neuroscience in the 1990s and artificial intelligence in this decade, but I think those fields will need to overcome their theoretical barrenness and be reintegrated with the study of cognition — mindless neurophysiology and machine learning have each hit walls when it comes to illuminating intelligence.”
December 16, 2021
Here is something to indicate the state of investment in AI:
Enterprise AI vendor C3 AI announced the receipt of a $500 million AI modeling and simulation software agreement with the U.S. Department of Defense (DOD) that aims to help the military accelerate its use of AI in the nation’s defense.
For C3 AI, the five-year deal could not have come at a better time, as the company has watched its stock price decline by almost 80 percent over the last 52 weeks.
The deal with the DOD “allows for an accelerated timeline to acquire C3 AI’s suite of Enterprise AI products and allows any DOD agency to acquire C3 AI products and services for modeling and simulation,” the company said in a statement.
September 29, 2021
These so-called AI programs are just not yet comparable to human intelligence. I’m not telling you that these algorithms of augmented intelligence are worthless. Indeed, in the proper application, deep learning can be very valuable. In fact, we use it in the software tools that we create at one of my companies. These advanced neural nets are much more robust and provide higher performance in computer vision applications. In most cases, though, they use brute-force computing power to execute what I refer to as “closeness” measurements.
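Not the actual tools we build, but a minimal sketch of what a “closeness” measurement can look like: compare feature vectors with cosine similarity and label a new input by its nearest labeled example. The embeddings and labels below are purely illustrative.

```python
import math

def cosine_closeness(a, b):
    """Cosine similarity between two feature vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def classify(query, labeled_embeddings):
    """Label a query vector by its closest labeled example (nearest neighbor)."""
    return max(labeled_embeddings, key=lambda item: cosine_closeness(query, item[0]))[1]

# Toy embeddings standing in for a network's feature outputs
examples = [([0.9, 0.1, 0.0], "cat"), ([0.1, 0.8, 0.1], "dog")]
print(classify([0.85, 0.2, 0.05], examples))  # closest to the "cat" example
```

There is no reasoning here, just geometry: the query is assigned to whichever stored vector it is “closest” to.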
We just should not continue to sully the reputation of AI by setting expectations that cannot currently be achieved. Here is another recent article in IEEE Spectrum that takes a similar position:
Regardless of what you might think about AI, the reality is that just about every successful deployment has either one of two expedients: It has a person somewhere in the loop, or the cost of failure, should the system blunder, is very low.
July 17, 2021
After the IBM “AI” system known as Watson bested many humans on the game show Jeopardy ten years ago, many of the managers at IBM thought they had a magical box filled with superhuman intelligence. They violated what I call the “11th Commandment”… From a recent article in the New York Times:
Watson has not remade any industries… The company’s missteps with Watson began with its early emphasis on big and difficult initiatives intended to generate both acclaim and sizable revenue for the company… Product people, they say, might have better understood that Watson had been custom-built for a quiz show, a powerful but limited technology… Watson stands out as a sobering example of the pitfalls of technological hype and hubris around A.I. The march of artificial intelligence through the mainstream economy, it turns out, will be more step-by-step evolution than cataclysmic revolution.
You may ask, what is the 11th Commandment? It’s simple but often overlooked by technoids with big egos: “Thou shall not bullshit thy selves…”
June 7, 2021
Here is someone else catching up with these unrealistic expectations for AI… A researcher for Microsoft stated in an interview that “AI is neither artificial nor intelligent”.
At the same time, I discovered this article in Quartz about the application of real-life AI in a very narrow domain. In fact, it’s an area that should be well primed for the use of computer vision and object detection. In this case, it’s the review of radiological data such as x-rays.
The inert AI revolution in radiology is yet another example of how AI has overpromised and under delivered. In books, television shows, and movies, computers are like humans, but much smarter and less emotional… computer algorithms do not have sentiment, feelings, or passions. They also do not have wisdom, common sense, or critical thinking skills. They are extraordinarily good at mathematical calculations, but they are not intelligent in any meaningful sense of the word.
I don’t mean to be such an AI naysayer, but augmented intelligence is a much more realistic assessment, and — when set with practical expectations — it does have the ability to provide business and societal value.
April 25, 2021
This article about data quality is another example of how current AI is less concerned with “intelligence”, and more focused on using digital computing for algorithms and processing large amounts of data:
Poor data quality is hurting artificial intelligence (AI) and machine learning (ML) initiatives. This problem affects companies of every size from small businesses and startups to giants like Google. Unpacking data quality issues often reveals a very human cause…
April 2, 2021
Here is an article on the IEEE Spectrum magazine web site that supports the position of this posting by exclaiming “Stop Calling Everything AI”:
Artificial-intelligence systems are nowhere near advanced enough to replace humans in many tasks involving reasoning, real-world knowledge, and social interaction. They are showing human-level competence in low-level pattern recognition skills, but at the cognitive level they are merely imitating human intelligence… Computers have not become intelligent per se, but they have provided capabilities that augment human intelligence…
Note: IEEE (Institute of Electrical and Electronics Engineers) Spectrum is still one of the better engineering journals, where the articles are not all written by “journalists” who are prostituting the engineering profession to promote a political position.
March 6, 2021
You know that a practical view of problem solving is getting out of hand when one of your software developers indicates that he’s going to fix the problem by “throwing some AI at it”. Here is a related cartoon strip:
January 3, 2021
In a recent article about the inappropriateness of the Turing Test, the lead scientist for Amazon’s Alexa talks about the proper focus of AI efforts:
“Instead of obsessing about making AIs indistinguishable from humans, our ambition should be building AIs that augment human intelligence and improve our daily lives…”
June 2, 2020
Here is another article, this time in TechSpot, that outlines some of the presumptuous and misleading claims for artificial intelligence:
In a separate study from last year that analyzed neural network recommendation systems used by media streaming services, researchers found that six out of seven failed to outperform simple, non-neural algorithms developed years earlier.
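The study itself isn’t reproduced here, but the kind of “simple, non-neural algorithm” it refers to can be as plain as a popularity count: recommend whatever is most streamed overall. The user and show names below are hypothetical.

```python
from collections import Counter

def popularity_recommender(interactions, k=2):
    """A non-neural baseline: recommend the k most-streamed items overall.

    interactions is a list of (user, item) pairs."""
    counts = Counter(item for _, item in interactions)
    return [item for item, _ in counts.most_common(k)]

# Hypothetical streaming history
history = [
    ("u1", "show_a"), ("u2", "show_a"), ("u3", "show_a"),
    ("u1", "show_b"), ("u2", "show_b"),
    ("u3", "show_c"),
]
print(popularity_recommender(history))  # ['show_a', 'show_b']
```

Baselines this trivial are exactly what several of the neural systems in that study reportedly failed to beat.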
April 16, 2020
A nice article in the ACM Digit magazine provides a concise history of artificial intelligence. One of the best contributions is this great summary timeline.
January 4, 2020
A recent article in Kaiser Health News about the overblown claims of AI in healthcare:
Early experiments in AI provide a reason for caution, said Mildred Cho, a professor of pediatrics at Stanford’s Center for Biomedical Ethics… In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma ― an error that could have led doctors to deprive asthma patients of the extra care they need….
Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is “nearly at the peak of inflated expectations,” concluded a July report from the research company Gartner. “As the reality gets tested, there will likely be a rough slide into the trough of disillusionment.”
It’s important to remember that what many people are claiming as ‘artificial intelligence’ is hardly that… In reality, it’s predominantly algorithms leveraging large amounts of data. As I’ve previously mentioned, I feel that I’m reliving the ballyhooed AI claims of the 1980s all over again.
June 29, 2019
Well, here is Joe from Forbes trying to steal my thunder in using ‘Augmented’ instead of ‘Artificial’:
Perhaps “artificial” is too artificial of a word for the AI equation. Augmented intelligence describes the essence of the technology in a more elegant and accurate way.
Not much else of interest in the article…
April 16, 2019
Here is another example of the underwhelming performance of technology mistakenly referred to as “artificial intelligence”. In this case, Google’s DeepMind tool was used to take a high school math test:
The algorithm was trained on the sorts of algebra, calculus, and other types of math questions that would appear on a 16-year-old’s math exam… But artificial intelligence is quite literally built to pore over data, scanning for patterns and analyzing them…. In that regard, the results of the test — on which the algorithm scored a 14 out of 40 — aren’t reassuring.
April 3, 2019
This is a very good article in the reputable IEEE Spectrum. It explains some of the massive over-promising and under-delivering associated with the IBM Watson initiatives in the medical industry (note: I am an IBM stockholder):
Experts in computer science and medicine alike agree that AI has the potential to transform the health care industry. Yet so far, that potential has primarily been demonstrated in carefully controlled experiments… Today, IBM’s leaders talk about the Watson Health effort as “a journey” down a road with many twists and turns. “It’s a difficult task to inject AI into health care, and it’s a challenge. But we’re doing it.”
AI systems can’t understand ambiguity and don’t pick up on subtle clues that a human doctor would notice… no AI built so far can match a human doctor’s comprehension and insight… a fundamental mismatch between the promise of machine learning and the reality of medical care—between “real AI” and the requirements of a functional product for today’s doctors.
It’s been 50 years since the folks at Stanford first created the Mycin ‘expert system’ for identifying infections, and there is still a long way to go. That is one reason that I continue to refer to AI as “augmented intelligence”.
February 18, 2019
A recent article in the Electronic Engineering Times proclaims that the latest incarnation of AI represents a lot of “pretending”:
In fact, modern AI (i.e. Siri, IBM’s Watson, etc.) is not capable of “reading” (a sentence, situation, an expression) or “understanding” same. AI, however, is great at pretending as though it understands what someone just asked, by doing a lot of “searching” and “optimizing.”…
You might have heard of Japan’s Fifth Generation Computer System, a massive government-industry collaboration launched in 1982. The goal was a computer “using massively parallel computing and processing” to provide a platform for future developments in “artificial intelligence.” Reading through what was stated then, I know I’m not the only one feeling a twinge of “Déjà vu.”…
Big companies like IBM and Google “quickly abandoned the idea of developing AI around logic programming. They shifted their efforts to developing a statistical method in designing AI for Google translation, or IBM’s Watson,” she explained in her book. Modern AI thrives on the power of statistics and probability.
February 16, 2019
I’ve had lengthy discussions with a famous neurologist/computer scientist about the man-made creation of synthetic intelligence, which I contend is constrained by the lack of a normative model of the brain (i.e., an understanding at the biochemical level of brain processes such as the formation of memories). I’ve been telling him for the last 10 years that our current knowledge of the brain is similar to the medical knowledge reflected in the 17th-century Rembrandt painting that depicts early medical practitioners performing a dissection of a human body (and likely remarking “hey, what’s that?”).
Well, I finally found a neuroscientist at Johns Hopkins who exclaims that science needs the equivalent of the periodic table of the elements to provide a framework for brain functions:
Gül Dölen, assistant professor of neuroscience at the Brain Science Institute at Johns Hopkins, thinks that neuroscientists might need to take a step back in order to better understand this organ… Slicing a few brains apart or taking a few MRIs won’t be enough… neuroscientists can’t even agree on the brain’s most basic information-carrying unit…
The impact could be extraordinary, from revolutionizing AI to curing brain diseases.
It’s important to keep in mind that future synthetic intelligence systems do not have to precisely emulate the normative intelligence model of the human brain. As an example, early flight pioneers from Da Vinci to Langley often thought that human flight required the emulation of the bird with the flapping of wings. Many of those ultimately responsible for successful flight instead focused on the principles of physics (e.g., pressure differentials) to enable flight. As we clearly understand now, airplanes and rockets don’t flap their wings.
November 14, 2018
It’s similar to the old retort for those who can’t believe the truth: don’t ask me for my opinion on the latest rendition of AI; just ask this executive at Google:
“AI is currently very, very stupid,” said Andrew Moore, a Google vice president. “It is really good at doing certain things which our brains can’t handle, but it’s not something we could press to do general-purpose reasoning involving things like analogies or creative thinking or jumping outside the box.”
July 28, 2018
Here are some words from a venture capital investor who has had experiences similar to my own when it comes to (currently) unrealistic expectations for artificial intelligence (AI):
Last year AI companies attracted more than $10.8 billion in funding from venture capitalists like me. AI has the ability to enable smarter decision-making. It allows entrepreneurs and innovators to create products of great value to the customer. So why don’t I focus on investing in AI?
During the AI boom of the 1980s, the field also enjoyed a great deal of hype and rapid investment. Rather than considering the value of individual startups’ ideas, investors were looking for interesting technologies to fund. This is why most of the first generation of AI companies have already disappeared. Companies like Symbolics, Intellicorp, and Gensym — AI companies founded in the ’80s — have all transformed or gone defunct.
And here we are again, nearly 40 years later, facing the same issues.
Though the technology is more sophisticated today, one fundamental truth remains: AI does not intrinsically create consumer value. This is why I don’t invest in AI or “deep tech.” Instead, I invest in deep value.
April 22, 2018
It’s interesting that so many famous prognosticators, such as Hawking, Musk, et al., are acting like the Luddites of the 19th century. That is, they make dire predictions that new technology is heralding the end of the world. Elon Musk has gone on record stating that artificial intelligence will bring human extinction.
Fortunately, there are more pragmatic scientists, such as Nathan Myhrvold, who understand the real nature of technology adoption. He uses the history of mathematics to articulate a pertinent analogy as well as justify his skepticism.
This situation is a classic example of something that the innovation doomsayers routinely forget: in almost all areas where we have deployed computers, the more capable the computers have become, the wider the range of uses we have found for them. It takes a lot of human effort and jobs to satisfy that rising demand.
March 21, 2018
One of the reasons that I still have not bought into true synthetic/artificial intelligence is the fact that we still lack a normative model that explains the operation of the human brain. In contrast, many other physiological systems can be analogized to engineering systems — the cardiovascular system is a hydraulic pumping system; the excretory system is a fluid-filtering system; the skeletal system is a structural support system; and so on.
One of my regular tennis partners is an anesthesiologist who has shared with me that the medical practitioners don’t really know what causes changes in consciousness. This implies that anesthesia is still based on ‘Edisonian’ science (i.e., based predominantly on trial and error without the benefit of understanding the deterministic cause & effects). This highlights the fact that the model for what constitutes brain states and functions is still incomplete. Thus, it’s difficult to create an ‘artificial’ version of that extremely complex neurological system.
March 17, 2018
A great summary description of the current situation from Vivek Wadhwa:
Artificial intelligence is like teenage sex: “Everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.” Even though AI systems can now learn a game and beat champions within hours, they are hard to apply to business applications.
March 6, 2018
It’s interesting how this recent article about AI starts: “I took an Uber to an artificial intelligence conference… “ In my case, it was late 1988 and I took a taxi to an artificial intelligence conference in Philadelphia. AI then was filled with all these fanciful promises, and frankly the attendance at the conference felt like a feeding frenzy. It reminded me of the Jerry Lewis movie “Who’s Minding the Store” with all of the attendees pushing each other to get inside the convention hall.
Of course, AI didn’t take over the world then and I don’t expect it to now. However, with advances in technology over the last 30 years, I do see the adoption of a different AI — ‘augmented intelligence’ — becoming more of a mainstay. One reason (which is typically associated with the ‘internet of things’) is that sensors cost next to nothing and are much more effective — i.e., recognizing voice commands, recognizing images and shapes, determining current locations, and so on. This provides much more human-like sensing that augments people in ways we have not yet totally imagined (e.g., food and voice recognition to support blind people).
On the flip side, there are many AI-related technologies that are really based more on the availability of large amounts of data and raw computing power. These are often referred to with esoteric names such as neural networks and machine learning. While these do not truly represent synthetic intelligence, they provide the basis for making vast improvements in analyses. For example, we’re working with a company to accumulate data on all the details of everything that they make, to enable them to rapidly understand the true drivers of what makes a good part versus a bad part. This is enabled by the combination of the sensors described above and the advanced computing techniques.
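The company and its data are not detailed here, but as a toy illustration of finding the “true drivers” of part quality from accumulated sensor data, one might simply rank each sensor feature by how far apart its averages sit for good versus bad parts. All feature names and readings below are hypothetical.

```python
def feature_separation(records, labels):
    """Rank sensor features by the gap between their good-part and bad-part means.

    records: list of dicts mapping feature name -> reading
    labels:  parallel list of 'good'/'bad' quality outcomes"""
    scores = {}
    for f in records[0]:
        good = [r[f] for r, lab in zip(records, labels) if lab == "good"]
        bad = [r[f] for r, lab in zip(records, labels) if lab == "bad"]
        scores[f] = abs(sum(good) / len(good) - sum(bad) / len(bad))
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical sensor readings for four manufactured parts
parts = [
    {"temp": 210, "pressure": 5.0}, {"temp": 212, "pressure": 5.1},
    {"temp": 240, "pressure": 5.0}, {"temp": 238, "pressure": 5.2},
]
quality = ["good", "good", "bad", "bad"]
print(feature_separation(parts, quality))  # temperature separates the classes here
```

A real deployment would use more robust statistics (and far more data), but the point stands: this is data crunching, not intelligence.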
The marketing people in industry have adopted the phrase ‘digital transformation’ to describe the opportunities and necessities that exist with the latest technology. For me, I just view it as the latest generation of computer hardware and software that is enabling another great wave — if futurist Alvin Toffler were alive today, he’d likely be calling it the ‘fourth wave’.