Does AI Mean ‘Augmented Intelligence’?

Update

Here is another article, this time in TechSpot, that outlines some of the presumptuous and misleading claims made for artificial intelligence:

In a separate study from last year that analyzed neural network recommendation systems used by media streaming services, researchers found that six out of seven failed to outperform simple, non-neural algorithms developed years earlier.
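To give a sense of what those “simple, non-neural algorithms” look like, here is a minimal sketch of two classic recommendation baselines: recommend the globally most popular items, and recommend items similar to what a user has already consumed. The toy data and function names are my own illustration, not the study’s actual code:

```python
# Two classic non-neural recommendation baselines on a toy dataset.
import numpy as np

# Toy user-item interaction matrix: rows = users, columns = items, 1 = consumed.
interactions = np.array([
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [1, 0, 0, 1, 1],
])

def top_popular(interactions, user, k=2):
    """Recommend the k globally most-consumed items the user hasn't seen."""
    popularity = interactions.sum(axis=0).astype(float)
    popularity[interactions[user] > 0] = -np.inf  # exclude already-seen items
    return np.argsort(popularity)[::-1][:k]

def item_knn(interactions, user, k=2):
    """Recommend items most similar (cosine) to those the user consumed."""
    norms = np.linalg.norm(interactions, axis=0) + 1e-9
    similarity = (interactions.T @ interactions) / np.outer(norms, norms)
    scores = similarity @ interactions[user].astype(float)
    scores[interactions[user] > 0] = -np.inf
    return np.argsort(scores)[::-1][:k]

print(top_popular(interactions, user=0))  # unseen items ranked by popularity
print(item_knn(interactions, user=0))     # unseen items ranked by similarity
```

Baselines like these take minutes to implement and tune, which is exactly why it is telling when elaborate neural architectures fail to beat them.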


April 16, 2020

A nice article in the ACM Digit magazine provides a concise history of artificial intelligence.  One of the best contributions is this great summary timeline.

January 4, 2020

A recent article in Kaiser Health News describes the overblown claims made for AI in healthcare:

Early experiments in AI provide a reason for caution, said Mildred Cho, a professor of pediatrics at Stanford’s Center for Biomedical Ethics… In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma ― an error that could have led doctors to deprive asthma patients of the extra care they need….

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is “nearly at the peak of inflated expectations,” concluded a July report from the research company Gartner. “As the reality gets tested, there will likely be a rough slide into the trough of disillusionment.”

It’s important to remember that much of what is being touted as ‘artificial intelligence’ is hardly that… In reality, it’s predominantly algorithms leveraging large amounts of data.  As I’ve previously mentioned, I feel as though I’m reliving the ballyhooed AI claims of the 1980s all over again.


June 29, 2019

Well, here is Joe from Forbes trying to steal my thunder in using ‘Augmented’ instead of ‘Artificial’:

Perhaps “artificial” is too artificial of a word for the AI equation. Augmented intelligence describes the essence of the technology in a more elegant and accurate way.

Not much else of interest in the article…


April 16, 2019

Here is another example of the underwhelming performance of technology mistakenly referred to as “artificial intelligence”.  In this case, Google’s DeepMind tool was used to take a high school math test:

The algorithm was trained on the sorts of algebra, calculus, and other types of math questions that would appear on a 16-year-old’s math exam… But artificial intelligence is quite literally built to pore over data, scanning for patterns and analyzing them…. In that regard, the results of the test — on which the algorithm scored a 14 out of 40 — aren’t reassuring.


April 3, 2019

This is a very good article in the reputable IEEE Spectrum.  It explains some of the massive over-promising and under-delivering associated with the IBM Watson initiatives in the medical industry (note: I am an IBM stockholder):

Experts in computer science and medicine alike agree that AI has the potential to transform the health care industry. Yet so far, that potential has primarily been demonstrated in carefully controlled experiments… Today, IBM’s leaders talk about the Watson Health effort as “a journey” down a road with many twists and turns. “It’s a difficult task to inject AI into health care, and it’s a challenge. But we’re doing it.”

AI systems can’t understand ambiguity and don’t pick up on subtle clues that a human doctor would notice… no AI built so far can match a human doctor’s comprehension and insight… a fundamental mismatch between the promise of machine learning and the reality of medical care—between “real AI” and the requirements of a functional product for today’s doctors.

It’s been nearly 50 years since the folks at Stanford first created the Mycin ‘expert system’ for identifying infections, and there is still a long way to go.  That is one reason that I continue to refer to AI as “augmented intelligence”.


February 18, 2019

A recent article in the Electronic Engineering Times proclaims that the latest incarnation of AI involves a lot of “pretending”:

In fact, modern AI (i.e. Siri, IBM’s Watson, etc.) is not capable of “reading” (a sentence, situation, an expression) or “understanding” same. AI, however, is great at pretending as though it understands what someone just asked, by doing a lot of “searching” and “optimizing.”…

You might have heard of Japan’s Fifth Generation Computer System, a massive government-industry collaboration launched in 1982. The goal was a computer “using massively parallel computing and processing” to provide a platform for future developments in “artificial intelligence.” Reading through what was stated then, I know I’m not the only one feeling a twinge of “Déjà vu.”

Big companies like IBM and Google “quickly abandoned the idea of developing AI around logic programming. They shifted their efforts to developing a statistical method in designing AI for Google translation, or IBM’s Watson,” she explained in her book. Modern AI thrives on the power of statistics and probability.
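To make that shift concrete, here is a toy sketch, entirely of my own invention (spam filtering as the stand-in task), contrasting a 1980s-style hand-coded logic rule with the count-based statistical style the quote describes:

```python
# Hand-written logic rule versus a simple statistical (naive Bayes style)
# classifier that learns word probabilities from labeled examples.
from collections import Counter

# Logic-programming style: brittle, hand-coded rules.
def rule_based_is_spam(text):
    words = text.lower()
    return "free" in words and "winner" in words

# Statistical style: estimate P(word | class) from (toy) training data.
spam_docs = ["free winner prize", "free money now", "winner winner"]
ham_docs = ["meeting at noon", "project status update", "lunch tomorrow"]

spam_counts = Counter(w for d in spam_docs for w in d.split())
ham_counts = Counter(w for d in ham_docs for w in d.split())
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def statistical_is_spam(text):
    """Compare word likelihoods under each class, with add-one smoothing."""
    p_spam = p_ham = 1.0
    for w in text.lower().split():
        p_spam *= (spam_counts[w] + 1) / (spam_total + len(vocab))
        p_ham *= (ham_counts[w] + 1) / (ham_total + len(vocab))
    return p_spam > p_ham

print(rule_based_is_spam("free vacation now"))   # False: the rule misses it
print(statistical_is_spam("free vacation now"))  # True: learned from counts
```

Neither function “understands” anything; the second just searches and optimizes over counts, which is the author’s point.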


February 16, 2019

I’ve had lengthy discussions with a famous neurologist/computer scientist about the man-made creation of synthetic intelligence, which I contend is constrained by the lack of a normative model of the brain (i.e., an understanding at the biochemical level of brain processes such as the formation of memories).  I’ve been telling him for the last 10 years that our current knowledge of the brain is similar to the medical knowledge reflected in the 17th-century Rembrandt painting that depicts early medical practitioners performing a dissection of the human body (and likely remarking “hey, what’s that?”).

Well, I finally found a neuroscientist at Johns Hopkins who argues that science needs the equivalent of the periodic table of elements to provide a framework for brain functions:

Gül Dölen, assistant professor of neuroscience at the Brain Science Institute at Johns Hopkins, thinks that neuroscientists might need to take a step back in order to better understand this organ… Slicing a few brains apart or taking a few MRIs won’t be enough… neuroscientists can’t even agree on the brain’s most basic information-carrying unit

The impact could be extraordinary, from revolutionizing AI to curing brain diseases.


November 14, 2018

It’s similar to the old retort for those who can’t believe the truth: don’t take my word on the latest rendition of AI, just ask this executive at Google:

“AI is currently very, very stupid,” said Andrew Moore, a Google vice president. “It is really good at doing certain things which our brains can’t handle, but it’s not something we could press to do general-purpose reasoning involving things like analogies or creative thinking or jumping outside the box.”


July 28, 2018

Here are some words from a venture capital investor who has had experiences similar to my own when it comes to (currently) unrealistic expectations for artificial intelligence (AI):

Last year AI companies attracted more than $10.8 billion in funding from venture capitalists like me. AI has the ability to enable smarter decision-making. It allows entrepreneurs and innovators to create products of great value to the customer. So why don’t I focus on investing in AI?

During the AI boom of the 1980s, the field also enjoyed a great deal of hype and rapid investment. Rather than considering the value of individual startups’ ideas, investors were looking for interesting technologies to fund. This is why most of the first generation of AI companies have already disappeared. Companies like Symbolics, Intellicorp, and Gensym — AI companies founded in the ’80s — have all transformed or gone defunct.

And here we are again, nearly 40 years later, facing the same issues.

Though the technology is more sophisticated today, one fundamental truth remains: AI does not intrinsically create consumer value. This is why I don’t invest in AI or “deep tech.” Instead, I invest in deep value.


April 22, 2018

It’s interesting that so many famous prognosticators, such as Hawking, Musk, et al., are acting like the Luddites of the 19th century.  That is, they make dire predictions that new technology is heralding the end of the world.  Elon Musk has gone on record stating that artificial intelligence will bring human extinction.

Fortunately, there are more pragmatic scientists, such as Nathan Myhrvold, who understand the real nature of technology adoption.  He uses the history of mathematics to articulate a pertinent analogy as well as to justify his skepticism.

This situation is a classic example of something that the innovation doomsayers routinely forget: in almost all areas where we have deployed computers, the more capable the computers have become, the wider the range of uses we have found for them. It takes a lot of human effort and jobs to satisfy that rising demand.


March 21, 2018

One of the reasons that I still have not bought into true synthetic/artificial intelligence is the fact that we still lack a normative model that explains the operation of the human brain.  In contrast, many of the other physiological systems can be analogized to engineering systems: the cardiovascular system is a hydraulic pumping system; the excretory system is a fluid-filtering system; the skeletal system is a structural support system; and so on.

One of my regular tennis partners is an anesthesiologist who has shared with me that medical practitioners don’t really know what causes changes in consciousness.  This implies that anesthesia is still based on ‘Edisonian’ science (i.e., based predominantly on trial and error, without the benefit of understanding deterministic cause & effect).  This highlights the fact that the model of what constitutes brain states and functions is still incomplete.  Thus, it’s difficult to create an ‘artificial’ version of that extremely complex neurological system.


March 17, 2018

A great summary description of the current situation from Vivek Wadhwa:

Artificial intelligence is like teenage sex: “Everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.” Even though AI systems can now learn a game and beat champions within hours, they are hard to apply to business applications.


March 6, 2018

It’s interesting how this recent article about AI starts: “I took an Uber to an artificial intelligence conference…”  In my case, it was late 1988 and I took a taxi to an artificial intelligence conference in Philadelphia.  AI then was filled with all these fanciful promises, and frankly the conference felt like a feeding frenzy.  It reminded me of the Jerry Lewis movie “Who’s Minding the Store?”, with all of the attendees pushing each other to get inside the convention hall.

Of course, AI didn’t take over the world then, and I don’t expect it to now.  However, with advances in technology over the last 30 years, I do see the adoption of a different AI, ‘augmented intelligence’, becoming more of a mainstay.  One reason (typically associated with the ‘internet of things’) is that sensors now cost next to nothing and are much more capable: recognizing voice commands, recognizing images & shapes, determining current locations, and so on.  This provides much more human-like sensing that augments people in ways we have not yet totally imagined (e.g., food and voice recognition to support blind people).

On the flip side, there are many AI-related technologies that are really based more on the availability of large amounts of data and raw computing power.  These are often referred to by esoteric names such as neural networks and machine learning.  While they do not truly represent synthetic intelligence, they provide the basis for vast improvements in analysis.  For example, we’re working with a company to accumulate data on all the details of everything that they make, enabling them to rapidly understand the true drivers of what makes a good part versus a bad part.  This is enabled by the combination of the sensors described above and the advanced computing techniques.
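As an illustration of that good-part-versus-bad-part analysis, here is a hedged sketch using a tree ensemble’s feature importances to surface the likely drivers.  The data is synthetic and the sensor names are placeholders of mine, not the company’s actual measurements or methods:

```python
# Learn which process measurements drive good-vs-bad part outcomes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_parts = 500
# Hypothetical per-part sensor readings captured during manufacturing.
feature_names = ["oven_temp", "press_force", "cycle_time", "humidity"]
X = rng.normal(size=(n_parts, len(feature_names)))
# Synthetic ground truth: in this toy, only temperature and force matter.
y = ((X[:, 0] + 0.5 * X[:, 1]) > 0).astype(int)  # 1 = good part

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = sorted(zip(feature_names, model.feature_importances_),
                key=lambda pair: -pair[1])
for name, importance in ranked:
    print(f"{name}: {importance:.2f}")  # oven_temp and press_force dominate
```

In practice one would validate on held-out parts and use more careful attribution methods, but the principle is the same: plentiful sensor data plus statistical learning, not anything resembling synthetic intelligence.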

The marketing people in industry have adopted the phrase ‘digital transformation’ to describe the opportunities and necessities that come with the latest technology.  For me, I just view it as the latest generation of computer hardware and software enabling another great wave.  If futurist Alvin Toffler were alive today, he’d likely be calling it the ‘fourth wave’.
