5 Ways AI Can Make Your Business Dumb (Tech Strategy – Daily Article)

Use cases are a good way to approach the subject of AI. Just hunt industry by industry and see what companies are and aren't using AI for. It's the best way to separate out the hype. And make sure you look at industries other than your own. I find scanning use cases across industries is a good way to stay on top of the subject. One good source for this is Nvidia's AI Podcast.

First, a bit of review of the state of AI, from Kai-Fu Lee.

The Four Waves of Artificial Intelligence

China AI guru Kai-Fu Lee (author of the book AI Superpowers) presented the slide below at a China conference.

He argued that there have been 4 waves of AI, and that the first wave was Internet AI.

This was digital companies moving from data analytics to AI. Think companies like Google, Amazon, NetEase, Baidu, Facebook, Taobao, WeChat, JD, Meituan, and ByteDance. Going into AI was a natural move for these software-based companies, which were already completely digital creatures with tons of data. It was a natural step to go from their extensive data analytics activities to predictive analytics (i.e., AI).

The second wave was Business AI.

This was when real world businesses started to apply AI to their already well-developed enterprise and software systems. Think companies like Palantir and IBM. As in the first wave, these companies had long been collecting information about their businesses. So this was a lot about mining their existing data for additional insights. And while businesses are usually quite good at making correlations based on strong features, AI can be very useful for finding correlations based on weak features.

Wave three was Perception AI.

This one is pretty cool. This was about digitizing the physical world. Think lots of sensors, microphones, cameras, and IoT devices being deployed. The physical world is starting to be converted into usable data that AI can run algorithms on. This is not a small thing. Up until this point, computers really only knew what we told them. We had to type or load in their information. But it turns out computers have really good vision (i.e., computer vision). And they can increasingly see and process the physical world for themselves, without our involvement. Think products like SenseTime and Amazon Echo.

Finally, there is wave four, Autonomous AI.

This is when AI prediction starts to be joined by autonomous decision-making. So that is companies like Tesla and Waymo. Autonomous AI is when cheap prediction is joined with decision-making and the cars start to drive themselves. This is the frontier of “AI meets robotics”.

First, AI Has Massive Advantages Over Human Brains

Machines are a lot bigger, stronger and sturdier than humans. It's not even close. Large airplanes can fly at over 300 miles per hour. Tractors can demolish buildings. Robots can go to Antarctica and to the depths of the ocean. Satellites can go into deep space. Machines are physically able to do things way beyond humans.

Well, the same phenomenon is happening with cognitive abilities. Humans cannot compete with AI in memory or processing speed. Computers have perfect memories. And they last forever. They don’t forget. Computers can solve equations and calculations at lightning speed. And they have tremendous precision and consistency. A computer can do a calculation or retrieve a random fact a million times without mistake. Without ever getting tired. Forever.

Our brains can’t do anything like that. So it’s worth keeping in mind that when it comes to memory, processing speed, precision and consistency, we are outgunned. Don’t try to compete with machines in these areas. Plus, what one machine knows they all know. They are connected. We can’t connect our brains.

Ok.

So when is AI bad? When is it wrong? How can it make a company really stupid?

Five Ways Humans Are Better than AI

There are things that humans are much better at in terms of intelligence. Or, to put it another way, there are things that AI is just terrible at. These are the situations where AI will give you meaningless and wrong predictions. And make your company worse.

Here’s a big one:

1. AI Can’t Do Analogies.

Douglas Hofstadter (the Pulitzer Prize-winning cognitive scientist who studies how our brains work) called analogies "the fuel and fire of thinking." When we see an activity, read a passage, or hear a conversation, we are able to focus on the most salient features, the "skeletal essence." And we are able to extrapolate this to other situations. Human beings are very good at distilling. And then looking for both similarities and differences in other situations. "True intelligence" is the ability to recognize and assess this type of essence in a situation. And we are really good at this. We collect and categorize human experiences. And then we compare, contrast and combine.

Humans are great at analogies and determining the “skeletal essence”. It’s the key to human intelligence.

But AI can’t do analogies at all. In fact, AI can’t think at all.

If you see a Picasso line drawing of a dog, you know it is a dog, just by the outline. AI has a very hard time doing that because it doesn’t know what a dog is. It doesn’t know who Picasso is. It has no intelligence or understanding of anything. It just scans for pixels. AI operates the same way New Zealander Nigel Richards wins victory after victory in French Scrabble, even though he doesn’t speak French. He memorized lots of French words and now he puts together combinations of letters. He can do that well and win at Scrabble, even though he has no idea what the words mean.

This is how AI functions. And that is why it can’t do analogies.

2. AI Can’t Recognize Bad or Fake Data.

Garbage in, garbage out is a real problem for AI.

First of all, AI is hunting for correlations in data – but it has no understanding of the data. Or the question. Or the subject. So if the data entered is bad or fake, the AI is completely unaware of this. Just as it is unaware of everything. A researcher studying a question and testing for a causal mechanism is far more likely to spot bad data.

AI also requires big data. It needs tons of data to find real correlations. AI doesn’t work well on small data sets. But the more data you use, the more likely you will have bad, fake or biased data included. So big data makes the bad data problem more likely.

Basically, AI has difficulty identifying bad data, corrupted data, incomplete data, fake data and biased data. It’s a big problem.
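To make that concrete, here is a minimal sketch (the numbers and setup are made up for illustration, not from the article): the same fitting procedure runs just as happily on data where a chunk of the labels are garbage, and nothing in the procedure raises a flag.

```python
# A minimal sketch with made-up numbers: the same fit "works" on clean data
# and on data where 30% of the labels are garbage. Nothing flags the garbage.
import numpy as np

rng = np.random.default_rng(0)

n = 500
x = rng.normal(size=n)
y_clean = 2.0 * x + rng.normal(scale=0.5, size=n)   # true relationship: y is roughly 2x

# Corrupt 30% of the labels with junk values.
y_bad = y_clean.copy()
bad_idx = rng.choice(n, size=int(0.3 * n), replace=False)
y_bad[bad_idx] = rng.normal(loc=20, scale=10, size=len(bad_idx))

# Least-squares fit on each dataset. Both runs finish without complaint;
# only a human who understands the data can tell that the second answer is worthless.
slope_clean = np.polyfit(x, y_clean, 1)[0]
slope_bad = np.polyfit(x, y_bad, 1)[0]
print(f"slope estimated from clean data:     {slope_clean:.2f}")   # close to 2.0
print(f"slope estimated from corrupted data: {slope_bad:.2f}")     # badly off, no warning
```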

3. AI Can’t Recognize Bias.

A lot of the ideas in this article are from Gary Smith, an economics professor at Pomona College and author of The AI Delusion. He writes a lot about the problem of bias in analysis. And there are lots of types. Survivorship bias. Self-selection bias. And many others. An AI trained on biased data will faithfully reproduce that bias, because it has no way to know the data is unrepresentative.
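As a rough illustration of survivorship bias (one of the biases Smith describes), here is a toy simulation with made-up numbers, not taken from his book: funds whose returns are pure luck, where the losers quietly drop out of the dataset before anyone analyzes it.

```python
# A toy simulation of survivorship bias, with made-up numbers.
# Every fund's returns are pure luck with a true average of zero.
import numpy as np

rng = np.random.default_rng(1)

n_funds, n_years = 1000, 10
returns = rng.normal(loc=0.0, scale=0.15, size=(n_funds, n_years))

# Funds that lose more than 20% in any single year are shut down and disappear
# from the dataset that the analysis (or the AI) later sees.
survived = returns.min(axis=1) > -0.20

print(f"average annual return, all funds:      {returns.mean():+.1%}")
print(f"average annual return, survivors only: {returns[survived].mean():+.1%}")
# The survivors look genuinely skilled. A model trained only on them will "learn"
# that returns are positive, because it has no way to know the losers were removed.
```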

4. Big Data Is Full of Meaningless Patterns and Correlations.

If you randomly generate 1,000 numbers, you are going to see all sorts of patterns that strike you as meaningful. Because correlations and patterns are everywhere. They will jump out. And we think patterns are unusual and therefore meaningful. But they usually aren’t.

Random data always has tons of patterns and correlations. With no underlying cause. It's just random. Mediocre stock managers can have great returns for 5 years. Average baseball players can have long runs of success for 15 games. If you flip a coin ten times, there is a better than 40% chance of a streak of 4 or longer. Meaningless patterns are normal.
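You can sanity-check that coin-flip number with a quick simulation (a minimal sketch, nothing here is from the article):

```python
# Quick check of the coin-flip claim above: in 10 flips, how often is there
# a streak of 4 or more identical results?
import random

def has_streak(flips, length=4):
    """True if the sequence contains a run of `length` identical flips."""
    run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        if run >= length:
            return True
    return False

trials = 100_000
hits = sum(has_streak([random.choice("HT") for _ in range(10)]) for _ in range(trials))
print(f"P(streak of 4+ in 10 flips) ≈ {hits / trials:.2f}")   # comes out around 0.46
```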

And it turns out AI is super-efficient at finding meaningless patterns in really big data. In fact, this is arguably its primary activity. The vast majority of correlations AI identifies when grinding through data are going to be meaningless.
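Here is a minimal sketch of that point, with illustrative numbers: generate series that are pure noise, then go mining for correlations between them. Plenty of "strong" ones show up anyway.

```python
# A minimal sketch with illustrative numbers: 200 series of pure noise,
# then a search for "strong" correlations between them. Dozens appear anyway.
import numpy as np

rng = np.random.default_rng(2)

n_series, n_points = 200, 50
data = rng.normal(size=(n_series, n_points))    # no real relationships anywhere

corr = np.corrcoef(data)                        # correlation of every series with every other
upper = np.triu_indices(n_series, k=1)          # count each pair once
strong = np.abs(corr[upper]) > 0.4

print(f"pairs examined:              {len(corr[upper]):,}")
print(f"'strong' correlations found: {strong.sum()}")
# Every one of those "discoveries" is meaningless. Mining more data produces
# more of them, not fewer.
```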

5. AI Is About Correlation. But the Scientific Method Requires Causation.

Causation vs. correlation is really the central problem. The scientific method is based on causation. Come up with a rule or causal mechanism, and then test it. Over and over. See if it is true. See if it makes sense. Correlation is just looking for statistical patterns, without a reason. Both are very powerful. But I believe only causation can make you smarter over time.

When testing something with AI or data analytics, you should always:

  • Avoid data mining. You always want to have a cause you are testing.
  • So you start with a theory.
  • Then you try to test it using a small dataset that is accurate and understandable.
  • Ideally, you test your theory going forward with an experimental and a control group.
  • You then retest going forward with fresh data. It must be repeatable. Always with new data.
  • Then you ask: does this make sense?

That’s causation and the scientific method. But AI can’t really do much of that list (so far). It still mostly just fits the data.
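For what it's worth, here is a rough sketch of that checklist in code, using a made-up theory and made-up data (a hypothetical price-cut experiment, not anything from the article):

```python
# A rough sketch of the checklist above, using a made-up theory and made-up data.
# Theory (stated before looking at any data): a small price cut raises weekly
# unit sales by about 10 units per store.
import numpy as np

rng = np.random.default_rng(3)

def run_experiment(n_stores):
    """Simulated experiment: half the stores get the price cut, half are the control group."""
    treated = np.arange(n_stores) % 2 == 0
    base_sales = rng.normal(loc=200, scale=10, size=n_stores)
    true_effect = 10 * treated                   # the causal effect built into this toy world
    sales = base_sales + true_effect + rng.normal(scale=5, size=n_stores)
    return sales[treated].mean() - sales[~treated].mean()

# 1. Start with the theory, then test it on a small, understandable dataset.
first_estimate = run_experiment(n_stores=200)
print(f"estimated effect, first experiment: {first_estimate:+.1f} units")

# 2. Retest going forward with fresh data. The result must be repeatable.
second_estimate = run_experiment(n_stores=200)
print(f"estimated effect, fresh data:       {second_estimate:+.1f} units")

# 3. Then ask: does this make sense? That last step is the one no model does for you.
```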

So the irony is AI is not artificial intelligence. It is not really intelligent at all. AI is the “appearance of intelligence”.

That’s it for today. cheers, -jeff

———-

Related articles:

From the Concept Library, concepts for this article are:

  • AI – Cheap Prediction
  • Smile Marathon: Machine Learning

From the Company Library, companies for this article are:

  • n/a

Photo by Possessed Photography on Unsplash

———

I write, speak and consult about how to win (and not lose) in digital strategy and transformation.

I am the founder of TechMoat Consulting, a boutique consulting firm that helps retailers, brands, and technology companies exploit digital change to grow faster, innovate better and build digital moats. Get in touch here.

My book series Moats and Marathons is a one-of-a-kind framework for building and measuring competitive advantages in digital businesses.

This content (articles, podcasts, website info) is not investment, legal or tax advice. The information and opinions from me and any guests may be incorrect. The numbers and information may be wrong. The views expressed may no longer be relevant or accurate. This is not investment advice. Investing is risky. Do your own research.
