
Levels And Limits Of AI

I recently spoke with the innovation team of a Fortune 50 company about their 2020 initiatives, one of which was artificial intelligence. When I asked what specifically they wanted to use AI for, an executive replied, “Everything.” I pushed a little more, asking, “Are there any specific problems that you’re seeking AI vendors for?” The reply was something like “We want to use AI in all of our financial services groups.” This was particularly unsatisfying considering that the company is a financial services company.

I have these kinds of conversations frequently. For example, I met with the head of a large government department to discuss artificial intelligence, and the agency’s top executive asked for a system that could automate the decision-making of key officials. When pressed for details, he essentially wanted a robotic version of his existing employees.

AI is not a panacea and it cannot simply replace humans. Artificial intelligence is mathematical computation, not human intelligence, as I have discussed in previous posts. One of my key roles as an investor is separating “real AI” from AI hype.

Buyers should not focus on whether or not a company is “AI,” but rather on whether or not it solves a real problem. While technology is important, the most important part of any company is serving the customer. There are specific customer needs that artificial intelligence can address really well. Others, not so much. For example, AI may be well suited to detecting digital fraud, but it would not be well suited to working as a detective in the physical world. AI should be treated like any other software tool: as a product that needs to yield a return. To do so, it is important to understand what artificial intelligence can actually do, and what it can’t.

There are several “levels” of artificial intelligence. A few years ago my friends John Frank and Jason Briggs, who run Diffeo, suggested breaking artificial intelligence into three levels of service: Acceleration, Augmentation, and Automation. Acceleration takes an existing human process and helps humans do it faster. For example, the current versions of textual auto-complete that Google offers are acceleration AI: they offer a completed version of what the user was already going to type. The next level, augmentation, takes what a human is doing and improves it. In addition to speeding up the human’s work (like acceleration), it makes the human’s product better. An example of this is what Grammarly does in improving the grammar of text. The final level is automation. In the previous two levels there is still a “human in the loop”; automation achieves a task with no human in the loop. The aspiration here is Level 5 autonomous driving of the kind Aurora and Waymo are pursuing.
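To make the lowest level concrete, here is a minimal sketch of acceleration-level AI in the spirit of the auto-complete example: a prefix-based word completer. The vocabulary, frequency counts, and function names are illustrative assumptions (this is not Google’s method); the point is that the human stays fully in the loop and the tool only speeds up what they were already doing.

```python
# Sketch of "acceleration" AI: prefix-based autocomplete.
# The human still chooses and types; the tool only saves keystrokes.

def autocomplete(prefix, vocabulary, max_suggestions=3):
    """Return the most frequent vocabulary words starting with `prefix`."""
    matches = [(word, count) for word, count in vocabulary.items()
               if word.startswith(prefix)]
    matches.sort(key=lambda wc: wc[1], reverse=True)  # most frequent first
    return [word for word, _ in matches[:max_suggestions]]

# Hypothetical word frequencies for demonstration only.
vocab = {"intelligence": 40, "investor": 25, "innovation": 30, "invoice": 10}
print(autocomplete("in", vocab))  # → ['intelligence', 'innovation', 'investor']
```

Augmentation would go a step further and rewrite the user’s text for quality; automation would draft it with no user at all.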

When evaluating AI companies, it makes sense to ask whether what they are setting out to achieve is actually attainable at the level of AI the vendor is promising. Below is a rough demonstrative chart with the “Difficulty of AI” on the y-axis and “Level of AI” on the x-axis.

The dashed line is what I call the “AI feasibility curve.” Within the line is “AI feasibility,” which means that there is a technology, infrastructure and approach to actually deliver a successful product at that level of AI in the near term. In reality it is a curve, not a line, and it is neither concave nor convex. It has bumps. Certain problems are really difficult, but are attainable because a spectacular AI team has worked really hard to “push out” the AI feasibility curve for that specific problem. AlphaGo is included because it was an incredibly difficult and computationally intensive task, but the brilliant team at Google was able to shift the curve out in that area. If a company proposes that it has built a fully autonomous manager or strategy engine, I become highly skeptical. As you can see, the AI difficulty of those two tasks is quite high. The “difficulty” of AI is some function of the problem space and data quality (which I will discuss in a future article). In the chart, treat “difficulty for AI” as a directional illustration of difficulty, not a quantifiable score.
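The logic of the feasibility curve can be sketched in a few lines of code. The thresholds below are made-up numbers purely for demonstration (the article treats difficulty as directional, not a quantifiable score); the shape they encode is the key idea: the more automated the level, the lower the problem difficulty that is feasible in the near term.

```python
# Illustrative sketch of the "AI feasibility curve": harder problems are
# attainable at less automated levels. All numbers are assumptions.

FEASIBILITY_THRESHOLD = {   # max difficulty (0-1) feasible at each level
    "acceleration": 0.9,    # helping humans work faster
    "augmentation": 0.6,    # improving the human's output
    "automation":   0.3,    # no human in the loop
}

def is_feasible(level, difficulty):
    """Rough check: is a problem of this difficulty attainable at this level?"""
    return difficulty <= FEASIBILITY_THRESHOLD[level]

# A "fully autonomous strategy engine" is a hard problem pitched at full
# automation -- outside the curve, so be skeptical:
print(is_feasible("automation", 0.8))    # False
print(is_feasible("acceleration", 0.8))  # True: feasible as an accelerator
```

A great team “shifting out” the curve, as with AlphaGo, corresponds to raising the threshold for one specific problem, not for the whole level.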

When purchasing a vendor’s AI, determine whether its value proposition is feasible. If it is not, the return on investment may be a disappointment. Watch for products marketed as fully automated when the problem is too difficult for full automation; this could be a sign that the product is actually accelerated AI. Keeping the feasibility curve in mind is important for investing as well, because if the customer is not well served, the company will eventually fail.

When evaluating a company, I try to determine where on this chart the company would fall. If it is still building out its product, I think about its technology innovation: will the engineers be able to “shift out” the curve in that particular problem space? In evaluating AI, pick products that you are confident will provide ROI. Don’t be like the Fortune 50 company looking for AI “for everything,” or the government agency trying to buy AI that does exactly what one of its officers does. Instead, evaluate an AI product for what it really offers you. Then, make an informed decision.


Disclosure: Sequoia is an investor in Aurora and author is an investor in Alphabet, both of which were used as examples in the article.
