The worst AI advice you’ll ever hear

Facebook (now Meta) famously championed the Silicon Valley ethos of “move fast and break things.” That approach may have worked for disrupting the social media business, but it causes all sorts of problems for Meta and for other major players in the field of AI. Moving fast and breaking things may be the reason so many AI projects fail. According to a study by the Massachusetts Institute of Technology, more than 85% of AI projects fail to achieve their stated goals, and 70% of data science projects never come to fruition. Clearly, moving fast and breaking things doesn’t work if you never get close to success.

There is a difference between iterating to succeed and breaking things

The problem with these failures is that, because AI is still nascent despite more than seven decades of development, organizations tend to give up entirely before reaching the point of success. Part of the reason is that the ethos of agility often does not work when data is the cornerstone of an AI project’s success. Rather than just focusing on moving fast, organizations are realizing that they need to focus on short, iterative sprints that bring AI projects progressively closer to their desired goals.

From this perspective, rather than just moving fast and breaking things, organizations need to “think big, start small, and iterate often.” This runs somewhat counter to Silicon Valley’s belief that all innovation must be disruptive. An AI solution may eventually prove disruptive, but to get to that point of disruption, AI projects first need to iterate through many stages of “small wins.” Many organizations, however, simply do not have the patience for those small gains.

The disconnect between AI and the real world

One of the main reasons AI projects fail so often is that, in the rush to get to production, organizations take a small proof of concept developed in a controlled environment and push it into the messier realities of the real world. It is little surprise, then, when an AI project fails because its expectations don’t match that reality.

There are two versions of this problem of AI meeting reality. The first is that people build models before they actually know how those models will be used in the real world. ML engineers, AI project managers, and data scientists end up working from a limited understanding of the real world, and that understanding is challenged the moment the model leaves the lab. This is part of the reason so many autonomous vehicle projects face significant challenges in the real world.

The second version of the problem is that the environment in which the model is developed does not match the environment in which it is used. Usually this is because real-world data is difficult to obtain, messy, or requires a significant amount of time to clean and prepare. Pressed by limited time frames, organizations skip dealing with that messy real-world data and simply train their models on the best available subset.
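To make this mismatch concrete, here is a minimal, hypothetical Python sketch that compares one property of a curated training subset against messier production data; the document-length numbers and the significance threshold are invented for illustration and are not drawn from any real project.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(42)

    # Stand-ins for the "best available subset" used for training and the
    # messier documents the deployed model actually sees (both invented).
    train_doc_lengths = rng.normal(loc=500, scale=50, size=1000)
    production_doc_lengths = rng.normal(loc=800, scale=300, size=1000)

    # A two-sample Kolmogorov-Smirnov test flags whether the two samples
    # plausibly come from the same distribution.
    result = ks_2samp(train_doc_lengths, production_doc_lengths)
    if result.pvalue < 0.01:
        print(f"Distribution shift detected (KS statistic {result.statistic:.2f}); retrain or resample.")
    else:
        print("Training data still resembles production data.")

A check like this won’t fix a messy dataset, but it can surface the gap between the lab and the real world before the model is pushed to production.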

Organizations are now realizing that jumping the gun from testing to production does projects more harm than good. The result is a more cautious approach to AI that aims to test a real project in the real world, even if it is less ambitious and glamorous than a grander effort. This allows the organization to see how the AI model actually delivers value in the real world before committing it to a larger, riskier project.

This approach is well suited to scalable, highly repetitive, low-risk AI projects with simple use cases, such as classifying documents using natural language processing. For example, an organization might aim to automate document classification, separating PDFs into invoices, offers, and contracts using NLP, image recognition, or some other cognitive technology. The goal is to take humans out of sorting those documents manually, thereby speeding up the process, reducing errors, lowering costs, and eliminating bottlenecks.

When the organization deploys the first version of the model to perform a small part of this task, it may find that the model works but does not actually save any human work: workers still spend the same amount of time classifying documents. At this point, you might conclude that AI isn’t working, but that is not necessarily the case. Perhaps you built the wrong type of model, or the documents used for training were not representative of real-world documents. This incremental, iterative approach is popularized through methodologies such as the Cross-Industry Standard Process for Data Mining (CRISP-DM) and Cognitive Project Management for Artificial Intelligence (CPMAI).
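As a deliberately tiny, hedged sketch of the document-classification use case above, the Python snippet below trains a TF-IDF plus logistic-regression pipeline on a handful of made-up sentences labeled as invoices, offers, and contracts; the data, labels, and choice of model are illustrative assumptions, not the approach of any particular organization.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Placeholder training data: six invented sentences and their labels.
    documents = [
        "Invoice #1042: amount due within 30 days",
        "We are pleased to offer you the following terms and pricing",
        "This contract is entered into by and between the parties",
        "Invoice total: $4,500, payable upon receipt",
        "Special offer: discounted pricing valid through the third quarter",
        "The parties agree to the terms set forth in this agreement",
    ]
    labels = ["invoice", "offer", "contract", "invoice", "offer", "contract"]

    # Bag-of-words features feeding a simple linear classifier.
    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    classifier.fit(documents, labels)

    print(classifier.predict(["Payment of this invoice is due by June 1"]))

A model like this can look fine on its curated examples and still save no one any time on real PDFs, which is exactly the gap the iterative approach is meant to surface early.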

“Think Big. Start Small. Iterate Often”

Even if the enterprise model meets the project’s expectations, it can still fail because the way the model is actually deployed does not match the way you assumed it would be. Say you are building a model that uses facial recognition to unlock a phone, and the model runs on an edge device that requires internet or cloud access to work. If you are walking in a forest with poor reception and try to view a map you have saved, you won’t be able to unlock your phone. The system then has to fall back on non-AI backups such as passcodes or other access methods. The model works in some cases but not in many others, jeopardizing the project’s ROI.
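A hypothetical sketch of that failure mode, in Python: the unlock flow tries a cloud-hosted face-match call and degrades to a passcode when the device is offline. Every function name here is an invented placeholder, not a real API.

    def cloud_face_match(face_image: bytes) -> bool:
        # Placeholder for a network call to a hosted face-recognition model.
        raise TimeoutError("No reception: the cloud model is unreachable")

    def prompt_for_passcode() -> bool:
        # Placeholder for the non-AI backup: ask the user to type a passcode.
        return True

    def unlock_phone(face_image: bytes, has_connectivity: bool) -> bool:
        if has_connectivity:
            try:
                return cloud_face_match(face_image)  # AI path: only works online
            except TimeoutError:
                pass  # degrade gracefully instead of locking the user out
        return prompt_for_passcode()  # non-AI backup path

    print(unlock_phone(b"selfie", has_connectivity=False))  # unlocks via passcode

If the backup path ends up handling most unlocks, the AI part of the system delivers little of the value that justified the project in the first place.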

One of the main goals of AI systems is to provide real-world functionality, leveraging machine learning and other approaches to take on tasks that would otherwise be challenging, labor-intensive, or require human decision-making. The goal, then, is not to fail fast and break things often, but rather to iterate steadily toward success, even if the end result isn’t the kind of disruption that forms the cornerstone of Silicon Valley’s ethos.
