Sahar Cohen
Managing an AI project: how to minimize the risk by reducing uncertainty

Whether it’s a single AI solution for a non-tech organization or AI features for a tech giant, AI projects tend to go through the following three phases:
Ideation, in which the subject of the project is decided and the AI task is characterized.
POC-ing, in which research is conducted in order to build a working proof of concept (POC) that can be evaluated in terms of the value it can produce.
Productization, in which the POC is engineered as part of a working system.
AI projects can consume a considerable amount of effort, and since the very essence of these projects is innovation, success rates (measured from idea to delivery) are typically low. Consequently, many AI projects that start with a burst of high energy end up as failures. In this post, we use the three typical phases of an AI project to explore a minimal-risk approach to AI project management.
The basic observation that supports our proposed approach is that the effort required to complete each of the three phases tends to increase as the project progresses: you can generally complete the ideation phase within a few days; POC-ing may take a few weeks; and productization usually consumes a few months. On the other hand, as the project progresses, the amount of uncertainty drops drastically. Let us examine each of these phases and see what can be done to reduce uncertainty.
AI Ideation
AI is all about automating your decision making. The first and probably most important consideration in any AI project is a precise understanding of the repetitive decision-making process that is to be automated. Reaching this understanding might take some time and involve data research, discussions, and simulation. But the bottom line is that if you cannot accurately define the repetitive decision-making process you want to automate, you still don’t know what you are trying to build, and you are therefore in a situation of maximum uncertainty.
Here are some questions that might be helpful at this stage:
What does a single decision-making instance look like (e.g., a credit card transaction that we need to characterize as either legitimate or fraudulent)?
What are the potential decisions for this type of instance?
What are the consequences of each decision?
How do you measure the attractiveness of each possible consequence?
Can you pre-tailor the right decision for every possible decision-making instance?
Given a black box that always makes the right decision, how would you improve your business performance? What would the magnitude of such improvement be?
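For the last question, a back-of-envelope calculation is usually enough at this stage. Below is a minimal sketch using the credit-card-fraud example above; every figure in it is a hypothetical assumption, not real data:

```python
# Rough upper bound on the value of a perfect decision-maker ("black box"),
# using the credit-card-fraud example. All figures are hypothetical.

monthly_transactions = 1_000_000   # assumed transaction volume
fraud_rate = 0.002                 # assumed share of fraudulent transactions
avg_fraud_loss = 120.0             # assumed loss per undetected fraud, in dollars
current_detection_rate = 0.60      # assumed share caught by the current process

fraud_cases = monthly_transactions * fraud_rate
missed_today = fraud_cases * (1 - current_detection_rate)

# A perfect black box catches every fraudulent transaction, so its
# upper-bound value is the loss the current process fails to prevent.
upper_bound_value = missed_today * avg_fraud_loss
print(f"Upper bound on monthly savings: ${upper_bound_value:,.0f}")
```

If even this optimistic upper bound is too small to justify the expected effort, you have learned something valuable before spending a single day on research.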
Once you have convincing answers to all of the questions above, you should have a far better idea of the problem you are solving. But before you can start POC-ing, you must be certain that you have the relevant data to solve the task you defined. Basically, you will need access to realizations of historic decision-making instances. Generally, the more realizations you can collect, the higher the chances that you can train an AI model to automate the decision-making process. Each realization should come with a set of explaining variables (often called features), and these variables should be informative enough to describe the realization in a way that supports making the right decision.
In many cases, a good way to validate this is to have a domain expert look at a few instances. If the domain expert can make the right decision based on the available variables, it is highly likely that an algorithm will be able to learn a decision-making policy as well. In addition to explaining variables, each realization should come with some indication of the right decision. It is helpful to ask yourself the following questions:
How many realizations of decision-making instances do I have?
What are the available explaining variables?
In which systems are these variables stored?
What is the quality of the data? Is any of the data missing or incorrect? Does it require significant pre-processing?
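A short data audit can answer most of these questions quickly. The sketch below assumes the historic decision-making instances have already been exported to a single table; the file and column names are hypothetical:

```python
import pandas as pd

# Minimal data audit for the questions above.
# "historic_decisions.csv" and its columns are hypothetical.
df = pd.read_csv("historic_decisions.csv")

print(f"Number of realizations: {len(df)}")           # how much data do we have?
print(df.dtypes)                                      # which explaining variables exist?
print(df.isna().mean().sort_values(ascending=False))  # share of missing values per variable

# Show a handful of instances so a domain expert can check whether the
# right decision can be made from the available variables alone.
print(df.sample(5, random_state=0))
```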
By the end of the ideation phase, you should have:
A clear definition of the decision-making process that you want to automate, and
A good understanding of the data sources and their quality.
AI POC-ing
The purpose of the POC-ing phase is to build a real mechanism that will automate the decision-making process. While building the POC, we simplify our work by ignoring some of the technical aspects, such as integration with other systems, continuous updating of the AI model, and the monitoring of the entire process. The things we ignore are important for a production system, but from a functional perspective, they shouldn’t affect the validity of the solution designed for the business. The typical steps during this phase are as follows:
Data preparation: integrate the data sources into a single file, in which every row represents a single decision-making instance.
Data QA: validate the preparation process and search for quality issues in the data (mainly: missing values and outliers).
Initial data exploration: examine the important features and, in particular, search for correlations between explaining features and the right decision.
Modeling: search for the right AI algorithm and use it to model the decision.
Validation: use independent data (test set) to measure the performance of the model on new instances.
Conclusion: compare the performance of the model to predefined acceptance criteria in order to make a go/no-go decision.
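To make the last three steps concrete, here is a minimal sketch for a binary decision such as the fraud example, assuming the prepared dataset is already fully numeric; the file name, label column, and acceptance threshold are all hypothetical:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# One row per decision-making instance; "label" holds the right decision.
# File name, column names, and the acceptance threshold are hypothetical.
df = pd.read_csv("poc_dataset.csv")
X, y = df.drop(columns=["label"]), df["label"]

# Validation requires an independent test set, held out before modeling.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Modeling: start with a simple baseline before anything complex.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Conclusion: compare performance to the predefined acceptance criterion.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
ACCEPTANCE_AUC = 0.85  # hypothetical go/no-go threshold
print(f"Test AUC: {auc:.3f} -> {'go' if auc >= ACCEPTANCE_AUC else 'no-go'}")
```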
By the end of the POC-ing phase, you should have a very clear idea of the value the project can deliver to the business. This understanding should be a good basis for productizing the model.
Since POC-ing can require significant effort, the right way to run it is to first search for important correlations (if you can’t find any, the uncertainty increases) and only then move on to modeling. Start with simple modeling approaches before attempting more complex solutions.
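A cheap first screening can be as simple as ranking the explaining variables by their correlation with a numeric label, as in this hypothetical sketch:

```python
import pandas as pd

# Correlation screening before any modeling effort.
# "poc_dataset.csv" and the "label" column are hypothetical; the label is
# assumed to be numeric (e.g., 0/1 for legitimate/fraudulent).
df = pd.read_csv("poc_dataset.csv")

correlations = (
    df.corr(numeric_only=True)["label"]
      .drop("label")
      .abs()
      .sort_values(ascending=False)
)
print(correlations.head(10))
# If no variable stands out here, treat it as a warning sign:
# the uncertainty about the project's feasibility has just increased.
```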
AI Productization
Productization of AI models is an engineering task. In this phase, we automate the training of the model and the predictions that this model produces, as part of a system. Typical steps during this phase are as follows:
Designing the training pipeline: automating data preparation and pre-processing, deciding on the re-training criteria, implementing the actual training, and automating quality testing.
Designing the prediction pipeline: automating the pre-processing of new instances and implementing business logic based on the model prediction.
Designing the monitoring of model performance.
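As a rough illustration of the first two pipelines, here is a minimal scikit-learn sketch, assuming a tabular binary decision as in the POC; file names, column names, and the business threshold are hypothetical:

```python
import joblib
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def train() -> None:
    """Training pipeline: pre-processing and fitting are bundled together,
    so re-training is a single, repeatable step."""
    df = pd.read_csv("training_data.csv")  # hypothetical export of fresh data
    X, y = df.drop(columns=["label"]), df["label"]
    pipeline = Pipeline([
        ("impute", SimpleImputer(strategy="median")),  # automated pre-processing
        ("scale", StandardScaler()),
        ("model", LogisticRegression(max_iter=1000)),
    ])
    pipeline.fit(X, y)
    joblib.dump(pipeline, "model.joblib")  # quality gates and versioning go here

def predict(new_instances: pd.DataFrame) -> pd.Series:
    """Prediction pipeline: the same pre-processing runs automatically,
    with business logic layered on top of the raw model score."""
    pipeline = joblib.load("model.joblib")
    scores = pipeline.predict_proba(new_instances)[:, 1]
    # Hypothetical business rule: act only on high-confidence predictions.
    return pd.Series(scores > 0.9, index=new_instances.index, name="flag")
```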
By the end of this phase, you should have a fully working system.
These three phases usually require an increasing level of effort; however, this is compensated for by a decreasing level of uncertainty. Moving to the next phase is not a must, but rather a decision to be made based on a better understanding of the potential value versus the expected effort. Spending a few days thinking about ideas that turn out to be valueless is no big deal; in fact, it’s impossible to innovate without doing exactly that. But you really don’t want to fully engineer a complex system, at significant effort, only to find no significant value once you are done.
Takeaway:
AI projects can be time-consuming. It is perfectly reasonable for some of your ideas to turn out to be invalid, and it is even okay to do some research and find out that you cannot attain a business-valuable solution at this time. However, it is not okay to proceed through the flow of the project without minimizing the level of uncertainty at each phase. Don’t conduct significant research if you don’t know what you are trying to solve, and definitely don’t productize and engineer a solution if you cannot prove that it’s useful to the business.