

COVID-19 made your data set worthless. Now what?



The COVID-19 pandemic has perplexed data scientists and creators of machine learning tools, as the sudden and major change in consumer behavior has made predictions based on historical data nearly useless. There is also very little point in trying to train new prediction models during the crisis, as one simply cannot predict chaos. While these challenges could shake our perception of what artificial intelligence really is (and is not), they might also foster the development of tools that can adjust to such shifts automatically.

When it comes to predicting demand or consumer behavior, there is nothing in the historical data that resembles what we see now. Thus, a model based purely on historical data will try to reproduce “what is normal” and is likely to give inaccurate predictions.

Let me give you a simple analogy for the problem that data scientists and machine learning professionals are now experiencing. If you want to predict how long it will take to drive from A to B in London next Thursday at 18:00, you can ask a model that looks at historical driving times, possibly at various time scales. For instance, the model might look at the average speed on any day at around 18:00. It might also look at the average speed on a Thursday versus other days of the week, and at the month of April versus other months. The same reasoning can be extended to other time scales, such as one year, ten years, or whatever is relevant for the quantity you are trying to predict. This will help predict the expected driving time under “normal” conditions. However, if there is major disruption on that particular day, like a football game or a big concert, your traveling time might be significantly affected. That is how we see the current crisis in comparison with normal times.
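To make the multi-scale idea concrete, here is a toy Python sketch. The `trips` table and the equal blending of the three scales are assumptions made purely for illustration; a real model would learn how much each scale matters.

```python
import pandas as pd

# Toy illustration of multi-scale historical averaging.
# Assumes a hypothetical DataFrame `trips` with columns:
#   "timestamp" (datetime) and "duration_min" (observed driving time).

def expected_duration(trips: pd.DataFrame, when: pd.Timestamp) -> float:
    """Blend historical averages at several time scales."""
    t = trips.copy()
    t["hour"] = t["timestamp"].dt.hour
    t["weekday"] = t["timestamp"].dt.weekday
    t["month"] = t["timestamp"].dt.month

    # Average over all trips at roughly the same time of day...
    by_hour = t.loc[t["hour"] == when.hour, "duration_min"].mean()
    # ...over the same weekday (Thursday vs. other days)...
    by_day = t.loc[t["weekday"] == when.weekday(), "duration_min"].mean()
    # ...and over the same month (April vs. other months).
    by_month = t.loc[t["month"] == when.month, "duration_min"].mean()

    # Equal weights here are purely illustrative; a trained model
    # would learn how much each scale actually contributes.
    return (by_hour + by_day + by_month) / 3
```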

Perhaps unsurprisingly, many AI and machine learning tools deployed across various businesses – from transport to retail to professional services – are currently struggling to cope with massive changes in the behavior of both users and the environment. Clearly, one can try making prediction algorithms focus on smaller portions of the data. However, it is also pretty obvious that one cannot expect “normal” outcomes and the same quality of predictions as before.


What to do?

There is some good news for data scientists and their peers, though. Generally, data science solutions are built on historical data, but current, “extraordinary” data should come in when continually assessing the performance of those existing solutions. If performance starts to drop off consistently, that can be an indication that the rules have changed.
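As a rough illustration, this kind of check can be as simple as comparing recent errors against a “normal” baseline period. The window sizes and threshold below are illustrative assumptions, not a specific production recipe.

```python
import numpy as np

# Minimal sketch of performance monitoring, assuming we log one
# prediction error per day. All names and thresholds are hypothetical.

def rules_have_changed(daily_errors: list[float],
                       window: int = 14,
                       baseline: int = 90,
                       tolerance: float = 2.0) -> bool:
    """Flag a consistent drop-off: recent error far above the baseline."""
    errors = np.asarray(daily_errors)
    if len(errors) < baseline + window:
        return False  # not enough history to judge

    recent = errors[-window:].mean()  # e.g. the last two weeks
    normal = errors[-(baseline + window):-window]  # the "normal" period before
    # A consistent drop-off: recent error exceeds the baseline mean
    # by more than `tolerance` standard deviations.
    return recent > normal.mean() + tolerance * normal.std()
```

The point is that this flag, on its own, changes nothing downstream – it only tells us when the world the model was trained on no longer matches the world it is predicting.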

This performance monitoring is independent of the predictive systems for now – it tells us how things are doing, but will not change anything. However, I believe that we are now seeing a major push towards systems that can adjust automatically to the new rules. This is something we can call “adaptive goal-directed behavior”, which is how we define AI at Satalia. If we can make a system adaptive, then it will adjust itself based on current data when it recognizes performance dropping off. We have aspirations to do this, but we are not there just yet. In the short run, however, we can do the following:


  • Do not try to train a brand-new model from Day 1 of the crisis; it is pointless, because you cannot predict chaos;
  • Gather more data points and try to understand and analyze how the model is affected by the situation;
  • If you have data from a previous crisis with similar characteristics, train a model on that data and test it offline to see if it works better;
  • Make sure your training data is always up to date: every day, the newest day goes into the data and the oldest day drops out, like a sliding window, so the model gradually adjusts itself (see the sketch after this list);
  • Shrink the timeline of your dataset as much as possible without affecting your metrics. If your dataset covers a very long period, the model will take too long to adjust to the new reality; and
  • Manage client expectations. Make it clear that noise is making things very hard to predict, and that computing meaningful KPIs during this time is next to impossible.
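To illustrate the sliding-window idea from the list above, here is a minimal Python sketch. The 90-day window and the Ridge model are arbitrary, illustrative choices; the only point is that refitting on a rolling window lets the model gradually track the new reality.

```python
import numpy as np
from sklearn.linear_model import Ridge

WINDOW = 90  # keep the timeline as short as your metrics allow

def retrain_daily(X: np.ndarray, y: np.ndarray) -> Ridge:
    """Refit on the most recent WINDOW days only.

    Called once per day: the newest observation has entered X and y,
    and slicing to the last WINDOW rows drops the oldest one out.
    """
    X_recent, y_recent = X[-WINDOW:], y[-WINDOW:]
    return Ridge().fit(X_recent, y_recent)
```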

Clearly, building a model that is able to respond to extreme events may incur significant extra costs, and perhaps it is not always worth the effort. However, should you decide to build such a model, then extreme events should be considered during development and training. In that case, make sure to capture both the long- and short-term history of your data when training the model. Assigning different weights to long- and short-term information will enable the model to adapt more sensibly to extreme changes.
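One simple way to weight long- and short-term information, sketched below under assumed parameters, is to train with exponentially decaying sample weights, so that recent observations dominate while older history still anchors the model. The 30-day half-life is an illustrative assumption, not a recommendation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_with_recency_weights(X: np.ndarray, y: np.ndarray,
                             half_life_days: float = 30.0) -> Ridge:
    """Fit with exponentially decaying weights on older samples."""
    n = len(y)
    age = np.arange(n - 1, -1, -1)  # age in days; 0 = most recent sample
    # A sample half_life_days old counts half as much as today's.
    weights = 0.5 ** (age / half_life_days)
    return Ridge().fit(X, y, sample_weight=weights)
```

A short half-life makes the model react quickly to extreme changes at the cost of noisier estimates; a long one does the opposite, which is exactly the trade-off the weighting is meant to expose.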

In the long run, though, this crisis has reminded us that there are events so complex that even we humans struggle to understand them, let alone the predictive systems we have built to systematize our understanding of normal times. Even we humans need to adapt to this “new normal” by updating our own internal parameters to help us better forecast how long the weekly shop will take or choose a new optimal path when walking down the street. This adaptability is natural for us, and it is a feature we should constantly try to impart to our new silicon colleagues. Ultimately, we need to recognize that an AI solution can never be seen as a finished product in the ever-changing and uncertain world in which we live. How we enable AI systems to adapt as efficiently as we do – in terms of the number of data points required – is very much an open question, and its answer will define how helpful our technology can be during the extremely volatile times that might be ahead of us.

I thank my colleagues Alex Lilburn, Ted Lappas, Alistair Ferag, Sinem Polat, Jonas De Beukelaer, Roberto Anzaldua, Yohann Pitrey and Rūta Palionienė for providing insights and helping me to prepare this article.
