David BECK analyzes technological issues from a political, economic and management perspective.

AI-driven operations forecasting in data-light environments

David BECK Academic - Society, Politics & Technology

What do internal functions as diverse as risk assessment, capital-expenditure planning, and workforce planning have in common? Each is fundamentally about understanding demand — making demand forecasting an essential analytical process. Amid rising pressure to increase forecasting accuracy, more companies have come to rely on AI algorithms, which have become increasingly sophisticated in learning from historical patterns.

Too many companies still rely on manual forecasting because they think AI requires better-quality data than they have available.

Applying AI-driven forecasting to supply chain management, for example, can reduce errors by between 20 and 50 percent — and translate into a reduction in lost sales and product unavailability of up to 65 percent. Continuing the virtuous circle, warehousing costs can fall by 5 to 10 percent, and administration costs by 25 to 40 percent.

Automated AI-driven forecasting delivers these benefits by consuming real-time data and continuously identifying new patterns. This capacity enables fast, agile action because the model anticipates demand changes rather than merely responding to them.
In contrast, traditional approaches to demand forecasting require constant manual updating of data and adjustments to forecast outputs. These interventions are typically time-consuming and do not allow for agile responses to immediate changes in demand patterns.

Yet despite AI’s numerous advantages, organizations have faced challenges that limit its adoption. As of 2021, a solid majority — 56 percent — of surveyed organizations reported that they had adopted AI in at least one function. For many organizations, limited data availability — or limited usefulness of the data that are available — is still a problem.

While it’s generally true that more data can improve results, the experiences of companies with widely disparate levels of data quality show that most organizations have enough data to derive value from AI-driven forecasting. It’s a matter of building specific and actionable strategies to apply these models even in data-light environments:

  • Choosing the right AI model. In many instances, machine-learning (ML) models can test and validate multiple models to find the optimal choice, with minimal human involvement.
  • Leveraging data-smoothing and augmentation techniques. These techniques work when a period within a time series is not representative of the rest of the data.
  • Preparing for prediction uncertainties. Sophisticated scenario-planning tools that let people insert a wide range of parameters can help when forecasting models do not achieve satisfactory accuracy.
  • Incorporating external data APIs. This option is applicable when external data sources are necessary to inform the forecast values.

Choosing the right AI model

Having more historical data generally makes for more-robust forecasting, but long-running historical data are not available in every case. In such situations, a successful forecast provides reasonable outputs for cases with small sample sizes while maximizing the accuracy of outputs for cases with long-term historical data.

Moreover, other factors could further complicate the forecasting process. For example, patterns of seasonality may be very complex, varying on a weekly, monthly, and yearly basis. These patterns might also gradually change over time, owing to different business initiatives. Similarly, observed trends may be inconsistent over time and show multiple change points throughout.

In one application of this approach, testing a range of models with different complexity levels for every data set improved forecast accuracy by almost 10 percent for volume, and by about half that for average holding time. Overall, the approach reduced costs by about 10 to 15 percent while improving service levels by 5 to 10 percent, particularly by enabling faster transaction times.
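Model selection of this kind can be automated with a simple backtest: hold out the most recent periods, score each candidate model on them, and keep the winner. The sketch below is illustrative only, using three deliberately simple candidates (naive, moving average, exponential smoothing) and MAPE as the error metric; the function names and candidate set are this example's assumptions, not a specific product's API.

```python
import statistics

def mape(actual, pred):
    """Mean absolute percentage error between two equal-length sequences."""
    return statistics.mean(abs(a - p) / abs(a) for a, p in zip(actual, pred))

def naive_forecast(history, horizon):
    """Repeat the last observed value."""
    return [history[-1]] * horizon

def moving_average_forecast(history, horizon, window=3):
    """Repeat the mean of the most recent `window` observations."""
    return [statistics.mean(history[-window:])] * horizon

def exp_smoothing_forecast(history, horizon, alpha=0.5):
    """Simple exponential smoothing: repeat the final smoothed level."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return [level] * horizon

MODELS = {
    "naive": naive_forecast,
    "moving_average": moving_average_forecast,
    "exp_smoothing": exp_smoothing_forecast,
}

def select_model(series, horizon=3):
    """Backtest every candidate on the held-out tail of `series`
    and return the name of the model with the lowest MAPE."""
    train, test = series[:-horizon], series[-horizon:]
    scores = {name: mape(test, fn(train, horizon)) for name, fn in MODELS.items()}
    return min(scores, key=scores.get), scores
```

In practice the candidate set would include seasonal and ML models, and the backtest would use rolling rather than single-split validation, but the selection loop itself stays the same.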

Leveraging data-smoothing and augmentation techniques

Often, time-series data are influenced by anomalous periods that disrupt overall trend patterns and make it extremely difficult for any AI model to learn and forecast properly. Smoothing is a technique to reduce the significant variation between time steps. It removes noise and creates a more representative data set for models to learn from.
The impact of smoothing becomes more evident when the time-series data are affected by a particular event in the past that is not expected to recur regularly in the future.

Consider, for example, a retailer whose goal was to forecast sales in its stores. Although a drop in sales volume during April and May appeared to be a one-time event, it significantly affected the machine-learning process. The anomalous period had completely different seasonality and trend patterns from the rest of the time series, yet machine-learning models will not automatically treat such a period as anomalous. Instead, they try to learn from it alongside the rest of the series as they generalize the overall patterns. In this example, the anomalous period confused the model, which was unable to learn the intrinsic seasonality patterns as expected.
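One simple way to smooth such a one-off window, assuming the anomaly's start and end are known, is to replace its values with points interpolated between the last normal observation before it and the first normal observation after it. This is a minimal sketch of that idea; real pipelines would more often use rolling medians, seasonal decomposition, or anomaly-detection models to find the window automatically.

```python
def smooth_anomalous_window(series, start, end):
    """Replace the anomalous window series[start:end] with values linearly
    interpolated between series[start - 1] and series[end].

    Returns a new list; the input series is left unchanged."""
    left, right = series[start - 1], series[end]
    step = (right - left) / (end - start + 1)  # per-period increment
    smoothed = list(series)
    for i in range(start, end):
        smoothed[i] = left + step * (i - start + 1)
    return smoothed
```

For a series like [100, 102, 98, 20, 22, 101, 103], smoothing the window covering the two collapsed values yields a series the model can learn normal seasonality from, while the genuine observations on either side are preserved.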

Preparing for prediction uncertainties

Relying solely on statistical forecasts may not provide the business insight required. This is especially true for long-term forecasts, as unexpected events that affect trends and seasonality make it more difficult to learn from historical patterns. Given the inherent uncertainty of forecasting analysis in such cases, it is useful to complement the models with what-if scenarios.

What-if scenarios are especially important when data samples are too small, making forecasting with a high degree of confidence nearly impossible. They are also useful for dealing with the high uncertainty of forecasting over a period of time far into the future.

What-if scenario tools are particularly valuable when demand and supply patterns are volatile and multiple new business initiatives arise in close succession.
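At its core, a what-if tool applies user-supplied parameters on top of a base forecast and compares the resulting bands. The sketch below is a deliberately minimal illustration, not any vendor's tool: the parameter names (a flat demand uplift plus optional one-off promotion boosts) are assumptions chosen for the example.

```python
def scenario_forecast(base_forecast, demand_uplift=0.0, promo_boost=None):
    """Apply scenario parameters to a base forecast.

    demand_uplift: fractional shift applied to every period (0.10 = +10%).
    promo_boost: optional {period_index: extra_fraction} for one-off events.
    """
    promo_boost = promo_boost or {}
    return [
        round(v * (1 + demand_uplift) * (1 + promo_boost.get(i, 0.0)), 2)
        for i, v in enumerate(base_forecast)
    ]

def scenario_range(base_forecast, uplifts=(-0.10, 0.0, 0.10)):
    """Low/base/high bands across a set of demand assumptions."""
    return {u: scenario_forecast(base_forecast, demand_uplift=u) for u in uplifts}
```

Planners then review the low, base, and high bands side by side and stress-test decisions (staffing, inventory, capacity) against each, rather than committing to a single point estimate.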

Incorporating external data APIs

Externally sourced data can cover a variety of sources and content, including social-media activity, web-scraping content, financial transactions, weather forecasts, mobile-device location data, and satellite images. Incorporating these data sets can significantly improve forecast accuracy, especially in data-light environments. Such sources are an excellent option for supplying the inputs that AI-driven models need to produce reasonable outputs.

Providers of external data often offer their services through APIs, making it convenient for users to access data and integrate the data into their current processes.
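Integration typically amounts to two steps: pull the external series over the provider's API, then join it to internal history on a shared key such as date. The endpoint and field names below are hypothetical placeholders (there is no real `api.example.com` weather service); only the join pattern is the point, and a network failure deliberately degrades to internal data alone.

```python
import json
from urllib import request

WEATHER_API = "https://api.example.com/v1/daily"  # hypothetical endpoint

def fetch_external_features(url=WEATHER_API):
    """Fetch a hypothetical daily-weather feed and return {date: features}.
    On any network error, return an empty mapping so forecasting can
    proceed on internal data alone."""
    try:
        with request.urlopen(url, timeout=5) as resp:
            payload = json.load(resp)
    except OSError:
        return {}
    return {row["date"]: {"temp": row["temp"]} for row in payload.get("days", [])}

def join_features(sales, external):
    """Merge internal (date, sales) history with external features by date;
    dates missing from the external feed get a None placeholder."""
    return [
        {"date": d, "sales": s, **external.get(d, {"temp": None})}
        for d, s in sales
    ]
```

The same join works for any keyed external source (weather, foot traffic, search trends); the model simply sees extra feature columns alongside the internal history.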

After applying AI-driven forecasting across a multitude of industries with different data landscapes, companies have seen substantial performance improvements over traditional, spreadsheet-based forecasting. By leveraging AI techniques suited to data-light environments, companies can improve their operations significantly.