Data Platform

SKU-level demand forecasting at store granularity

Standing up a forecasting platform that gives category planners a defensible weekly view rather than a black-box number.

Typical duration
16-20 weeks across three pilot categories
Team shape
1 data engineer + 1 ML engineer + 1 analytics engineer + a delivery lead from your side

What good looks like

Forecast cadence
Weekly retraining with a planner review loop
Stockout direction
Meaningfully reduced in pilot categories where supply isn't the bottleneck
Planner workflow
Spreadsheets replaced with reviewable, annotated artefacts

The problem this addresses

Demand planning often lives in a long-lived spreadsheet workflow run by a small group of category planners with deep tacit knowledge. Organisations know they're carrying concentration risk, and previous attempts with off-the-shelf forecasting tools tend to stall because the data foundations underneath aren't consistent enough for the tools to be trusted. The kind of engagement we take on is deliberately scoped: build the platform, prove it on a handful of categories, and leave the planning team better off whether or not the modelling lifts accuracy materially.

How we'd approach it

We treat the data platform as the primary deliverable and the forecasting models as the secondary one. The first stretch goes into reconciling product, store, and sales hierarchies across source systems and writing the lineage down in a form the planning team can read. From there, per-category models combine a baseline statistical model (for interpretability) with a gradient-boosted residual model that incorporates weather, public holidays, promotions, and known event calendars. We resist moving to a single deep model across the catalogue: planner trust comes from being able to see why a forecast moved week-on-week, and that matters more than aggregate accuracy. Forecasts surface in a review tool where planners can override and annotate, and those overrides feed back into the next training run.
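To make the baseline-plus-residual decomposition concrete, here is a minimal sketch in pure Python. All names are hypothetical, the seasonal baseline is a simple same-week-last-year lookup, and a bucketed mean-residual corrector stands in for the gradient-boosted model; the real engagement would use a proper statistical baseline and a trained booster.

```python
from statistics import mean

def seasonal_baseline(history, season=52):
    """Interpretable baseline: same week last year, falling back to a trailing mean."""
    if len(history) >= season:
        return history[-season]
    return mean(history)

def fit_residual_model(rows):
    """Crude stand-in for the gradient-boosted residual model:
    mean residual per (holiday, promo) feature bucket.
    Each row is (actual, baseline, holiday_flag, promo_flag)."""
    buckets = {}
    for actual, base, holiday, promo in rows:
        buckets.setdefault((holiday, promo), []).append(actual - base)
    return {k: mean(v) for k, v in buckets.items()}

def forecast(history, holiday, promo, residuals):
    """Baseline plus learned correction; unseen buckets fall back to the baseline,
    so a planner can always see which component moved the number."""
    base = seasonal_baseline(history)
    return base + residuals.get((holiday, promo), 0.0)
```

The point of the two-part structure is reviewability: a planner can see the baseline and the correction separately, which is what makes a week-on-week move explainable.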

What we'd build

A platform with documented data contracts, weekly retraining pipelines, and a planner-facing review app. We typically prove it on three pilot categories; the remainder run on the legacy process and migrate over time, often by your internal team. Replenishment optimisation, supplier-side ordering integration, and in-store labour planning are explicitly out of scope: useful follow-ons, but not v1.
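One way the data contracts get written down is as typed records with explicit validation at the pipeline boundary, so bad rows are rejected before they reach a training run. A minimal sketch, with every field name illustrative rather than prescribed:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class SalesRecord:
    """Illustrative contract for a weekly sales feed (field names hypothetical)."""
    store_id: str
    sku: str
    week_start: date
    units_sold: int
    promo_active: bool

def validate(record):
    """Return a list of contract violations; an empty list means the row is accepted."""
    errors = []
    if record.units_sold < 0:
        errors.append("units_sold must be non-negative")
    if record.week_start.weekday() != 0:
        errors.append("week_start must be a Monday")
    return errors
```

Keeping the contract in version-controlled code, rather than in tribal knowledge, is also what lets an internal team migrate the remaining categories onto the platform later.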

Honest considerations

Forecasting is not the bottleneck for stockouts in categories where supplier lead times dominate; measuring the model on those categories in year one will produce misleading conclusions. If your point-of-sale data only arrives as a nightly extract, the discovery loop will be slower than it should be; insist on near-real-time access early. And if there's no appetite to change the planner workflow, a better model alone won't move the needle; the review tool is the change-management surface.