Richard Golian

1995-born. Charles University alum. Head of Performance at Mixit. 10+ years in marketing and data.

#myjourney #myfamily #health #cognition #philosophy #digital #artificialintelligence #darkness #security #finance #politics #banskabystrica #carpathians




Hi, I am Richard. On this blog, I share thoughts, personal stories — and what I am working on. I hope this article brings you some value.

AI sales forecast: 9 traps so far

Building an AI sales forecast — marketing attribution traps from one evening with an agent

By Richard Golian

Yesterday I could not tear myself away from the computer. When I lifted my head, it was half past eight in the evening. I had been sitting alone upstairs for about three hours.

I was teaching an AI agent that can work independently with data and code. The task: a short-term sales forecast — a predictive view of incoming orders and revenue.

The plan was simple.

Give the agent the data, the campaigns, the context, and let it forecast orders for the next thirty days, every morning. And teach it to understand why the number lands where it lands on a given day.

I decided to build this more robustly than this particular output strictly requires. The reason is broader than one prediction. Once the agent understands what revenue is made of — the tail of a season, a short-term push, an unexpected outage, the effect of overlapping campaigns — a whole field of possibilities opens up for what else I can put it to work on.

One thing was clear from the start. Throwing a pile of numbers at the agent is not enough. For the result to be usable, it has to understand the connections between them. It has to be able to answer "if that seasonal campaign were not running, what would the chart look like?". It has to say "this mid-month peak we are expecting because of a retention campaign ending in two weeks". It has to answer what-if questions and return believable simulations.

The goal is clear.

Another step toward the state where your AI agent joins the team. Get the agent to a level where someone else can say "fine, you take this over, I will do something else". It is not easy.

THE NAIVE FIRST VERSION

The starting position was this. The data warehouse keeps daily order aggregates. The project management tool stores campaigns with tags, start and end dates, types. The marketing plan provides year-on-year growth assumptions.

I gave it to the agent and it produced a formula:

baseline(2026-D) = actual(2025-D, weekday-aligned)
forecast(2026-D) = baseline × growth_target × campaign_multiplier
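That formula can be sketched in a few lines of Python. This is my reconstruction with illustrative names, not the agent's actual code; it assumes a 364-day offset, which is exactly 52 weeks and therefore keeps the weekday aligned:

```python
from datetime import date, timedelta

def weekday_aligned_baseline(actuals: dict, target_day: date) -> float:
    """Revenue from the same weekday roughly one year earlier.

    364 days is exactly 52 weeks, so the weekday stays aligned.
    `actuals` maps dates to realised daily revenue.
    """
    return actuals[target_day - timedelta(days=364)]

def forecast(actuals: dict, target_day: date,
             growth_target: float, campaign_multiplier: float) -> float:
    # forecast = baseline x growth assumption x campaign multiplier
    return weekday_aligned_baseline(actuals, target_day) * growth_target * campaign_multiplier
```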

It pulled the multiplier (the number you multiply the baseline by to reflect the impact of a campaign) from history. A day at the peak of a particular campaign historically showed some multiple of the revenue seen when no campaign was running, and seasonal holidays showed a different multiple.

At first glance it looked decent. Close enough to be worth tuning. I started building a dashboard so I could visualise the result while tuning it.

When I asked it to explain the logic and visualise the data, a several-hour battle began.

ROUND 1 — WHY WERE MY PROFILE MULTIPLIERS LYING?

The model flagged one of the campaigns as the strongest. That was wrong.

I wrote to it: "That is completely off. This thematic week is one of the weaker ones. The other campaign running in parallel has a much bigger impact."

The problem was in the baseline (the reference state against which campaign impact is measured). The multiplier was being computed as the ratio of (median of days when the campaign ran) to (median of other days). But "other days" included other parallel campaigns. The baseline was artificially inflated. The lift attribution (the increment in revenue assigned to a campaign) was distorted in both directions — some campaigns overstated, others understated, depending on which other campaigns happened to be running during their inactive days. In overlap periods — which is most of the year — the attribution was completely off.
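The distortion is easy to reproduce with toy numbers (every figure below is made up for illustration): once days with a parallel campaign leak into the "off" set, the median baseline inflates and the multiplier shrinks.

```python
from statistics import median

# Toy daily revenue: a true no-campaign day is worth about 100.
no_campaign   = [100, 102, 98]
campaign_a_on = [150, 155, 148]   # days when campaign A ran
parallel_b_on = [140, 145, 138]   # A off, but campaign B running

# Naive baseline: median of all days when A was NOT running.
# The parallel-B days sneak in and inflate it.
naive_baseline = median(no_campaign + parallel_b_on)
clean_baseline = median(no_campaign)

naive_mult = median(campaign_a_on) / naive_baseline
clean_mult = median(campaign_a_on) / clean_baseline
# naive_mult understates A's lift because its baseline is inflated.
```

With these toy numbers the naive baseline is 120 instead of 100, so campaign A's multiplier drops from 1.5 to 1.25 for no real reason.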

After I objected, the agent rewrote the baseline definition to "median of days when no push campaign was running". But the result was not suitable as a starting point for analysis. There were few clean days. For some markets and weekdays I did not even have five examples. Campaigns overlap almost continuously.

ROUND 2 — WHY IS AD ATTRIBUTION ONLY THE TIP OF THE ICEBERG?

Then came the attempt to add more context. Measurable campaign impact via ad attribution (assigning orders to a specific ad) — conversions from the ad platforms.

Again, the agent could not interpret it correctly.

I wrote to it: "But you did not account for consent rate. How many people refuse cookies."

Through the ad platforms only a portion of orders gets matched. The rest goes through non-consent customers who refused cookies — they do not show up in the ad platforms, but they do in the order records. The agent knew about this gap, but did not include it in the prediction method.

After recalculation the campaign numbers rose to more realistic values.
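The correction itself is simple. A hedged sketch with an illustrative consent rate (the real figure is not in this article): ad-attributed conversions are scaled up by the share of customers who can actually be matched.

```python
def consent_adjusted_orders(ad_attributed: float, consent_rate: float) -> float:
    """Scale ad-platform conversions up to cover customers who refused
    cookies and are therefore invisible to the ad platforms.

    consent_rate is the share of orders that can be matched in the
    ad platforms; the values used below are illustrative only.
    """
    if not 0 < consent_rate <= 1:
        raise ValueError("consent_rate must be in (0, 1]")
    return ad_attributed / consent_rate

# e.g. 80 matched orders at a 70 % consent rate imply roughly 114 real orders
```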

And straight away we hit another layer.

ROUND 3 — HOW DOES ON-SITE COMMUNICATION CHANGE CONVERSION RATES?

I wrote to it: "A campaign does not have impact only through ads. When a campaign appears on the website, conversion lifts for everyone who arrives, not just clicks from ads. Including those from search referrals, direct, and so on."
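That on-site effect reduces to a simple formula, the one given in the FAQ at the end of this article: extra orders = (CR during - CR baseline) x sessions during. A minimal sketch with made-up numbers:

```python
def web_cr_uplift_orders(cr_during: float, cr_baseline: float,
                         sessions_during: float) -> float:
    """Extra orders from on-site campaign visibility.

    Applies to ALL sessions in the period (search referrals, direct,
    and so on), not only clicks arriving from ads.
    """
    return (cr_during - cr_baseline) * sessions_during

# Illustrative: CR 3.2 % during vs 2.5 % baseline over 50 000 sessions
extra = web_cr_uplift_orders(0.032, 0.025, 50_000)  # about 350 extra orders
```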


Summary

I am teaching an AI agent to forecast incoming orders and revenue. In one evening I fought through nine rounds: profile multipliers were lying; ad attribution overlooked the consent gap; on-site visibility motivated all visitors, not just ad clicks; keyword matching credited the wrong campaign; subtasks held different deadlines; switching off a campaign in scenario mode did not subtract its full size; new markets needed a rolling fallback; scenario mode had to hold its boundaries without persistence; and the chart library remembered the old drawing. The prototype runs on a dashboard with stacked decomposition, three-component measurement of campaign impact, and click-through to campaign level. Halfway there.
Richard Golian

If you have any thoughts, questions, or feedback, feel free to drop me a message at mail@richardgolian.com.


Common questions on this article's topic

Can an AI agent forecast sales and orders?
Yes, but not naively. I built an AI sales forecast with an autonomous agent that combines historical data, campaign records, and a marketing plan. The first version produced plausible-looking numbers and needed nine rounds of iteration before the agent grasped the consent gap, web CR uplift, mail attribution, and scenario mode.
How do you build an AI sales forecast?
Start with a weekday-aligned year-on-year model: take the actual figure from the same weekday last year and multiply by a growth assumption and a campaign multiplier. That is the naive baseline — the full system needs three campaign attribution components (ad-attributed, web CR uplift, mail attribution), per-market subtasks and a fallback for new markets without history.
What is the consent gap in ad attribution?
Cookies refused by the customer mean the order does not pass through the ad platforms, but does appear elsewhere in your own data. Without a consent coefficient an AI agent will underestimate campaigns, because it counts only ad-attributed conversions.
How do you measure the impact of a marketing campaign on conversion rate?
Use the formula extra_orders = (CR_during − CR_baseline) × sessions_during. Web CR uplift measures the effect of on-site campaign visibility on all visitors, including organic search and direct traffic, not just clicks from ads.
Can an AI agent replace a marketing analyst?
Not yet for the whole job, but for routine daily sales forecasting it can. The goal is to reach a state where someone can say: fine, the agent takes this over, I will do something else. The forecast still falls slightly short of an experienced manual estimate and is being tuned.
How do you forecast orders for a market with no history?
A conservative fallback: a rolling average across four matching weekdays, without applying the growth multiplier. Campaign multipliers still apply normally. Without this fallback the weekday-aligned year-on-year model returns zero for new markets, because no data exists in the previous year.
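The fallback described above can be sketched as follows (my illustrative naming, not the production code): average the last four same-weekday actuals, deliberately skip the growth factor, keep campaign multipliers.

```python
from datetime import date, timedelta

def new_market_fallback(actuals: dict, target_day: date,
                        campaign_multiplier: float = 1.0, weeks: int = 4) -> float:
    """Conservative forecast for markets with no year-ago history:
    a rolling average of the last `weeks` same-weekday actuals.
    The year-on-year growth factor is deliberately omitted;
    campaign multipliers still apply normally.
    """
    same_weekday = [target_day - timedelta(days=7 * k) for k in range(1, weeks + 1)]
    values = [actuals[d] for d in same_weekday if d in actuals]
    if not values:
        return 0.0
    return (sum(values) / len(values)) * campaign_multiplier
```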