Hi, I am Richard. On this blog I share my thoughts, not investment advice. This is not a recommendation to buy or sell securities.
Building an AI Stock Market Prediction System That Grades Itself
Building an AI S&P 500 forecaster — and writing about it as I go
In July 2024 I wrote about the changing moods of the stock market. I described tracking valuation ratios alongside media narratives as a mood thermometer. That article ended with a line I have come back to often: the stock market is a reflection of collective human emotions and behaviours.
Two years later, that thermometer still mostly lives in a Google spreadsheet — valuation ratios in columns, my own commentary on what the financial press was saying alongside them. It works for me. It does not work for anyone else. And more importantly, it cannot be tested. I cannot point to a calibrated record of how often my readings have been right, how often they have been wrong, or whether I am better than a coin flip when I claim to see elevated valuation.
So I built something. I have been at it since five this morning. I am writing this at eight. The first version is now running on my own machine. The pipeline works end to end. It does not yet have enough graded predictions in it to tell me anything meaningful — that part is just beginning. This article is the first in a series that documents the build itself, and what the system tells me once the record starts to fill up.
WHAT IS A CALIBRATED PREDICTION SYSTEM BUILT ON?
Three older articles converge into the design of what I have built. Each is a separate idea. Together they form the spine.
The first is risk-reward asymmetry. Every prediction the system emits comes with explicit probabilities and a confidence number. It has to answer the question I keep asking myself out loud. If I am wrong, how much do I lose? If I am right, how much do I gain? And is the ratio in my favour?
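Those three questions collapse into one line of arithmetic. A minimal sketch — the function name and the example numbers are mine for illustration, not taken from the system:

```python
def expected_edge(p_win: float, gain: float, loss: float) -> float:
    """Expected value per unit staked: p(win) * gain minus
    p(lose) * loss. Positive means the asymmetry is in your favour."""
    return p_win * gain - (1.0 - p_win) * loss

# A 40 per cent call can still be worth taking if the payoff is
# lopsided enough: 0.40 * 3.0 - 0.60 * 1.0 = 0.60 per unit staked.
edge = expected_edge(0.40, 3.0, 1.0)
```

The point of writing it down is that the probability alone never answers the question; the ratio of gain to loss has to sit next to it.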
The second is decision quality over decision outcome. It runs through two earlier articles, Decision-Making in Marketing and Advertising Under Uncertainty and I make mistake after mistake. The primary metric is not hit-rate. It is calibration error. When the system says 70 per cent, does the world deliver 70 per cent? A predictor that says 95 per cent and is right 80 per cent of the time is more dangerous than one that says 70 per cent and is right 70 per cent of the time. The build enforces this in its own UI. Hit-rate is never reported without calibration error next to it. The numbers will only become meaningful once the record has enough graded predictions in it. A later article in this series will go into how the comparison is computed.
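To make the distinction concrete, here is a minimal sketch of the two metrics side by side. The record shape and the bin count are illustrative assumptions, not the system's actual schema:

```python
from collections import defaultdict

def hit_rate(preds: list[dict]) -> float:
    """Fraction of predictions that came true."""
    return sum(p["correct"] for p in preds) / len(preds)

def calibration_error(preds: list[dict], n_bins: int = 5) -> float:
    """Mean absolute gap between stated confidence and observed
    frequency, weighted over equal-width confidence bins -- a simple
    expected-calibration-error estimate."""
    bins = defaultdict(list)
    for p in preds:
        bins[min(int(p["confidence"] * n_bins), n_bins - 1)].append(p)
    total_gap = 0.0
    for members in bins.values():
        stated = sum(p["confidence"] for p in members) / len(members)
        observed = sum(p["correct"] for p in members) / len(members)
        total_gap += abs(stated - observed) * len(members)
    return total_gap / len(preds)

# The dangerous predictor: always says 95 per cent, right 80 per cent
# of the time. The honest one: says 70, delivers 70.
overconfident = [{"confidence": 0.95, "correct": i < 8} for i in range(10)]
honest = [{"confidence": 0.70, "correct": i < 7} for i in range(10)]
```

Run on these made-up records, the overconfident predictor wins on hit-rate and loses badly on calibration, which is exactly why the two numbers have to travel together.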
The third is the mood thermometer. I described it as my way of reading the market — partly through how expensive it was against its own history, and partly through how the financial press was talking about it. I returned to both halves of it later, in The Stock Market Hums with Hope and Do you know what CAPE is?. In the first phase of the build, the system formalises only the valuation half. It computes the CAPE percentile against the full distribution since 1871. It classifies the market into one of five regimes. Every smart prediction is conditioned on the regime it was made in. The narrative half stays in the spreadsheet, for now.
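The percentile computation itself is simple; what matters is ranking today's reading against the full distribution since 1871 rather than a recent window. A sketch, with invented sample values:

```python
def cape_percentile(history: list[float], current: float) -> float:
    """Percentile rank of the current CAPE within its full history:
    the share of past readings at or below today's value."""
    at_or_below = sum(1 for v in history if v <= current)
    return 100.0 * at_or_below / len(history)
```

A reading in the 90s of this percentile is what the regime classifier later treats as elevated valuation.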
WHAT THE FIRST VERSION ACTUALLY DOES
Daily, on my own machine, the build ingests S&P 500 OHLCV data, FRED macro indicators, and the Shiller CAPE series. It also pulls valuation fundamentals from yfinance.
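The ingest step mostly reduces to aligning three feeds with different frequencies: daily prices, daily-or-weekly macro series, and a monthly CAPE series. A sketch of the merge — the field names and inputs are invented stand-ins, and the real schema is richer:

```python
from dataclasses import dataclass

@dataclass
class DailySnapshot:
    date: str              # ISO date of the trading day
    close: float           # S&P 500 close
    ten_year_yield: float  # an example FRED macro series
    cape: float            # Shiller CAPE, monthly, carried forward

def merge_sources(closes: dict, macro: dict, cape_monthly: dict) -> list:
    """Align the three feeds on trading days. The Shiller series is
    monthly, so each day reuses the latest month's value."""
    snapshots = []
    last_cape = None
    for date in sorted(closes):
        month = date[:7]                         # "YYYY-MM"
        last_cape = cape_monthly.get(month, last_cape)
        if date in macro and last_cape is not None:
            snapshots.append(DailySnapshot(
                date, closes[date], macro[date], last_cape))
    return snapshots
```

Forward-filling the monthly series is the one decision in this step that matters: every trading day gets graded against the valuation data that was actually available at the time.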
It then computes valuation features. Trailing and forward P/E. Price to book. Dividend yield. CAPE percentile against the long historical distribution. From those features it labels today's regime, choosing one of five: low-volatility uptrend, high-volatility correction, range-bound, elevated valuation, or cyclical trough.
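Regime labelling can be sketched as a small decision rule over the computed features. The thresholds below are illustrative placeholders, not the ones the system actually uses:

```python
def label_regime(cape_pct: float, trailing_return: float,
                 realized_vol: float) -> str:
    """Map features to one of the five regimes. Thresholds here are
    placeholders: 80th/20th CAPE percentiles, 20% annualised vol."""
    if cape_pct >= 80.0:
        return "elevated valuation"
    if cape_pct <= 20.0 and trailing_return < 0:
        return "cyclical trough"
    if realized_vol >= 0.20:
        if trailing_return < 0:
            return "high-volatility correction"
        return "range-bound"
    if trailing_return > 0:
        return "low-volatility uptrend"
    return "range-bound"
```

The precedence order is itself a design choice: valuation extremes override trend and volatility, so a calm uptrend at the 95th CAPE percentile is still labelled elevated.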
It then emits predictions for the S&P 500 across six horizons, from one day to twelve months. Each prediction is a probability distribution with a calibrated confidence number attached. It is not a single number.
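Structurally, each prediction can be thought of as a dated record holding a distribution over outcome buckets. The horizon day counts and bucket edges below are my illustrative choices, not the system's published ones:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Six horizons, one day to twelve months (day counts approximate).
HORIZONS = {"1d": 1, "1w": 7, "1m": 30, "3m": 91, "6m": 182, "12m": 365}
BUCKETS = ("down >5%", "down 0-5%", "up 0-5%", "up >5%")

@dataclass
class Prediction:
    made_on: date
    horizon: str       # key into HORIZONS
    regime: str        # regime label the call was conditioned on
    probs: dict        # bucket -> probability, must sum to 1
    confidence: float  # calibrated confidence attached to the call

    @property
    def review_date(self) -> date:
        """The date at which this prediction gets graded."""
        return self.made_on + timedelta(days=HORIZONS[self.horizon])
```

The review date is derived from the record itself, so nothing downstream gets to decide when grading happens.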
Each prediction is graded at its review date. The record is never edited. The system judges itself by aggregates, not single hits. A minimum sample of thirty predictions per metric is required before any number is considered meaningful. The record began today. The interesting part of this series begins once it stops being small.
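The minimum-sample rule is easiest to enforce at the reporting layer. A sketch, assuming a record shape of one small dict per graded prediction:

```python
MIN_SAMPLE = 30  # below this, report nothing at all

def report(graded: list[dict]):
    """graded is append-only: one {'confidence': float, 'correct': bool}
    record per prediction, written at its review date, never edited."""
    if len(graded) < MIN_SAMPLE:
        return None  # the record is still too small to mean anything
    hit = sum(g["correct"] for g in graded) / len(graded)
    stated = sum(g["confidence"] for g in graded) / len(graded)
    # hit-rate is never shown without a calibration figure beside it
    return {"n": len(graded),
            "hit_rate": hit,
            "calibration_gap": abs(stated - hit)}
```

Returning nothing at all below the threshold, rather than a number with a caveat, is deliberate: a caveat gets ignored, an absent number cannot be.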
If you have any thoughts, questions, or feedback, feel free to drop me a message at mail@richardgolian.com.