Coming Soon

The complete weather SDK for prediction markets.

Observations, forecasts, satellite, and settlement. All the data behind every Kalshi and Polymarket weather contract. Built for AI agents and quants.

MOSTLY RIGHT SDK · PREVIEW
$ pip install mostlyright

mostlyright 0.11.0 · ready

Cities............60+
History...........1948 → live
Forecasts.........NBM · GFS · HRRR · ECMWF
Satellite.........GOES-16 + 19
Markets...........Kalshi · Polymarket
Market data.......candles · L2 books (full depth)
Settlement........NWS CLI (LST-aware)
Reconciliation....source-priority dedup
Precision.........T-group (decimal)
Output............TOON · pandas · polars · json

$ _

The Problem

Public data is free. The engineering isn't.

Plumbing, not strategies.

Market prices live in one API, weather sits in another. Satellites publish NetCDF, forecasts arrive as GRIB. You spend months on engineering before you make a single trade.

One reading. Four formats.

ASOS sensors record in Fahrenheit. Live APIs serve Celsius. Historical archives round to tenths of Celsius. Satellite reports in Kelvin. You reconcile four units for every single reading.
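The arithmetic is trivial; the discipline is doing it the same way for every source, every time. A minimal sketch of the reconciliation (the function names and source labels are illustrative, not the SDK's API):

```python
# Normalize the four unit conventions to one canonical Fahrenheit value.

def f_from_c(c):
    """Live APIs serve Celsius."""
    return c * 9 / 5 + 32

def f_from_tenths_c(tenths):
    """Historical archives store tenths of a degree Celsius."""
    return f_from_c(tenths / 10)

def f_from_k(k):
    """Satellite brightness temperatures arrive in Kelvin."""
    return f_from_c(k - 273.15)

# The same 77.0 °F observation as each source would report it:
readings = {
    "asos_f": 77.0,                            # ASOS sensor, Fahrenheit
    "api_c": f_from_c(25.0),                   # live API, Celsius
    "archive_tenths_c": f_from_tenths_c(250),  # archive, tenths of °C
    "satellite_k": f_from_k(298.15),           # satellite, Kelvin
}
assert all(abs(v - 77.0) < 1e-6 for v in readings.values())
```

Miss one conversion, or round at the wrong step, and a contract that settles at a one-degree threshold settles against you.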

Same city. Different truth.

Kalshi's NYC high-temp settles on Central Park ASOS. Polymarket settles on Wunderground LaGuardia. They cover the same city but use different data. Every new market takes days of research.

NOAA publishes satellite data for free. It arrives as 3.7 million NetCDF files across two GOES satellites. A new scan every five minutes, each one a binary blob designed for climate researchers, not traders. You need to reproject the grid, deduplicate overlapping scans, validate against ground stations, and reconcile units — all before you get one clean number. We handle that entire pipeline.


Weather

Every signal that drives a weather market.
Cleaned, reconciled, and settlement-grade.

Kalshi and Polymarket run weather contracts across 60+ cities. Both settle on official station data. We pull every source into one API, reconcile the units automatically, and match each reading to the right settlement station.

Source-priority dedup removes duplicates before they reach your model. Live data returns the same shape as historical, so backtests carry straight into production. Settlement windows follow Local Standard Time. Climate records are held until the NWS publishes them. Every reading passes validation before it ships: bad data, zeros, and NaNs get caught at ingest, not in your backtest.
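Source-priority dedup has a simple core idea: when multiple sources report the same station at the same timestamp, keep the reading from the most trusted source. A sketch of that semantics (the priority ordering and field names here are assumptions for illustration, not the SDK's actual configuration):

```python
# Assumed priority order, most trusted first.
PRIORITY = ["nws_cli", "asos", "iem", "open_meteo"]

def dedup(readings):
    """Keep one reading per (station, timestamp), preferring
    higher-priority sources when readings collide."""
    rank = {src: i for i, src in enumerate(PRIORITY)}
    best = {}
    for r in readings:
        key = (r["station"], r["ts"])
        if key not in best or rank[r["source"]] < rank[best[key]["source"]]:
            best[key] = r
    return list(best.values())

readings = [
    {"station": "KNYC", "ts": "2024-07-04T18:00Z",
     "source": "open_meteo", "temp_f": 88.0},
    {"station": "KNYC", "ts": "2024-07-04T18:00Z",
     "source": "asos", "temp_f": 87.1},
]
deduped = dedup(readings)
assert len(deduped) == 1
assert deduped[0]["source"] == "asos"  # ASOS outranks the gap-fill source
```

The point of doing this at ingest is that the collision never reaches your model: your backtest and your live strategy see one reading, from one agreed-upon source.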

Ingest: AWC · IEM · NCEI GHCNh · NWS CLI · IEM MOS · Open-Meteo · GOES-16 · GOES-19
60+ cities
Kalshi + Polymarket
77 years of history
since 1948
4 forecast models
NBM · GFS · HRRR · ECMWF

Markets

Query Kalshi and Polymarket like one exchange.

We normalize Kalshi and Polymarket into one consistent schema. Query tables directly for raw data, or call snapshot() and get everything an agent needs in a single response.

candles: Full history at native cadence. Both platforms share one schema.
book_snapshot: Returns the Level-2 order book at any point in time.
pairs: Links each contract to the weather station and settlement window that resolves it.
cli_record: The NWS record that settled the contract.
markets: Unifies Kalshi and Polymarket. One contract ID works across both platforms.
snapshot(): Returns observations, climate, and settlement data for a city in a single call.
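What "one consistent schema" means in practice: platform-specific candle fields map onto one shared record. A sketch of that normalization (the raw field names below are illustrative assumptions about each platform's payload, not confirmed API shapes):

```python
# Map each platform's raw candle into the shared schema
# the `candles` table exposes.

def from_kalshi(c):
    return {"platform": "kalshi", "ts": c["end_period_ts"],
            "open": c["open"], "high": c["high"],
            "low": c["low"], "close": c["close"],
            "volume": c["volume"]}

def from_polymarket(c):
    return {"platform": "polymarket", "ts": c["t"],
            "open": c["o"], "high": c["h"],
            "low": c["l"], "close": c["c"],
            "volume": c["v"]}

k = from_kalshi({"end_period_ts": 1720116000, "open": 55, "high": 60,
                 "low": 54, "close": 59, "volume": 1200})
p = from_polymarket({"t": 1720116000, "o": 0.55, "h": 0.60,
                     "l": 0.54, "c": 0.59, "v": 800})
assert set(k) == set(p)  # identical keys: one query works on both
```

Once both platforms emit the same keys, a strategy written against Kalshi candles runs unchanged against Polymarket candles.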

Built for agents

What your agent writes.

Designed around two questions an agent has to answer correctly or lose money: what would I have known at time T, and what does this number actually mean?

Without Mostly Right

# scrape AWC METAR text, parse manually
# pull IEM CSV for gap-fill, dedup somehow
# pull NCEI GHCNh for historical backfill
# parse NWS CLI for settlement records
# stream GOES NetCDF from S3, project per pixel
# wire it to Kalshi and Polymarket
# convert units, handle nulls and sentinels
# hope the live data matches the historical
# discover at trade time that it doesn't

With Mostly Right

from mostlyright import MostlyRightClient
client = MostlyRightClient()

# Temporally-honest query
snap = client.snapshot("NYC", as_of="2024-07-04T18:00:00Z")

snap.observations        # filtered to LST window
snap.climate             # CLI record, or None
snap.version             # reproducibility token

# Agent-native context
# Anthropic-compatible tool definitions
tools = client.as_tools()
# ~60% fewer tokens than JSON
toon = client.snapshot("NYC", format="toon")
TOON: Compresses a year of daily pairs into ~24K tokens, ~60% smaller than JSON.
as_tools(): Drops SDK methods into any Anthropic tool-use call. Zero glue code.
as_of: Pins every query to a timestamp. Replay any backtest exactly.
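For context on as_tools(): Anthropic's tool-use API expects each tool as a dict with name, description, and a JSON Schema input_schema. The exact definitions as_tools() emits are an assumption here; this sketch shows the shape a snapshot tool would plausibly take:

```python
# Illustrative tool definition in the format Anthropic's
# tool-use API accepts. The field contents are assumptions;
# the {name, description, input_schema} shape is Anthropic's.
snapshot_tool = {
    "name": "snapshot",
    "description": "Observations, climate, and settlement data "
                   "for a city, optionally pinned to a timestamp.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "as_of": {"type": "string", "format": "date-time"},
        },
        "required": ["city"],
    },
}
# A list of definitions like this can be passed as `tools=` in an
# Anthropic messages.create call; the model then emits tool_use
# blocks your agent loop dispatches back to the client.
```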

What's Next

One SDK. Every prediction market vertical.

We built this SDK because we trade these markets ourselves. Weather is the first vertical we ship. Every vertical that follows gets the same depth: the same source discipline, the same settlement-grade accuracy, and the same agent-ready schemas. We only ship a vertical when we understand it well enough to trade it.

Sports

Game results, player stats, and injury reports from official league feeds. NFL, NBA, MLB, NHL, and soccer. Every resolution traces back to its primary source so your agent never trades on unverified data.

Economics

FOMC decisions, jobs reports, CPI, GDP. We parse the original filings and PDFs so your agent gets structured data, not documents. Every release pinned to its publication time so backtests match what the market actually saw.

Esports

Match results, tournament brackets, and player performance extracted from league APIs and broadcast feeds. CS2, League of Legends, Dota 2, Valorant. We structure the complicated formats so your agent gets clean, queryable data.

Commodities

Oil, gas, metals, and agriculture from EIA, USDA, and CME. Price reports and inventory data normalized into the same schema your agent already uses for weather. Same SDK, same settlement discipline.


You find the edge.
We handle everything else.

Join the waitlist for first SDK access and a free API tier.