Open Thoughts Project

A DataComp and Bespoke Labs community effort to curate the best open reasoning datasets.

Our first goal is to curate a reasoning dataset to train state-of-the-art small reasoning models that surpass DeepSeek-R1-Distill-32B and DeepSeek-R1-Distill-7B on math and code reasoning benchmarks.
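
For readers who want to look at curated reasoning data directly, here is a minimal sketch of loading a dataset with the Hugging Face `datasets` library. The dataset ID `open-thoughts/OpenThoughts-114k` is an assumption on our part (chosen to match the 114k row in the results below) and may differ from the actual release name.

```python
# A minimal sketch, not the project's official pipeline: load a curated
# reasoning dataset from the Hugging Face Hub and inspect one example.
# The dataset ID below is an assumption and may differ from the actual release.
from datasets import load_dataset

ds = load_dataset("open-thoughts/OpenThoughts-114k", split="train")

print(ds)      # row count and column names
print(ds[0])   # one curated problem with its reasoning trace and answer
```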

Latest Results

| Model | Dataset Size | AIME24 | AIME25 I | MATH500 | GPQA-D | LCBv2 |
|---|---|---|---|---|---|---|
| LIMO-32B | 0.8k | 56.7 | 49.3 | 86.6 | 58.1 | 60.0 |
| s1-32B | 1k | 36.0 | 25.3 | 84.8 | 50.5 | 40.9 |
| s1.1-32B | 1k | 64.7 | 49.3 | 89.0 | 60.1 | 65.5 |
| OpenThinker-32B | 114k | 66.0 | 53.3 | 90.6 | 61.6 | 68.9 |
| R1-Distill-32B | 800k | 76.7 | 55.9 | 89.4 | 57.6 | 71.2 |

The numbers reported in the table above were obtained with our open-source evaluation tool, Evalchemy.

About us

We are a team of researchers and engineers from Stanford, University of California Berkeley, University of Washington, Bespoke Labs, Juelich Supercomputing Center (JSC), LAION, UCLA, UNC Chapel Hill, and Toyota Research Institute, united around building the best datasets (and thus the best models). See our previous work at datacomp.ai and mlfoundations.

Open Thoughts is supported by Bespoke Labs, NSF IFML, the UT Austin Machine Learning Lab, Juelich Supercomputing Center, Toyota Research Institute, and Lambda Labs.

Announcements

Subscribe for updates
