This is an unfiltered "think aloud" trace of a complete research project, capturing every question, pivot, disaster, and breakthrough across 1,603 timestamped events. This interactive flow diagram shows how an initial attempt to reproduce a real LLM phenomenon in a controlled synthetic system evolved into a broader investigation of how learned representations influence fine-tuning generalization.
The project began with a straightforward goal: create a 2D equirectangular map of cities with populations over 100,000. Initially inspired by the paper "Language Models Represent Space and Time," it quickly evolved into investigating a deeper question: What world representations emerge when training on data downstream of the world, and how do these representations adapt during fine-tuning?
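The equirectangular projection mentioned above maps longitude and latitude linearly onto x and y. As a minimal sketch of the idea (the function name and parameters are illustrative, not taken from the project):

```python
import math

def equirectangular(lat_deg, lon_deg, lat0_deg=0.0, radius=1.0):
    """Project (lat, lon) in degrees onto a flat 2D plane.

    Equirectangular projection: x is longitude scaled by the cosine of a
    reference latitude lat0, and y is latitude directly. With lat0 = 0
    this is just a linearly scaled lon/lat grid.
    """
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    lat0 = math.radians(lat0_deg)
    x = radius * lon * math.cos(lat0)
    y = radius * lat
    return x, y

# The origin of the map sits at (lat, lon) = (0, 0).
x, y = equirectangular(51.5, -0.13)  # e.g. London's coordinates
```

Plotting every city above the population threshold with this mapping yields the flat world map the project started from.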
What followed was a six-week journey involving, among other steps, checking out earlier commits with git checkout and rebuilding them.

This visualization contains the complete trajectory across all 1,603 events.
Research progress is tracked using a set of event types.
Training transformer models to learn spatial representations of world geography, from city visualizations to geodesic distance prediction and multi-task spatial reasoning. Hover over nodes to see the evolving research context at each moment.
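The geodesic distance prediction task presumably targets the great-circle distance between pairs of points on the globe. The standard way to compute such a target is the haversine formula; a sketch under that assumption (the exact targets used in training are not specified here):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, r_km=6371.0):
    """Great-circle distance in km between two (lat, lon) points,
    treating the Earth as a sphere of radius r_km."""
    p1 = math.radians(lat1)
    p2 = math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # Haversine of the central angle between the two points.
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r_km * math.asin(math.sqrt(a))

# London to Paris comes out at roughly 343 km.
d = haversine_km(51.5074, -0.1278, 48.8566, 2.3522)
```

A model that has learned a faithful 2D spatial representation of cities should find distances like this easy to predict from coordinates alone.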