We show that LLMs benefit from succinct, distilled documentation of SQL constructs that can be retrieved on demand, avoiding the need to parse and chunk web documentation or to overload context windows with full manuals. Using DuckDB as our primary case study, we analyze query logs from MotherDuck, a cloud analytics platform built on DuckDB, and show that real-world SQL extends far beyond analytical queries, with users frequently employing dialect-specific functionality that LLMs struggle with. We develop DuckDB-DocBench to evaluate six LLMs, finding that without additional knowledge they achieve only around 60% execution accuracy. A simple approach, in which LLM-generated function descriptions and examples are distilled from existing documentation and organized for targeted retrieval, improves accuracy by up to 34% and also benefits tasks adjacent to Text-to-SQL, such as query repair. To validate generalizability, we apply the same methodology to PostgreSQL (PG-DocBench), observing 15–26% relative improvement despite PostgreSQL's extensive presence in training corpora, with newer features showing larger gains.