
  JSON is Making You Lose Money!!! Slash LLM Token Costs with TOON Format


    [Figure: JSON vs TOON token explosion]

    Let's be real: every time you shove a bloated JSON blob into an LLM prompt, you're burning cash. Those curly braces, endless quotes, and repeated keys? They're token vampires sucking your OpenAI/Anthropic/Cursor bill dry. I've been there – cramming user data, analytics, or repo stats into prompts, only to hit context limits or watch costs skyrocket.

    But what if I told you there's a format that cuts tokens by up to 60%, boosts LLM accuracy, and was cleverly designed for exactly this problem? Meet TOON (Token-Oriented Object Notation), the brainchild of Johann Schopplich – a dev who's all about making AI engineering smarter and cheaper.

    Johann nailed it with TOON over at his original TypeScript repo: github.com/johannschopplich/toon. It's not just another serialization format; it's a lifeline for anyone building AI apps at scale.
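To make the difference concrete, here's a rough sketch of the idea in TypeScript: a toy encoder that lays out a uniform array of objects as a TOON-style table (header once, then one line per row). This is my simplified illustration of the format as I understand it, not the official `toon` encoder – check Johann's repo for the real API and the full spec.

```typescript
// Toy TOON-style encoder for a uniform array of objects.
// Simplified sketch of the idea, NOT the official library.
type Row = Record<string, string | number>;

function toToonTable(name: string, rows: Row[]): string {
  const fields = Object.keys(rows[0]);
  // Header declares the array name, row count, and field names once.
  const header = `${name}[${rows.length}]{${fields.join(",")}}:`;
  // Each row is just the values, comma-separated.
  const lines = rows.map((r) => "  " + fields.map((f) => String(r[f])).join(","));
  return [header, ...lines].join("\n");
}

const users: Row[] = [
  { id: 1, name: "Alice", role: "admin" },
  { id: 2, name: "Bob", role: "viewer" },
];

const json = JSON.stringify(users);
const toon = toToonTable("users", users);

console.log(toon);
// Field names appear once in the TOON header, but once PER ROW in JSON:
console.log(json.length, toon.length);
```

The payoff is obvious even at two rows: the TOON string is shorter than the JSON one because braces, quotes, and repeated keys are gone.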

    Why JSON is Robbing You Blind in LLM Prompts

    JSON is great for APIs and config files. But for LLM context? It's a disaster:

    • Verbose AF: Braces {}, brackets [], quotes around every key and string – all eating tokens.
    • Repeated Keys: In arrays of objects, every row repeats the same field names. 100 users? That's 100x "id", "name", etc.
    • No Built-in Smarts: LLMs have to parse all that noise, leading to higher error rates on retrieval tasks.
    • Token Explosion at Scale: A modest dataset can balloon to thousands of unnecessary tokens.

    Result? Higher costs, slower responses, and more "context too long" errors. If you're querying GPT-5-nano or Claude with tabular data, JSON is quietly making you poor.
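You can sanity-check the "token explosion at scale" claim yourself. The sketch below builds 100 rows and compares the size of the JSON encoding against a TOON-style tabular layout. I'm using character counts as a crude stand-in for tokens (real savings depend on the tokenizer), and the TOON output here is my hand-rolled approximation, not the official encoder.

```typescript
// Rough scaling check for the "repeated keys" problem: with N rows,
// JSON repeats every field name N times; a TOON-style table pays for
// the field names once in the header. Character counts are a crude
// proxy for tokens.
type User = { id: number; name: string; role: string };

const rows: User[] = Array.from({ length: 100 }, (_, i) => ({
  id: i,
  name: `user${i}`,
  role: "member",
}));

const jsonChars = JSON.stringify(rows).length;

// TOON-style: one header line, then one comma-separated line per row.
const toonChars = [
  `users[${rows.length}]{id,name,role}:`,
  ...rows.map((r) => `  ${r.id},${r.name},${r.role}`),
].join("\n").length;

console.log({ jsonChars, toonChars, saved: 1 - toonChars / jsonChars });
```

Run it and watch the gap: the JSON blob carries `"id"`, `"name"`, and `"role"` a hundred times each, while the tabular layout mentions them once.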

