Historical Note (October 2025): This post was originally written for "TrueNames," our project's original name inspired by Ursula K. Le Guin's "The Rule of Names." We rebranded to Mamut Lab in October 2025 to avoid confusion with TrueName.ai, an existing AI company.
The new name honors the Kikinda mammoth ("Kika")—discovered in Kikinda, Serbia in 1996, preserved for 500,000 years. Like the mammoth's preservation, Mamut Lab preserves research context across extended timescales.
The Le Guin inspiration and philosophy of deep understanding remain central to Mamut Lab. This post is preserved as part of our heritage.
What Ursula K. Le Guin Understood About Power
In her 1964 short story "The Rule of Names" [1], Ursula K. Le Guin introduced readers to Earthsea's magic system, where knowing something's true name gives you power over it. Not the name everyone uses in conversation—that's a use-name, a convenient label. The true name: the word in the Old Speech that captures the essential nature of a thing.
When the wizard Blackbeard discovers that the mild-mannered Underhill's true name is Yevaud—the name of an ancient dragon—he speaks it aloud, believing he can control the dragon's form. The story turns on this moment: knowing a true name isn't about having information, it's about understanding essence.
Le Guin established a principle that would resonate through decades of fantasy literature: sharing your true name with another is an act of complete trust, because that knowledge grants power over you. Use-names are safe precisely because they're superficial. True names are dangerous because they're real.
This distinction—between labels and essence, between what something is called and what it actually is—has surprising relevance sixty years later in a domain Le Guin never wrote about: AI-generated code.
The Problem with Generated Code
AI coding assistants generate code at remarkable speed. Ask for OAuth authentication, receive 500 lines of Python. Request a React component, get hooks and effects and styled-components. The code looks right. Syntax highlighting works. Types check. Tests pass.
Six months later, you need to modify it. That's when you discover the truth: AI generated use-names, not true names.
The code does what it says on the surface—"authenticates users," "fetches data," "renders dashboard." But the why behind the implementation, the constraints that shaped the design, the assumptions embedded in the architecture—these are absent. The AI generated functioning code without understanding, producing something that works but cannot be truly known.
This is why AI-generated codebases become unmaintainable so quickly. Every function has a use-name (what it does) but no true name (why it exists, what it assumes, how it relates to the system's essence). Developers six months later face code that executes but cannot be understood—a thousand use-names with zero true names.
What It Means to Know a True Name
In Earthsea's magic system, knowing a true name means understanding something's nature deeply enough that you can influence it. Le Guin wasn't writing about memorizing arbitrary labels—she was exploring how understanding the essence of something grants mastery over it.
Consider what a true name actually represents:
- Essence, not description: A rock's true name isn't "grey hard object" but the word that captures what makes it rock rather than anything else
- Power through comprehension: Knowing the true name means you understand enough to change, control, or work with the thing
- Relationship, not ownership: Sharing a true name creates deep connection—it's trust, not transaction
- Responsibility: With knowledge of true names comes the obligation to use that power appropriately
Le Guin embedded profound epistemological insight in her fantasy magic: real knowledge is understanding, not mere information. You can memorize that "tolk" means rock in the Old Speech, but knowing rock's true name means comprehending its nature deeply enough to change it, shape it, or speak it into being.
Use-Names in Software: The Illusion of Understanding
Modern software is built almost entirely from use-names:
def authenticate_user(username: str, password: str) -> Token:
    """Authenticates user and returns JWT token."""
    user = db.query(User).filter(User.username == username).first()
    # checkpw reads the salt embedded in the stored hash; hashing with a
    # fresh salt and comparing would never match.
    if user and bcrypt.checkpw(password.encode(), user.password_hash):
        return generate_jwt(user.id)
    raise AuthenticationError("Invalid credentials")
The function name tells you what it does. The docstring repeats that information. The implementation shows how. But the true name—the essential understanding—is missing:
- Why bcrypt specifically? (Resistance to GPU-accelerated attacks, adaptive cost factor)
- Why query by username not email? (Username uniqueness constraint from legacy system migration)
- Why return JWT not session ID? (Stateless auth for microservices, distributed system requirement)
- What assumptions does this make? (DB availability, bcrypt cost=12, token expiry handled elsewhere)
- What breaks if these assumptions change? (Replay attacks if token validation fails, account enumeration if error messages differ)
The true name of this function is the constellation of constraints, assumptions, and architectural decisions that make this particular implementation correct for this specific system. AI generates the use-name effortlessly. The true name requires human understanding.
When Use-Names Fail
Six months after AI generates your authentication system, requirements change: you need to support OAuth providers.
The developer opens authenticate_user, sees the use-name ("authenticates user"), and assumes they can simply add OAuth as another authentication path.
But the function's true name reveals this is impossible: it's tightly coupled to password-based auth through bcrypt hashing and database password fields. OAuth requires different data models (provider IDs, access tokens, refresh flows), different security assumptions (trusting external validators), and different error handling (network failures, token expiry). The architecture's true name, "password-based authentication with stateless JWT for distributed systems," makes OAuth a fundamental refactor, not a feature addition.
The developer who knows only the use-name attempts the addition. The developer who knows the true name recognizes a redesign is needed.
Why AI Cannot Generate True Names
Current AI code generation operates through pattern matching over training data. It recognizes that "authenticate user" typically involves passwords, databases, and token generation. It produces syntactically correct code implementing this pattern. But pattern matching is fundamentally incapable of generating true names because true names require understanding context, constraints, and architectural decisions that patterns cannot capture.
Consider what AI cannot know when generating authentication code:
- Why your system uses bcrypt (security requirements you never mentioned)
- Which authentication flows are planned for the future (OAuth, SAML, SSO—not in the prompt)
- What scalability constraints exist (are you building for 100 or 100 million users?)
- Which parts of the codebase assume password-based auth (tight coupling hidden across files)
- What regulatory requirements apply (GDPR, HIPAA, SOC2 password storage rules)
- How authentication integrates with authorization, session management, audit logging
These constraints, assumptions, and relationships are the true name. AI generates code that matches statistical patterns of "authentication" in its training data, producing something that works for the average case but lacks the specific understanding your system requires.
Le Guin's insight applies precisely here: use-names are safe because they're superficial; true names are powerful because they're real. AI-generated code is safe to write because it's superficial—statistical patterns without deep understanding. Human-understood code is powerful because developers know its true names: the constraints, assumptions, and relationships that make it correct for this specific system.
Mamut Lab: Understanding, Not Just Generation
Our original name "TrueNames"—plural—captured this insight: maintaining code requires knowing multiple true names simultaneously:
- The true name of your architecture (microservices vs monolith, stateless vs stateful, eventual consistency vs strong)
- The true names of your constraints (latency requirements, security compliance, backward compatibility)
- The true names of your assumptions (this will scale to X, this API is stable, this database is available)
- The true names of your components (why they exist, what they assume, how they compose)
When AI suggests changes without knowing these true names, it produces modifications that compile but break system essence, like knowing someone's use-name and assuming you can command them.
Mamut Lab doesn't just generate code—it maintains understanding:
- Multi-model verification: Claude generates, Codex reviews—each checks whether changes preserve true names
- Explicit constraint tracking: Guardrails formalized as YAML, not buried in conversation history
- Architectural awareness: Before changing authentication, verify what assumes password-based auth
- Context preservation: Maintain the why behind implementations, not just the what
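As a sketch of what "guardrails formalized as YAML" could look like, here is a hypothetical constraint file. The file name, keys, and specific rules are illustrative assumptions, not Mamut Lab's actual format; the point is that true names become explicit, reviewable artifacts instead of folklore:

```yaml
# guardrails.yaml — hypothetical constraint file (illustrative only)
constraints:
  - id: auth-hash-cost
    rule: "bcrypt cost factor must stay >= 12"
    reason: "NIST 800-63B compliance; lowering it triggers a security audit flag"
  - id: auth-stateless
    rule: "authentication stays stateless (JWT, no server-side sessions)"
    reason: "microservices cannot share session storage"
assumptions:
  - "User.username is unique (enforced by a DB constraint)"
  - "Token expiry is enforced by the API gateway, not by this service"
```

A reviewing model can check a proposed diff against a file like this mechanically, whereas constraints buried in conversation history are invisible to it.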
When Blackbeard discovered Underhill's true name and spoke it, he thought he could control the dragon. Le Guin's story reveals the danger of knowing true names without wisdom: Blackbeard had the name but not the understanding. He knew what Underhill was but not who Yevaud had become.
Similarly, AI that generates code knowing only use-names produces implementations that work until the first time someone needs to change them. Then the absence of true understanding becomes catastrophic.
The Philosophical Depth: Names as Relationship, Not Data
Le Guin wasn't writing about memorization or label management. Her exploration of names in Earthsea concerned how knowledge creates relationship and power emerges from understanding.
In "The Rule of Names," sharing your true name with someone is described as an act of complete trust [1]—not because names are secret codes, but because giving someone your true name means letting them understand you deeply enough to influence you. This is relationship, not ownership. It's vulnerability, not transaction.
Software development requires the same kind of relationship with code. You cannot maintain what you don't understand. You cannot safely modify architecture whose true names you don't know. The proliferation of AI-generated codebases, where developers understand nothing they didn't write themselves, is producing a crisis of code without relationship: repositories full of functions with use-names, devoid of true understanding.
Le Guin's stories explored how wizards in Earthsea spent years learning true names at Roke, not memorizing words but developing the deep comprehension necessary to wield power responsibly. The training wasn't vocabulary—it was understanding how magic works, what it costs, where its limits lie.
Modern software development has attempted to skip this learning phase: just ask AI to generate code, iterate until tests pass, ship to production. This is the equivalent of learning a few true names from a book and assuming you're a wizard. The results are predictable: code that works until it doesn't, systems that run until they catastrophically fail, architectures that satisfy requirements until those requirements change.
What Using True Names Actually Means
Understanding this distinction changes how you build software:
1. Generate code with verification, not just pattern matching
AI generates implementations (use-names). Separate verification step checks whether the implementation preserves system constraints and architectural assumptions (true names). Code review by a second model asking "Does this break anything fundamental?" rather than "Does this syntax compile?"
2. Document constraints and assumptions explicitly
Comments that say "authenticates user" add no value—that's the use-name, already in the function name. Comments should capture true names: "Uses bcrypt cost=12 for NIST 800-63B compliance. Do not reduce—creates security audit flag." "Assumes UserDB.username is unique (enforced by DB constraint). If this changes, update error handling for collisions."
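To make the contrast concrete, here is a minimal sketch of true-name comments on ordinary code. The specific constraints (the gateway revocation interval, the legacy-migration uniqueness rule) are hypothetical examples in the spirit of the ones quoted above, not claims about any real system:

```python
# A use-name comment restates the signature; a true-name comment records
# constraints and assumptions the code cannot express on its own.

TOKEN_TTL_SECONDS = 900
# Do not raise above 900: gateway-side revocation checks run every 15
# minutes, so a longer TTL would leave revoked tokens usable.
# (Hypothetical constraint, for illustration.)

def normalize_username(raw: str) -> str:
    # Lowercasing is load-bearing: User.username is unique
    # case-insensitively, enforced by a DB constraint inherited from a
    # legacy-system migration. Removing it breaks the uniqueness
    # assumption, not just style.
    return raw.strip().lower()
```

A reader who deletes the "redundant" lowercasing now knows exactly which assumption they are breaking, and why.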
3. Test understanding, not just functionality
Tests that verify "authentication works" check use-names. Tests that verify "changing password cost doesn't break existing hashes" and "adding OAuth doesn't invalidate existing JWT assumptions" check true names—the deep properties that must be preserved across changes.
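A minimal sketch of such a true-name test, using stdlib PBKDF2 in place of bcrypt so the example is self-contained; the storage format and function names are assumptions for illustration:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int) -> bytes:
    # Store iterations and salt alongside the digest so old hashes stay
    # verifiable after the cost parameter changes.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return iterations.to_bytes(4, "big") + salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    # Read the cost from the stored record, not from the current config.
    iterations = int.from_bytes(stored[:4], "big")
    salt, digest = stored[4:20], stored[20:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

def test_cost_change_preserves_old_hashes():
    # True-name test: raising the cost for new hashes must not invalidate
    # hashes created under the old cost.
    old = hash_password("hunter2", iterations=100_000)
    new = hash_password("hunter2", iterations=200_000)
    assert verify_password("hunter2", old)
    assert verify_password("hunter2", new)
    assert not verify_password("wrong", old)

test_cost_change_preserves_old_hashes()
```

A test suite with only "login succeeds" checks would pass before and after someone hard-codes the iteration count into verification—and then fail in production for every pre-existing user.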
4. Recognize that understanding cannot be automated
Le Guin's wizards couldn't learn true names from books—they required years of study and mentorship. Similarly, developers cannot understand code merely by reading AI-generated implementations. Understanding emerges from grappling with tradeoffs, experiencing failures, and building mental models of how systems actually work.
AI can accelerate writing code. It cannot accelerate understanding code. The former generates use-names; the latter requires knowing true names.
Conclusion: Names Are Not Labels
Ursula K. Le Guin's "The Rule of Names" appeared in 1964, establishing a principle that transcends fantasy literature: knowing something's true name—its essential nature—grants power over it, while use-names provide only superficial interaction.
Sixty years later, AI generates millions of lines of code daily. All of it has use-names: function names, variable names, class names that describe surface functionality. Almost none of it has true names: the deep understanding of constraints, assumptions, and architectural relationships that make code maintainable.
The distinction matters because software isn't written once—it's maintained for years. Code generated without true understanding becomes technical debt the moment it's committed. Six months later, when requirements change, developers face implementations with use-names only, forced to reverse-engineer true names from behavior rather than understanding them from design.
Mamut Lab—originally named "TrueNames" (plural) because complex systems have multiple essential natures simultaneously—builds AI tooling that maintains understanding, not just generation. Multi-model verification checks whether changes preserve system essence. Explicit constraint tracking formalizes true names rather than leaving them implicit. Context management ensures the why behind implementations persists alongside the what.
Le Guin understood that names are not arbitrary labels but relationships of understanding. Knowing a true name means comprehending something deeply enough to influence it responsibly. In Earthsea, this manifests as magic. In software development, it manifests as the ability to maintain, modify, and evolve code without breaking what it fundamentally does.
AI that generates code knowing only use-names produces implementations that work until the first modification. AI that maintains true names—understanding constraints, assumptions, and relationships—produces code that can be understood, trusted, and evolved over years.
That difference, between use-names and true names, between information and understanding, between generation and comprehension, is why we exist.
References
[1] Le Guin, Ursula K. "The Rule of Names." Fantastic, April 1964. Later collected in The Wind's Twelve Quarters (1975).
Explore Further
See our technical documentation on how Mamut Lab maintains architectural understanding, constraint verification, and context preservation across code generation cycles.
Or contact us to discuss building AI systems that understand code, not just generate it.