Building Mneme · Part 2

From Lessons to E-books - Persona Tuning and LoRA-Driven Quality

~9 min read · by Dave Wheeler (aka Weller Davis)

The first lesson Mneme generated was technically correct, but boring as hell. It had all the right keywords, perfect grammar, and zero personality. Reading it felt like drowning in a sea of "leverage synergies" and "utilize best practices." I realized I'd built a content factory that produced... content I wouldn't read.

The problem wasn't the model or the prompts; it was the format. Short-form lessons compressed ideas into summaries instead of letting them breathe. The fix wasn't "use a bigger model." It was a system: persona tuning, editorial contracts, and LoRA lineage that made learning stick across versions.

TL;DR - E-book Creator raised content quality by (1) giving ideas room to develop, (2) enforcing an editorial contract, and (3) training personas via LoRA lineage so improvements compound. The result: useful, consistent long-form output that's actually worth publishing, and proof that continual learning works in production.

The Bad Lesson (and Why It Happened)

Here's what Lesson Creator V1 produced when asked to teach "Effective API Design Principles":

Lesson Creator V1 Output (Bad)
Effective API Design Principles

APIs are essential for modern software development. To design effective APIs,
developers should follow best practices and leverage industry-standard patterns.

Key principles include:
• Consistency in naming conventions
• Proper error handling mechanisms
• Comprehensive documentation
• Scalability considerations

By implementing these principles, teams can build robust, maintainable APIs
that meet business requirements and technical standards.

Technically accurate? Sure. Useful? Barely. It reads like a corporate memo written by committee. No examples, no pitfalls, no personality. This is what happens when you optimize for brevity over depth: every paragraph becomes a generic summary.

Why Lesson Creator Struggled

I could have kept tweaking prompts forever, but the root cause was structural: a short-form lesson simply has no room for examples, pitfalls, or context, so every draft collapsed back into a summary. Instead, I asked: what if we gave ideas space to develop?

Why E-books Clicked: Long-Form as a Feature

Long-form content wasn't just "more words"; it changed the dynamics. E-books gave Mneme's personas room to build context, introduce scaffolding, include examples, and address edge cases. Structure became a feature, not an afterthought.

Here's what E-book Creator V1 produced for the same topic (excerpted):

E-book Creator V1 Output (Better)
Chapter 3: Effective API Design Principles

The best APIs disappear. When you're building a client against a well-designed
API, you don't think about the API-you think about the problem you're solving.
That invisibility comes from disciplined design choices, not luck.

Consistency: The Invisible Contract

Consider two endpoints from a hypothetical API:
• GET /users/{id} → returns user object
• POST /create-new-product → creates product

What's wrong here? The naming convention shifts mid-API. The first endpoint
follows RESTful resource patterns; the second reads like an RPC call. A developer
hitting this API will second-guess every endpoint: "Is it /orders/{id} or
/get-order-by-id? Do I POST to /products or /create-product?"

Pitfall: Inconsistent naming multiplies cognitive load. Every endpoint
becomes a puzzle instead of a predictable pattern.

Example (Good):
• GET /users/{id}
• POST /users
• GET /products/{id}
• POST /products

Now the pattern is clear. Resource nouns, HTTP verbs for actions. Developers
can guess the next endpoint without reading docs...

Still room for improvement, but dramatically better. The difference? Space to explain, concrete examples, identified pitfalls, and a conversational tone. This is teachable content.

Persona Tuning: Contracts Over Clever Prompts

"Be helpful" is not a contract. A good contract is explicit and enforceable. For E-books, I formalized a short editorial spec that became the north star:

Editorial Contract (E-book Creator)
Audience: Intermediate practitioner (has shipped code, understands basics, wants depth)
Tone: Clear, non-hype, pragmatic. Avoid marketing-speak and buzzwords.
Structure: intro → concept → example → pitfalls → checklist/summary
Required per chapter:
  • 1 hands-on example (code snippet or scenario)
  • 1 diagram cue (where a visual would clarify)
  • 3 "gotchas" or common mistakes
  • No filler or generic definitions - assume a technical reader

This spec is simple, but it gives the persona a tight lane. When deviations occur, I don't tweak the prompt; I adjust the contract or train the LoRA. This separation of concerns kept iteration focused: is the contract wrong, or is the persona not following it?
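To keep that separation honest, it helps to express the contract as data the pipeline can check against, rather than prose buried in a prompt. Below is a minimal sketch of what that could look like in Python; the field names, thresholds, and banned phrases are illustrative assumptions, not Mneme's actual schema.

Editorial Contract as Data (illustrative sketch)
# Hypothetical machine-checkable version of the editorial contract.
# Field names and thresholds are assumptions, not Mneme's actual schema.
from dataclasses import dataclass

@dataclass
class EditorialContract:
    audience: str = "intermediate practitioner"
    tone: str = "clear, non-hype, pragmatic"
    structure: tuple = ("intro", "concept", "example", "pitfalls", "summary")
    min_examples_per_chapter: int = 1
    min_diagram_cues_per_chapter: int = 1
    min_gotchas_per_chapter: int = 3
    banned_phrases: tuple = ("leverage synergies", "utilize best practices")

EBOOK_CONTRACT = EditorialContract()

A spec in this form is what the validation pass later in the workflow can enforce mechanically, which keeps the "contract vs. persona" question answerable.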

Building out the prompt template system and integrating persona-driven prompt generation for deeper inference tasks added an abstraction layer over continuous learning, one that reduced the risk of overfitting and preserved diverse perspectives across personas.

Before and After: The Editorial Contract in Action

Here's a side-by-side comparison of how the editorial contract transformed a generic paragraph into something useful:

Before (V1, no contract)
Error handling is important
in API design. APIs should
return appropriate status
codes and error messages to
help developers debug issues
effectively.
After (V2, with contract)
When your API encounters an
error, return a status code
that matches the failure type:

• 400: Client sent bad data
• 401: Authentication failed
• 404: Resource doesn't exist
• 500: Server-side failure

Pitfall: Returning 200 OK
with {"error": "failed"} in
the body breaks HTTP semantics
and confuses monitoring tools.

The contract forced specificity: examples instead of platitudes, pitfalls instead of assertions, concrete advice instead of "best practices."

LoRA Lineage: Making Learning Stick

Mneme's personas improve via small, targeted LoRA adapters with version lineage. Each version builds on the last, merging prior learning while remaining easy to roll back.

E-book Persona Evolution
V1 (baseline) → generic structure, surface-level examples
V2 (fix structure + tone) → better paragraphs, fewer buzzwords
V3 (add examples + pitfalls) → consistent use of examples, identified gotchas
V4 (diagram cues + checklists) → proactive suggestions for visuals, actionable summaries

Practically: I curated pairs of "bad vs. good" snippets from early Lesson Creator output. Each pair showed the same concept, but one followed the editorial contract and one didn't. I trained LoRA adapters to bias toward the good form. Over time, the persona started producing "good" structure by default.
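For the curious, here's roughly what that training step could look like with a Hugging Face transformers + peft stack. Treat it as a minimal sketch under stated assumptions: the base model name, file paths, pairs format, and hyperparameters are placeholders, not Mneme's actual configuration.

LoRA Training Step (illustrative sketch)
# Train a small LoRA adapter on curated "bad vs. good" pairs, using only the
# good form as the training target. Model name, paths, and hyperparameters are
# illustrative assumptions.
import json
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

BASE = "mistralai/Mistral-7B-v0.1"              # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(BASE),
    LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
               target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# pairs.jsonl: one {"concept": ..., "good": ...} object per line (hypothetical format).
pairs = [json.loads(line) for line in open("pairs.jsonl")]
texts = [f"### Concept\n{p['concept']}\n\n### Section\n{p['good']}" for p in pairs]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=1024, padding="max_length")
    out["labels"] = out["input_ids"].copy()     # causal LM: learn to produce the good form
    return out

ds = Dataset.from_list([{"text": t} for t in texts]).map(
    tokenize, batched=True, remove_columns=["text"])

Trainer(model=model, train_dataset=ds,
        args=TrainingArguments(output_dir="adapters/ebook-creator-v3",
                               num_train_epochs=3, per_device_train_batch_size=1,
                               learning_rate=2e-4)).train()
model.save_pretrained("adapters/ebook-creator-v3")   # versioned adapter in the lineage

Twenty pairs is a tiny dataset, which is exactly why each adapter targets one behavior at a time; broader changes come from accumulating versions, not from one big training run.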

The LoRA Training Moment

The first time I trained a LoRA adapter on curated feedback and saw the persona remember what good structure looked like - generating the "good form" by default on the very next run - that's when I knew continual learning wasn't just a buzzword. It was working.

I'd fed V2 twenty "bad vs. good" pairs focused on one behavior: "always include a concrete example after introducing a concept." The next e-book chapter I generated opened with an abstract idea, then immediately followed with a code snippet. No prompt engineering, no few-shot examples in the request; the persona had learned the pattern.

That moment validated the entire LoRA lineage approach. This wasn't static prompt tuning; it was behavioral evolution with version control.

Dreaming is Learning: Continuous Improvement in Mneme

Most AI systems are static: trained once, deployed, frozen. They cannot adapt to feedback or new information without expensive retraining. The human brain solves this through neural plasticity: sleep and dreaming consolidate memories and refine neural pathways.

As someone who values sleep for processing ideas (I do my best thinking after a night's rest), building a system that "dreams" felt natural. Mneme implements this biologically inspired cycle, enabling personas to learn continuously from daily feedback and user interactions.

How Dreaming Works in Mneme

During the day, personas generate artifacts, interact with users, and collect feedback. At night, the Pondering Engine replays activity, analyzes validation results, and generates training data. LoRA adapters (lightweight, parameter-efficient fine-tuning layers) are trained on this feedback locally, without touching the frozen base model. Each persona's adapter updates independently.

The Dream Cycle
Day (Active Generation):
  → E-books, tutorials, blogs created
  → Users mark outputs: "good" / "needs work"
  → Validation results logged (structure, examples, tone)

Night (Consolidation):
  → Pondering Engine: "What worked today? What failed?"
  → Generate training pairs from feedback
  → Train LoRA adapter (V2 → V3 merge + new learning)
  → Persona wakes with improved behavior

LoRA training delivers behavioral specialization at a fraction of full retraining cost, runs locally on modest hardware, and keeps learning private. Personas improve incrementally: each day they're more aligned with user preferences and quality standards. Over time, feedback loops ensure continuous, autonomous improvement without expensive cloud infrastructure or manual intervention.
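The consolidation pass itself doesn't need to be elaborate. Here's a hedged sketch of the shape of that nightly loop; the feedback record format, paths, and persona/version naming are hypothetical stand-ins for the Pondering Engine's internals.

Nightly Consolidation (illustrative sketch)
# Hypothetical nightly "dream" pass: turn the day's feedback into training pairs
# and stage them for the LoRA training step sketched earlier.
import datetime
import json
import pathlib

FEEDBACK_LOG = pathlib.Path("logs/feedback.jsonl")   # assumed: one JSON record per mark
ADAPTER_DIR = pathlib.Path("adapters")

def load_todays_feedback():
    today = datetime.date.today().isoformat()
    entries = [json.loads(line) for line in FEEDBACK_LOG.open()]
    return [e for e in entries if e.get("date") == today]

def generate_training_pairs(feedback):
    # Pair the "good" and "needs work" versions of the same section so the
    # adapter learns which form to prefer.
    good = {e["section_id"]: e for e in feedback if e["label"] == "good"}
    bad = {e["section_id"]: e for e in feedback if e["label"] == "needs work"}
    return [{"concept": good[k]["concept"], "good": good[k]["text"], "bad": bad[k]["text"]}
            for k in good.keys() & bad.keys()]

def nightly_consolidation(persona="ebook-creator", current_version=2):
    pairs = generate_training_pairs(load_todays_feedback())
    if not pairs:
        return None                      # nothing to learn tonight; stay on the current version
    out = ADAPTER_DIR / f"{persona}-v{current_version + 1}"
    out.mkdir(parents=True, exist_ok=True)
    with (out / "pairs.jsonl").open("w") as f:
        for p in pairs:
            f.write(json.dumps(p) + "\n")
    # The new adapter is trained from these pairs into its own versioned
    # directory, so the previous version stays available for rollback.
    return out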

E-book Creator: Outline → Draft → Validate → Revise → Publish

The workflow enforces the editorial contract at every phase:

  1. Outline pass: Persona generates a chapter/section plan against the editorial contract. Must include: learning objectives, key concepts, example placeholders, pitfall candidates.
  2. Draft pass: Long-form text with embedded calls for figures/examples (placeholders tagged for later). Persona follows structure from outline.
  3. Validation: Automated checks for contract compliance: Does each chapter have ≥1 example? Are there 3 gotchas? Is tone consistent? Any generic phrases flagged? (A sketch of these checks follows this list.)
  4. Revision: Targeted fixes if a checklist item is missing. Re-generate only the affected sections, not the entire chapter.
  5. Artifacts: Pandoc → EPUB/PDF; Audiobook via Chatterbox + voice cloning (covered in Part 3).
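As promised in the validation step, here's a hedged sketch of what those contract checks could look like. The markers and phrase list are illustrative heuristics assumed for this post, not Mneme's actual validators.

Validation Pass (illustrative sketch)
# Hypothetical automated checks for editorial-contract compliance.
import re

BANNED_PHRASES = ("leverage synergies", "utilize best practices", "robust and scalable")

def validate_chapter(text: str) -> list[str]:
    problems = []
    # At least one hands-on example (assumes sections label them "Example ...").
    if not re.search(r"^example", text, flags=re.MULTILINE | re.IGNORECASE):
        problems.append("no hands-on example found")
    # At least three gotchas/pitfalls.
    pitfalls = len(re.findall(r"^(pitfall|gotcha)", text, flags=re.MULTILINE | re.IGNORECASE))
    if pitfalls < 3:
        problems.append(f"only {pitfalls} pitfalls; contract requires 3")
    # At least one diagram cue (assumes a "[diagram: ...]" placeholder convention).
    if "[diagram:" not in text.lower():
        problems.append("no diagram cue")
    # Flag generic marketing-speak.
    problems += [f"generic phrase: '{p}'" for p in BANNED_PHRASES if p in text.lower()]
    return problems   # empty list means contract-compliant

Only chapters with a non-empty problem list go back for revision, and only the failing sections get regenerated.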

Context is Key: Dealing with Local Constraints

The Case for Personalized Local AI

Cloud-based LLMs scale far beyond anything you can run at home, but they sacrifice agency and privacy. Mneme represents a different path: smaller, personalized AI running on local hardware under user control. While cloud systems concentrate power in a single place, local systems enable specialization, continuous learning from personal feedback, and resilience. Solving these problems at a smaller, local scale could be the key to unlocking that potential.

This approach avoids the existential risks of god-like, centralized AI, sidestepping surveillance, single points of failure, and homogenized outputs. Mneme tests the boundaries of current technology on the journey toward AI that serves individuals, not corporations.

Breaking Down the Context Problem

Local models have finite context windows, a hard constraint that cloud systems largely bypass with massive context and memory. Mneme solves this through continuous decomposition loops: each chapter is broken into sections small enough to fit the window and generated one at a time.

Running summaries maintain continuity across sections; validation gates prevent hallucination and repetition. This disciplined workflow transforms a limitation into an advantage: tighter, higher-quality output through iterative refinement rather than single-pass generation.
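Concretely, the decomposition loop looks something like the sketch below. The generate function is a stand-in for whatever local model call you use (llama.cpp, Ollama, etc.), and the prompt wording is illustrative.

Decomposition Loop (illustrative sketch)
# Generate a long chapter section by section, carrying a running summary so each
# call fits inside a small context window. `generate` is a stand-in for a local
# model call; wire it to your own runtime.

def generate(prompt: str, max_tokens: int = 800) -> str:
    raise NotImplementedError("connect this to your local model")

def write_chapter(outline_sections: list[str]) -> str:
    running_summary = ""
    chapter = []
    for section in outline_sections:
        prompt = (
            f"Summary of the chapter so far:\n{running_summary or '(start of chapter)'}\n\n"
            f"Write the next section: {section}\n"
            "Follow the editorial contract: concrete example, pitfalls, no filler."
        )
        text = generate(prompt)
        chapter.append(text)
        # Compress the new section into the running summary so later sections stay
        # consistent without re-sending the whole chapter.
        running_summary = generate(
            "Update this summary with the new section, in under 150 words:\n"
            f"{running_summary}\n\nNew section:\n{text}", max_tokens=200)
    return "\n\n".join(chapter)

A validation gate (like the one sketched above) runs on each section before it's appended, which is where the hallucination and repetition checks hook in.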

Proof It's Working: Measurable Improvements

How do I know the system works? The metrics tell part of the story, but the real proof came from reader feedback.

Reader Feedback That Stuck

One reader sent me a message after reading an e-book on API design patterns: "This is the first AI-generated content I've read that didn't feel like AI-generated content. The examples were specific, and I actually finished it."

That feedback validated everything. The goal wasn't to hide that AI helped write it; it was to make it useful regardless of origin. When readers stop caring about the process and focus on the value, you've built something real.

Another comment from an early beta reader: "The checklist at the end of Chapter 4 is now printed and taped to my monitor." That's when I knew the editorial contract (demanding actionable summaries) was working.

Publishing Loop: Real Readers, Real Signals

To measure value, I published e-books and supporting blogs under Weller Davis (wellerdavis.com) and promoted via LinkedIn. Manual posting (due to API limits) was enough to get signal: some attention, a trickle of sales, and concrete feedback.

The key takeaway: a narrow audience prefers detailed, actionable chapters over broad, surface-level lessons. Depth beats breadth when you're teaching practitioners.

Common Pitfalls (and Fixes)

Pitfalls Encountered
  • Hallucinated citations and sources
  • Definition padding (obvious explanations)
  • Repetition across chapters
  • Tone drift in long documents
Fixes Applied
  • Remove citations unless verified; add "sources only if checked" rule to contract
  • Collapse obvious definitions; prioritize examples and pitfalls
  • Cosine similarity check across sections; revise only overlapping parts (see the sketch after this list)
  • Validation pass specifically for tone consistency; flag shifts
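The repetition fix is worth showing because it's so cheap. Here's a hedged sketch using sentence-transformers; the model name and similarity threshold are assumptions, not Mneme's exact settings.

Repetition Check (illustrative sketch)
# Flag section pairs whose embeddings are suspiciously similar, so only the
# overlapping sections get revised. Model name and threshold are illustrative.
from itertools import combinations
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def find_repetition(sections: dict[str, str], threshold: float = 0.85):
    names = list(sections)
    embeddings = model.encode([sections[n] for n in names], convert_to_tensor=True)
    flagged = []
    for i, j in combinations(range(len(names)), 2):
        score = util.cos_sim(embeddings[i], embeddings[j]).item()
        if score > threshold:
            flagged.append((names[i], names[j], round(score, 2)))
    return flagged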

Mini-Checklist You Can Steal

If you're building AI content systems, here's what worked for me:

  • Give ideas room to develop - long-form chapters, not compressed summaries
  • Write an explicit editorial contract (audience, tone, structure, required elements) and enforce it
  • Require a concrete example, a diagram cue, and named pitfalls in every chapter
  • Validate drafts against the contract automatically; regenerate only the sections that fail
  • Train personas on curated "bad vs. good" pairs with versioned, rollbackable LoRA adapters
  • Publish to real readers and feed their signals back into the nightly training cycle

© Dave Wheeler · wellerdavis.com · Built one error message at a time.