In 2026, everyone is building "Chat with your PDF" apps and AI Resume Screeners. These are the new to-do lists. Tutorial projects that prove you can call an API but not much else.
I was stuck in that loop too. Then I started building for an obsession I've had since I was a kid: stories.
How I Got Here
I was 13 when I wrote my first lines of QBasic. But my obsession wasn't code. It was worlds. Cartoons first, then anime and manhwa (Solo Leveling had me tracking power systems across 200+ chapters like a database), and eventually web novels. By the time I hit college at IIIT Sri City, I was reading Omniscient Reader's Viewpoint and Mother of Learning. Stories with thousands of chapters, hundreds of characters, and lore systems deep enough to crash a context window.
I didn't just want to read these stories. Since childhood, I'd wondered what it would be like to actually talk to these characters. And by 2020, I had entire worlds spinning in my head that I wanted to write down, but my writing couldn't keep up with my imagination.
So I built the tool I wished existed.
FableWeaver.ai
FableWeaver.ai is an AI-powered platform for writing interactive web novels. Not a wrapper around a chat API, but an actual system where characters remember their lore across hundreds of chapters and can talk to each other autonomously.
Building it forced me into two engineering problems that no tutorial covers.
Problem 1: LLMs Forget Everything
I call this the Goldfish Effect. Most student AI projects work fine for short documents. But feed an LLM a 200-chapter novel and it forgets the protagonist's hidden motive from Chapter 2 by the time you hit Chapter 50. The context window just isn't big enough to hold an entire world.
My first instinct was basic RAG: retrieve relevant chunks and stuff them into the prompt. That works for documentation search. It does not work for narrative, where foreshadowing from 30 chapters ago matters as much as what happened last paragraph.
So I built a layered summary system instead. Three tiers of context, each serving a different purpose:
- World lore: the rules of the universe, character backstories, magic systems. Static. Never changes unless the author edits it.
- Arc summaries: a compressed version of the current ~10-chapter plot arc. Updated every few chapters.
- Chapter recap: a detailed summary of the immediately preceding chapter. Regenerated every time.
When the AI writes a new chapter, it gets all three layers injected into the prompt. It knows the world, it knows the current plot arc, and it knows what just happened.
```js
// Fetch the three context layers for prompt injection
const { data: layers } = await supabase
  .from('context_layers')
  .select('type, content')
  .in('type', ['world_lore', 'arc_summary', 'chapter_recap'])
  .order('type');

const worldLore = layers?.find(l => l.type === 'world_lore')?.content ?? '';
const arcSummary = layers?.find(l => l.type === 'arc_summary')?.content ?? '';
const lastChapter = layers?.find(l => l.type === 'chapter_recap')?.content ?? '';

const prompt = `World: ${worldLore}
Current Arc: ${arcSummary}
Previous Chapter: ${lastChapter}
Continue the story. Write Chapter ${currentChapter}.`;
```
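The refresh cadence can be reduced to a small pure function. This is a sketch; the exact numbers (10-chapter arcs, arc summaries every 3 chapters) are illustrative assumptions, not FableWeaver's actual settings:

```javascript
// Decide which context layers to regenerate after a chapter is written.
// Cadence values are illustrative assumptions.
const ARC_LENGTH = 10;        // chapters per plot arc
const ARC_REFRESH_EVERY = 3;  // regenerate the arc summary every N chapters

function layersToRefresh(chapterNumber) {
  const refresh = ['chapter_recap']; // the recap is regenerated every chapter
  if (chapterNumber % ARC_REFRESH_EVERY === 0 || chapterNumber % ARC_LENGTH === 0) {
    refresh.push('arc_summary');
  }
  // world_lore is static: only an author edit ever changes it
  return refresh;
}

console.log(layersToRefresh(7)); // ['chapter_recap']
console.log(layersToRefresh(9)); // ['chapter_recap', 'arc_summary']
```

Keeping this decision in one place means the expensive LLM summarization calls only run when a layer is actually stale.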
Problem 2: Characters That Talk to Each Other
While most people build 1-on-1 chatbots, I wanted a full cast that could argue with each other. And with the reader.
Each character is its own AI agent with a system prompt that locks down their voice, their secrets, and their constraints. A brooding anti-hero gets "Never agree easily. Question motives. Use short sentences." A court scholar gets "Speak formally. Reference historical precedents. Never use contractions."
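A minimal sketch of how a character record might become a locked-down system prompt. The `character` shape and field names here are assumptions for illustration, not FableWeaver's actual schema:

```javascript
// Build a system prompt from a character record.
// The record shape (name/voice/secrets/constraints) is a hypothetical schema.
function buildCharacterSystemPrompt(character) {
  return [
    `You are ${character.name}, a character in an ongoing web novel.`,
    `Voice: ${character.voice}`,
    `Secrets you must never reveal directly: ${character.secrets.join('; ')}`,
    `Hard constraints: ${character.constraints.join(' ')}`,
    'Stay in character at all times. Never break the fourth wall.',
  ].join('\n');
}

const rival = {
  name: 'Kael',
  voice: 'Brooding anti-hero. Short sentences. Dry sarcasm.',
  secrets: ['He once served the empire he now fights'],
  constraints: ['Never agree easily.', 'Question motives.'],
};
console.log(buildCharacterSystemPrompt(rival));
```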
The tricky part was orchestration. I wired it up with Supabase Realtime so the agents run in a shared channel:
- A user drops a message into the group chat.
- The hero agent responds.
- The rival agent "hears" the response, runs it through a personality-weighted prompt, and decides whether to interject or stay quiet.
- A turn-taking manager prevents infinite agent loops. This was a real problem in early builds. Two agents would just argue forever.
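The turn-taking rule can be sketched as a pure function over the chat history. The cap of 4 consecutive agent turns and the message shape are illustrative assumptions:

```javascript
// Cap consecutive agent replies so two characters can't argue forever.
// MAX_AGENT_TURNS is an illustrative choice, not a tuned value.
const MAX_AGENT_TURNS = 4;

function canAgentSpeak(history, agentId) {
  // Count agent messages since the last human message.
  let agentTurns = 0;
  for (let i = history.length - 1; i >= 0; i--) {
    if (history[i].role === 'user') break;
    agentTurns++;
  }
  const last = history[history.length - 1];
  const spokeLast = Boolean(last && last.agentId === agentId);
  // No agent may take two turns in a row, and the cast goes quiet
  // after MAX_AGENT_TURNS replies until a human speaks again.
  return agentTurns < MAX_AGENT_TURNS && !spokeLast;
}

const history = [
  { role: 'user', text: 'What do we do now?' },
  { role: 'agent', agentId: 'hero', text: 'We push forward.' },
  { role: 'agent', agentId: 'rival', text: 'Reckless. As always.' },
];
console.log(canAgentSpeak(history, 'hero'));  // true
console.log(canAgentSpeak(history, 'rival')); // false (just spoke)
```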
The result is a conversation that feels alive. You're not chatting with a bot; you're in a room with characters who have their own agendas.
What Broke
Two things nearly killed the project.
Summary decay. I initially had the AI summarize the previous summary every few chapters. Classic shortcut. By chapter 20, it was like a game of telephone: the plot had drifted into something unrecognizable. A character's betrayal got softened into a "disagreement," and key plot points vanished entirely. I fixed this by anchoring every 5th summary back to the original world lore, so the summaries could never drift too far from ground truth.
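The anchoring fix amounts to swapping what grounds each summarization prompt. A sketch, where `buildSummaryPrompt` is a hypothetical helper and the actual LLM call is out of scope:

```javascript
// Every 5th summary is grounded in the original world lore instead of the
// previous summary, so drift can't compound indefinitely.
function buildSummaryPrompt(chapterNumber, previousSummary, worldLore, chapterText) {
  const isAnchor = chapterNumber % 5 === 0;
  const grounding = isAnchor
    ? `Ground truth (world lore):\n${worldLore}` // re-anchor to fight telephone-game drift
    : `Running summary so far:\n${previousSummary}`;
  return `${grounding}\n\nNew chapter:\n${chapterText}\n\nUpdate the running summary. Do not contradict the material above.`;
}
```

Without the anchor, each summary is a lossy compression of the last one; with it, errors can accumulate for at most four chapters before being checked against ground truth.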
Character bleed. After about 10 messages in the group chat, every character started sounding the same. Polite, helpful, agreeable. Turns out LLMs have a strong gravitational pull toward a "default helpful assistant" voice. I had to fight this with negative prompting: explicitly telling each agent what they would never say or do. That made the difference between a cast of identical chatbots and characters with actual friction.
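Negative prompting here is just appending explicit "never" rules to each agent's system prompt. A minimal sketch; the helper name and the example rules are illustrative:

```javascript
// Append explicit "never do" rules to an agent's system prompt to fight
// drift toward the default helpful-assistant voice.
function withNegativeConstraints(systemPrompt, neverList) {
  const rules = neverList.map(rule => `- Never ${rule}`).join('\n');
  return `${systemPrompt}\n\nHard rules, no exceptions:\n${rules}`;
}

const scholarPrompt = withNegativeConstraints(
  'You are the court scholar. Speak formally. Reference historical precedents.',
  ['use contractions', 'apologize unprompted', 'offer to help with anything else'],
);
console.log(scholarPrompt);
```

Stating what a character must *not* do turned out to matter more than restating what they should do.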
The Takeaway
My first shipped project was GoodWill, a messy MERN stack app connecting NGOs with donors. Three organizations actually used it. The tech was barely held together, but it taught me something no tutorial ever did: a shipped, imperfect product is worth more than a perfect one that never leaves localhost.
FableWeaver was the same lesson at a harder difficulty. It taught me context window management, multi-agent orchestration, and real-time state management. Not because I was following a curriculum, but because I needed to solve these problems to make my thing work.
In 2026, the bar has moved. Recruiters don't care that you can call an API. They want to see that you can manage state, handle latency, and keep an AI system coherent over time. You learn that by building something you actually care about, something where cutting corners means your own experience gets worse.
Find the gap in your hobbies. Build the bridge.
