NotebookLM Research (JSON Direct Ingest)
This is the student workflow for Track 2 (Research Library). You will curate high-quality notebooks and import them directly into Writers Factory to power your Story Bible.
The Process at a Glance
1. CURATE        2. CLEANSE           3. EXPORT        4. IMPORT           5. CONNECT
─────────        ──────────           ─────────        ─────────           ──────────
Fill Notebook    Run Cleaning         Use Extension    Drag JSON to        App finds
with Sources     Protocol & Delete    to download      Writers Factory     ingredients
                 True Noise           .json file       (Research Tab)      automatically
Step 1: Curate Your Notebooks
Pick your notebook categories (e.g., “Magic System”, “World Politics”). For each notebook:
- Add Sources: PDFs, URLs, audio transcripts, etc.
- Curate for Quality: Focus on sources that genuinely fascinate you—the Codex is story-agnostic and can power multiple novels.
[!TIP] Use the Templates: We provide 7 standardized notebook templates (Arena, Speculation, Voice, etc.) in the Research Notebooks section.
[!TIP] Optional Context Note: You can create a “WHY I CHOSE THIS [NOTEBOOK]” note to document your intent. See the individual notebook templates for examples. After writing it, convert it to a source so the AI can read it.
Step 2: Cleanup Pass (Critical)
Copy and paste this prompt into your notebook to identify sources to delete:
## Cleaning Protocol
**Role:** Developmental Editor and Data Curator.
**Objective:** Clean this notebook for a Knowledge Graph. Your goal is to remove "True Noise" while protecting "Narrative Assets" (metaphors, jargon, and sensory details), even if they seem off-topic.
**Step 1: Establish Narrative Anchors**
Scan any Context Notes and sources to lock in these three parameters:
1. **Setting:** (Where are we? e.g., UK Estate, Mars Colony).
2. **Central Metaphor:** (The lens for this notebook, e.g., "Immigration is Legacy Software").
3. **Core Conflict:** (The engine of the plot, e.g., "Systemic Integrity vs. Evolution").
**Step 2: The "Keep vs. Kill" Evaluation**
Filter all *other* sources through those anchors:
• **KEEP (The "Hidden Gems"):**
◦ **Lexicon:** Technical manuals or reports that provide the specific *jargon* my characters speak (e.g., software terms for a hacker protagonist).
◦ **Texture:** Sources that offer visceral details, slurs, or rhetoric that adds realism, even if the jurisdiction is wrong (e.g., US laws used as a model for a fictional regime).
◦ **Psychology:** Academic papers that explain the *motive* of the killer or the *trauma* of the victim.
◦ **Strategic Analysis:** Any Context Notes or audit documents you created.
• **DELETE (The "True Noise"):**
◦ **Conflicting Reality:** Data that contradicts the story's internal logic (e.g., biological bird migration stats if the metaphor is purely digital).
◦ **Functional Clutter:** Instructions, prompts, or table generation scripts (e.g., `Table_Generation_Engine.md`).
◦ **Dry Data:** Raw statistics or tables (GDP, crop yields) that lack narrative context or emotional weight.
**Step 3: The Output**
Provide a numbered **DELETE LIST**.
• For each file, write a 1-sentence **"Disconnect Reason"** explaining why it fails to serve the Narrative Anchors.
Action: Use the DELETE LIST to remove sources in NotebookLM before exporting. The NotebookLM Tools extension (next step) makes bulk deletion easier.
Step 3: Tool Setup (One-Time)
To manage and export your data cleanly, you need the NotebookLM Tools browser extension.
[!IMPORTANT] Install NotebookLM Tools
- Install the NotebookLM Tools extension by _trungpv.
- This tool enables easy bulk deletion of sources (critical for the Step 2 cleanup).
- It also exports in the JSON (full metadata) format required by Writers Factory.
Step 4: Export to JSON
For each cleaned notebook you want to import:
- Click on the NotebookLM Tools icon in the browser bar.
- Find and select your notebook.
- Click the backup/restore button.
- Select JSON (full metadata).
- Click Export Backup and save the file.
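Before importing, it can help to confirm the exported file is valid JSON. Below is a minimal Python sketch; it makes no assumptions about the extension's actual schema and only reports the file's top-level shape (the filename in the usage comment is a placeholder):

```python
import json
from pathlib import Path

def validate_export(path: str) -> dict:
    """Parse an exported notebook backup and report its top-level shape."""
    raw = Path(path).read_text(encoding="utf-8")
    data = json.loads(raw)  # raises json.JSONDecodeError if the file is corrupt
    return {
        "top_level_keys": sorted(data) if isinstance(data, dict) else None,
        "size_bytes": len(raw.encode("utf-8")),
    }

# Hypothetical usage -- "magic_system.json" is a placeholder filename:
# report = validate_export("magic_system.json")
# print(report["top_level_keys"])
```

If this raises an error, re-export from the extension rather than editing the file by hand.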
Step 5: Drag & Drop to Codex
- Go to Writers Factory → Architect Mode → Research Library.
- Drag your JSON files into the dropzone.
- Watch the progress bar: “Indexing to Codex”.
What happens: The system embeds every paragraph of your research into a mathematical vector space. This makes it “searchable” by concept, not just keyword.
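Conceptually, concept search over embedded paragraphs works like the toy Python sketch below. It uses word hashing in place of the learned embedding model a real system would use, so the function names and scoring are illustrative only, not Writers Factory's implementation:

```python
import hashlib
import math
from collections import Counter

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy embedding: hash each word into a fixed-size unit vector.
    Real indexing uses a learned embedding model, not word hashing."""
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between unit vectors: higher = closer in concept."""
    return sum(x * y for x, y in zip(a, b))

# Index a couple of "research paragraphs", then query by concept:
codex = [
    "The submarine crew endured months of isolation beneath the ice.",
    "Regional crop yields fell by four percent last year.",
]
index = [(p, embed(p)) for p in codex]  # "Indexing to Codex"
query = embed("isolation at sea")
best = max(index, key=lambda item: similarity(query, item[1]))
```

The point of the vector representation is that the query never needs to share exact keywords with a paragraph; with a real embedding model, related concepts land near each other in the vector space.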
Step 6: The Connection Point
Once you have:
- Imported your JSON research (The Codex)
- Completed your Story Development (The Story Bible)
You are ready to connect them.
- Go to Architect Mode.
- Click “Connect Research”.
- The system uses your Story Bible (Protagonist, Theme, World Rules) as a lens.
- It searches your entire Codex for relevant ingredients.
- It creates a focused Research Graph of ~50 ingredients for this story.
Why this is better: You don’t need to manually tag connections. The AI finds that your “Submarine Manual” (Research) is relevant to your “Isolation Theme” (Story) automatically.
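The "lens" idea can be sketched as a retrieval loop: score every Codex paragraph against each Story Bible facet and keep the best. The Python below is purely illustrative (Writers Factory's real pipeline is not public); the word-overlap scorer stands in for whatever semantic similarity the app actually computes:

```python
def word_overlap(a: str, b: str) -> float:
    """Toy relevance score: Jaccard overlap of lowercase word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def connect_research(story_bible: dict[str, str],
                     codex: list[str],
                     score_fn=word_overlap,
                     budget: int = 50) -> list[str]:
    """Use each Story Bible facet as a lens; keep the best-scoring paragraphs."""
    best: dict[str, float] = {}
    for facet in story_bible.values():  # Protagonist, Theme, World Rules...
        for para in codex:
            best[para] = max(best.get(para, 0.0), score_fn(facet, para))
    ranked = sorted(best, key=best.get, reverse=True)
    return ranked[:budget]  # the focused Research Graph

bible = {"Theme": "isolation under pressure",
         "World Rules": "life aboard a submarine"}
codex = ["submarine ballast procedures",
         "medieval crop rotation",
         "coping with isolation"]
graph = connect_research(bible, codex, budget=2)
```

A paragraph only needs to resonate with one facet of the Story Bible to make the cut, which is why off-topic-looking "hidden gems" from Step 2 can still surface here.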
Summary: The Complete Workflow
CURATE    →  CLEANSE       →  EXPORT  →  IMPORT    →  CONNECT
  ↓            ↓                ↓          ↓            ↓
Build        Run Cleaning     JSON       Codex        Research
Notebook     Protocol &       Export     Indexing     Graph
             Delete Noise
Key insight: The Cleaning Protocol (Step 2) is what transforms a raw collection into a curated knowledge base. It removes data that contradicts your story while keeping “hidden gems” that add lexicon, texture, and psychology—even if they seem off-topic at first glance.