Uploading
Uploading in Narrova is the moment your draft changes shape: from a single document into a living reference the assistant can search, quote, and think with.
You bring the pages. Narrova helps turn those pages into usable context that still sounds like your voice. It is less like throwing text into a black box, and more like setting a careful table for the conversations ahead.
What happens when you upload a draft
When you drop in a chapter, treatment, beat sheet, or messy zero draft, Narrova does three things:
- Stores the file in the Context you chose.
- Prepares the text so it can be searched quickly.
- Makes it retrievable during future replies (once indexing is complete).
Think of it like handing your manuscript to a very organized story partner:
- It does not memorize every word in rigid order.
- It breaks your text into labeled note cards.
- It can quickly pull the few cards that matter for the moment you are working on.
TIP
Uploading is not just storage. It is conversion into retrieval-ready narrative memory.
How upload limits work
For Story Context and Storyform Context, the more important practical limit is the overall document footprint, not a simple file count.
- Dramatica: 2 GB document storage
- Dramatica Plus: 10 GB
- Dramatica Pro: 20 GB
That means the total volume of material matters more than the raw number of files. Ten small files are not automatically worse than one large file; what counts is how much text you store and whether those files belong together in the same working Context.
NOTE
The number of files does not matter by itself. What matters more is how much material you keep in Context and whether that Context stays focused on the files you actually want Narrova to reason from.
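The tiers above amount to a simple quota check on total footprint. The sketch below is illustrative only, not Narrova's actual implementation; the function name and byte math are assumptions based on the limits listed above.

```python
# Illustrative sketch: does a new upload fit the plan's document-storage
# quota? Tier limits mirror the ones listed above; `upload_fits` is a
# hypothetical helper, not Narrova's real API.

GB = 1024 ** 3

STORAGE_QUOTAS = {
    "Dramatica": 2 * GB,
    "Dramatica Plus": 10 * GB,
    "Dramatica Pro": 20 * GB,
}

def upload_fits(plan: str, used_bytes: int, new_file_bytes: int) -> bool:
    """True if the new file fits in the plan's total footprint.

    The check is about total volume, not file count: many small files
    and one large draft are treated the same way.
    """
    return used_bytes + new_file_bytes <= STORAGE_QUOTAS[plan]

# A 1.5 GB draft into a Context already holding 0.8 GB:
print(upload_fits("Dramatica", int(0.8 * GB), int(1.5 * GB)))       # False
print(upload_fits("Dramatica Plus", int(0.8 * GB), int(1.5 * GB)))  # True
```

The same draft that exceeds the base tier fits comfortably in a higher one, which is why consolidating a Context matters more than counting files.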
Why indexing can take a little longer here
Some systems stop at: upload file -> split text -> search it.
Narrova goes further. During indexing, it does more than generic retrieval prep:
- It parses your material with a narrative lens informed by Dramatica theory.
- It checks for structural signals tied to Throughlines, Storypoints, and Perspective.
- It runs internal quality passes so retrieval favors meaningful narrative context, not just keyword overlap.
That extra work is intentional. It is part of why Narrova can respond with deeper, more story-aware guidance once indexing is complete.
NOTE
So yes, indexing may take a little longer than basic systems. The tradeoff is richer narrative alignment once you begin asking real story questions.
Image-based PDFs, comic books, and graphic novels
Comics and graphic novels are handled as a special upload case. They should be uploaded through a Story Context Manager or Storyform Context Manager so Narrova can run the full comic pipeline and store a reusable script artifact.
Use Story or Storyform managers only
For comics and graphic novels:
- Upload from the Story or Storyform document manager.
- Set Upload as: Comic when you know the file is sequential art.
- Avoid conversation-only uploads for this workflow when you want durable retrieval and story analysis.
This ensures the file enters the long-lived context pipeline instead of a short-lived chat attachment flow.
What Narrova does for comic uploads
Comic uploads run as an asynchronous pipeline:
- Classify the file, or respect your manual mode (Comic, Document, Auto detect).
- Render PDF pages to images.
- Analyze page-by-page for panels, speakers, dialogue, captions, and SFX.
- Compile a one-time Comic Script Outline artifact.
- Save:
- structured JSON (local artifact for downstream analysis)
- Markdown script outline (retrieval-primary artifact)
- Upload Markdown to OpenAI Files and attach it to the vector store.
The result is that Narrova retrieves from the generated script artifact instead of repeatedly re-interpreting the original PDF.
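The staged flow above can be sketched as a small asynchronous pipeline. This is a minimal illustration of the stages, not Narrova's actual code; every function name, data shape, and placeholder value here is hypothetical.

```python
# Minimal sketch of a staged comic-upload pipeline. All names are
# hypothetical; the real pipeline also performs page rendering, panel
# analysis, and vector-store attachment.
import asyncio

async def process_comic(pdf_path: str, mode: str = "Auto detect") -> dict:
    stages = []

    # 1. Classify, or respect the manual mode chosen at upload time.
    doc_type = mode if mode in ("Comic", "Document") else "Comic"
    stages.append("classified")

    # 2-3. Render pages and analyze panels/dialogue (placeholders here).
    pages = [{"page": i, "panels": [], "dialogue": []} for i in range(1, 4)]
    stages.append("analyzed")

    # 4. Compile the one-time Comic Script Outline artifact.
    outline_md = "\n".join(f"## Page {p['page']}" for p in pages)
    stages.append("compiled")

    # 5. Save both artifacts: JSON for downstream analysis,
    #    Markdown as the retrieval-primary artifact.
    return {
        "type": doc_type,
        "json_artifact": pages,
        "markdown_artifact": outline_md,
        "stages": stages,
    }

result = asyncio.run(process_comic("issue-01.pdf"))
print(result["stages"])  # ['classified', 'analyzed', 'compiled']
```

The key design point the sketch captures: the Markdown artifact is built once and reused, so later questions retrieve from it instead of re-reading the PDF.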
Comic status stages you may see
During processing, status may move through:
- Classifying document...
- Preparing comic pages...
- Analyzing panels and dialogue...
- Compiling script artifact...
- Script ready, indexing...
After completion, the processed filename typically ends in -comic-script.md.
Partial output behavior (important)
Comic processing can complete with Partial output: yes.
That means:
- At least one page was successfully analyzed.
- Unreadable or failed pages are inserted as placeholders with confidence flags.
- The artifact is still attached so work can continue immediately.
Common reasons include:
- stylized lettering or low contrast bubbles
- blur, compression, skew, or scan noise
- model output formatting issues on some pages
If no pages can be analyzed, Narrova falls back to the original PDF path.
Confidence and speaker attribution
Comic scripts use conservative attribution:
- explicit names only when confidence is high
- fallback speaker labels (Speaker_#) when ambiguous
- per-line and per-panel confidence flags for uncertainty
Treat early outputs as analysis-ready scaffolding, then refine with your own notes where needed.
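The conservative rule can be illustrated as a confidence threshold: a character name is used only when attribution is strong, otherwise a stable generic Speaker_# label is assigned alongside the confidence flag. The 0.8 threshold and data shapes below are assumptions for illustration, not Narrova's actual values.

```python
# Illustrative sketch of conservative speaker attribution. The threshold
# and dictionary shape are assumptions, not Narrova's real parameters.

def attribute_speaker(candidate, confidence: float,
                      fallback_index: int, threshold: float = 0.8) -> dict:
    if candidate and confidence >= threshold:
        return {"speaker": candidate, "confidence": confidence}
    # Ambiguous: keep a stable generic label plus the uncertainty flag.
    return {"speaker": f"Speaker_{fallback_index}", "confidence": confidence}

print(attribute_speaker("Walburga", 0.95, 1))  # explicit name kept
print(attribute_speaker("Walburga", 0.40, 2))  # falls back to Speaker_2
```

Because the generic label is deterministic, you can later search-and-replace Speaker_2 with the correct name once you confirm who is talking.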
Best practices for cleaner comic extraction
- Use high-resolution PDF exports (300 DPI or higher when possible).
- Keep full uncropped pages (do not clip balloons or captions).
- Avoid aggressive compression and heavy JPEG artifacts.
- Upload one canonical comic version per Story or Storyform Context.
- If lettering is highly stylized, add a companion notes file for key beats and names.
The three Contexts
Narrova gives you three places to upload, each with a different purpose, so your material stays clear instead of tangled.
Story Context
Use this for material that should apply across many Storyforms in the same Story.
- World rules
- Character dossiers
- Tone references
- Series bible notes
If it should stay true across versions of the story, this is its home.
Worldbuilding files belong here when they define the larger creative universe rather than the current Dramatica structure. A good Story Context upload might include World Bible.md, Character Dossiers.md, Series Timeline.md, Magic System Rules.md, Historical Research.md, or Style Guide.md.
Keep character files role-neutral when they live in Story Context. Describe the person, not the structural job they perform in a particular Storyform. For example, “Walburga hides her grief by managing everyone else’s obligations” is durable character context. “Walburga is the Main Character” belongs in Storyform Context unless that role is true for every structure you will ever build from this Story.
Storyform Context
Use this for material tied to one specific Storyform.
- Storyform-specific draft pages
- Throughline experiments
- Structural revision notes
- Alternate signpost treatments
If it belongs to this exact structural version of the story, upload it here.
Storyform Context is the right place for files that name current roles and structure: selected Main Character, Influence Character, Protagonist, Antagonist, Throughline assignments, Storypoints, Signposts, Storybeats, encoded scenes, and draft pages for this Storyform.
IMPORTANT
Do not use Story Context as the only place for current Storyform decisions. Narrova treats uploaded worldbuilding as reference, while the active Storyform and your latest instruction define the structure it should follow now.
Conversation Context
Use this for short-lived, chat-specific reference.
- Temporary comparisons
- One-off excerpts
- Quick analysis files you may remove later
This is the lightest-weight option and usually does not need long-term organization.
Document intent labels
When the upload flow shows Document intent, choose the label that describes how Narrova should treat the file.
| Intent | Use for | Best Context |
|---|---|---|
| World bible | Setting rules, metaphysics, factions, geography, lore, shared canon | Story Context |
| Character profile | Role-neutral character biographies, histories, relationships, pressure points | Story Context |
| Timeline snapshot | Current chronology, continuity status, version checkpoint | Story Context or Storyform Context |
| Prior story/events | Earlier events that should be remembered as background | Story Context |
| Style guide | Voice, format, expression, translation, prose or screenplay preferences | Story Context or Storyform Context |
| Other | References that do not fit a standard category | Any Context |
The label is a guide, not a command. Narrova still follows the current Storyform, active Perspective selections, and your latest instruction when they conflict with older uploaded material.
Writing use: If you want Narrova to remember what happened before without repeating it, label the file as prior story/events and phrase the upload as background history.
A practical codex structure
For large worlds, split one giant codex into smaller files by job. Smaller, well-named files make retrieval cleaner and make it easier to turn specific sources on or off.
Use a structure like this:
- World Bible - Current Canon.md
- Character Profiles - Role Neutral.md
- Timeline Snapshot - Before Story Two.md
- Prior Events - Story One.md
- Style Guide - Voice and Format.md
- Current Storyform - Roles and Throughlines.md
- Current Storyform - Signposts and Storybeats.md
Location: upload the first five files to Story Context. Upload the current Storyform files to Storyform Context.
Action: open the Story or Storyform page, use the Documents manager, choose the closest Document intent, and wait for indexing to finish.
Outcome: Narrova can retrieve the right layer of information without treating old role assignments, prior events, or stylistic preferences as if they were all the same kind of authority.
Validation: ask Narrova a targeted question such as, “Using the current Storyform, summarize what Walburga is doing as Main Character and treat prior events as background only.” The answer should honor the current role while using older material only as context.
TIP
If Narrova seems to pull from the wrong era or draft, check which Context the file lives in and whether an older file should be hidden, renamed, replaced, or moved.
Visual map: where your upload goes
flowchart LR
A["You Upload a Draft"] --> B{"Choose Context"}
B --> C["Story Context<br/>Shared across Storyforms"]
B --> D["Storyform Context<br/>Specific to one Storyform"]
B --> E["Conversation Context<br/>Current chat only"]
C --> F["Narrova indexes and retrieves relevant passages"]
D --> F
E --> F
Chunking and embeddings (for dummies)
Here is the plain-language version.
Chunking
Chunking means splitting your file into smaller pieces of text.
Why? Because asking a model to re-scan an entire book every time is slow and noisy. Smaller pieces are faster, cleaner, and more accurate.
So instead of one giant blob:
- Chapter becomes sections
- Sections become passages
- Passages become searchable units
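A minimal sketch of that splitting step: a paragraph-based chunker with a size cap. The 800-character cap and splitting rules are illustrative only; production pipelines typically use token counts, overlapping windows, and smarter boundaries.

```python
# Minimal paragraph-based chunker. The 800-character cap is illustrative;
# real chunkers usually count tokens and overlap adjacent chunks.

def chunk_text(text: str, max_chars: int = 800) -> list:
    chunks, current = [], ""
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)   # current chunk is full; start a new one
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

chapter = ("A long opening scene. " * 30) + "\n\n" + ("A second scene. " * 30)
print(len(chunk_text(chapter)))  # 2 -- each scene stays a separate chunk
```

Splitting at paragraph boundaries keeps each chunk semantically whole, which is exactly what makes the retrieval step in the next section work well.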
Embeddings
An embedding is a numeric fingerprint of meaning.
Narrova creates one for each chunk so it can find chunks that mean similar things to your prompt, even if the wording changes.
If you ask:
“Where does my Main Character avoid responsibility?”
Narrova can still find chunks about dodging blame, deflection, or denial, even if the exact phrase “avoid responsibility” never appears on the page.
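Under the hood this is nearest-neighbor search over those numeric fingerprints. A toy sketch with hand-made 3-dimensional "embeddings" (real ones come from a model and have hundreds or thousands of dimensions; the vectors and chunk texts below are invented):

```python
# Toy similarity search over hand-made vectors. Real embeddings are
# model-generated and high-dimensional; these tiny ones just show the idea.
import math

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend these are embeddings of three chunks from your draft.
index = {
    "she shifts blame onto her sister": [0.9, 0.1, 0.0],
    "the storm floods the lower city":  [0.0, 0.2, 0.9],
    "the council votes on trade law":   [0.6, 0.4, 0.2],
}

# Embedding of: "Where does my Main Character avoid responsibility?"
query = [0.9, 0.1, 0.1]

best = max(index, key=lambda chunk: cosine(query, index[chunk]))
print(best)  # "she shifts blame onto her sister"
```

Note that the winning chunk never contains the words "avoid responsibility"; it wins because its meaning-fingerprint sits closest to the query's, which is the whole point of embedding search.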
Visual map: chunking + embeddings
flowchart TD
A["Your Uploaded File"] --> B["Extract text"]
B --> C["Chunk into small passages"]
C --> D["Create embedding for each chunk<br/>(meaning fingerprint)"]
D --> E["Store in searchable index"]
F["Your question"] --> G["Find closest-meaning chunks"]
E --> G
G --> H["Narrova builds answer using best passages"]
NOTE
Narrova does not need every chunk for every reply. It pulls the most relevant ones first, then builds from that focused set.
What this means for writers
If you want cleaner answers, upload with intention:
- Keep one major draft version per Context.
- Name files clearly so you can toggle sources with confidence.
- Store enduring canon in Story or Storyform Context, not only in chat.
When your Context is clean, Narrova feels like a close collaborator: focused, structurally aware, and aligned with the story you are actually writing now. Your draft stays the heart of the work; Narrova simply helps you hear it more clearly.
Trust and data boundaries
Writers ask this all the time, and they should.
Are uploads used to train models?
Short answer: no.
Your uploaded drafts are used to support your retrieval and responses, not to train Narrova on your manuscript. Your work matters because it is your work and your lived expression, not generic model fuel.
Can someone reverse engineer my draft from embeddings?
Not in any practical, reliable way.
Embeddings are high-dimensional numeric representations of meaning, not readable text. They are useful for similarity search, but they are not a clean blueprint that can be converted back into your original prose.
NOTE
In plain terms: embeddings help Narrova find relevant passages; they are not a recoverable copy of your document.
What do you keep so I can remember what I uploaded?
For human-facing reference, Narrova keeps only a small preview portion from the beginning of the document (plus filename/context metadata) so you can recognize what was uploaded.
That means you get a helpful reminder without exposing full-document text in list views.