Uploading

Uploading in Narrova is the moment your draft changes shape: from a single document into a living reference the assistant can search, quote, and think with.

You bring the pages. Narrova helps turn those pages into usable context that still sounds like your voice. It is less like throwing text into a black box, and more like setting a careful table for the conversations ahead.

What happens when you upload a draft

When you drop in a chapter, treatment, beat sheet, or messy zero draft, Narrova does three things:

  1. Stores the file in the Context you chose.
  2. Prepares the text so it can be searched quickly.
  3. Makes it retrievable during future replies (once indexing is complete).

Think of it like handing your manuscript to a very organized story partner:

  • It does not memorize every word in rigid order.
  • It breaks your text into labeled note cards.
  • It can quickly pull the few cards that matter for the moment you are working on.
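The three steps above can be sketched in miniature. This is an illustration only; names like Context and upload_draft are invented for the example and are not Narrova's real API:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    name: str                                  # e.g. "story", "storyform", "conversation"
    files: dict = field(default_factory=dict)  # step 1: stored files
    index: dict = field(default_factory=dict)  # step 2: search-ready passages

def upload_draft(context, filename, text):
    # 1. Store the file in the Context you chose.
    context.files[filename] = text
    # 2. Prepare the text so it can be searched quickly
    #    (here, a naive split into paragraph "note cards").
    passages = [p for p in text.split("\n\n") if p.strip()]
    context.index[filename] = passages
    # 3. Once indexed, those passages are retrievable for future replies.
    return len(passages)

ctx = Context("storyform")
n = upload_draft(ctx, "act_one.txt", "Opening scene.\n\nInciting incident.")
```

Each uploaded file becomes a small stack of labeled note cards rather than one undifferentiated blob.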

TIP

Uploading is not just storage. It is conversion into retrieval-ready narrative memory.

Why indexing can take a little longer here

Some systems stop at: upload file -> split text -> search it.

Narrova goes further. During indexing, it does more than generic retrieval prep:

  • It parses your material with a narrative lens informed by Dramatica theory.
  • It checks for structural signals tied to Throughlines, Storypoints, and Perspective.
  • It runs internal quality passes so retrieval favors meaningful narrative context, not just keyword overlap.

That extra work is intentional. It is part of why Narrova can respond with deeper, more story-aware guidance once indexing is complete.
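As a toy illustration of what narrative-aware tagging might look like, the sketch below attaches structural labels to a chunk alongside its text. The label names nod to Dramatica terms, but the cue lists and the tag_chunk helper are stand-ins invented for this example, not Narrova's internal passes:

```python
# Invented cue lists for illustration; real structural detection
# would be far more sophisticated than substring matching.
NARRATIVE_SIGNALS = {
    "throughline": ("main character", "influence character", "relationship"),
    "storypoint": ("goal", "consequence", "requirement"),
}

def tag_chunk(chunk):
    """Attach narrative labels to a chunk so retrieval can favor
    passages that match a story question, not just its keywords."""
    text = chunk.lower()
    labels = [name for name, cues in NARRATIVE_SIGNALS.items()
              if any(cue in text for cue in cues)]
    return {"text": chunk, "labels": labels}

tagged = tag_chunk("The Main Character's goal drives the second act.")
```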

NOTE

So yes, indexing may take a little longer than basic systems. The tradeoff is richer narrative alignment once you begin asking real story questions.

The three Contexts

Narrova gives you three places to upload, each with a different purpose, so your material stays clear instead of tangled.

Story Context

Use this for material that should apply across many Storyforms in the same Story.

  • World rules
  • Character dossiers
  • Tone references
  • Series bible notes

If it should stay true across versions of the story, this is its home.

Storyform Context

Use this for material tied to one specific Storyform.

  • Storyform-specific draft pages
  • Throughline experiments
  • Structural revision notes
  • Alternate signpost treatments

If it belongs to this exact structural version of the story, upload it here.

Conversation Context

Use this for short-lived, chat-specific reference.

  • Temporary comparisons
  • One-off excerpts
  • Quick analysis files you may remove later

This is the lightest-weight option and usually does not need long-term organization.

Visual map: where your upload goes

flowchart LR
  A["You Upload a Draft"] --> B{"Choose Context"}
  B --> C["Story Context<br/>Shared across Storyforms"]
  B --> D["Storyform Context<br/>Specific to one Storyform"]
  B --> E["Conversation Context<br/>Current chat only"]
  C --> F["Narrova indexes and retrieves relevant passages"]
  D --> F
  E --> F

Chunking and embeddings (for dummies)

Here is the plain-language version.

Chunking

Chunking means splitting your file into smaller pieces of text.

Why? Because asking a model to re-scan an entire book every time is slow and noisy. Smaller pieces are faster, cleaner, and more accurate.

So instead of one giant blob:

  • Chapter becomes sections
  • Sections become passages
  • Passages become searchable units
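A minimal chunker might look like this. It splits on character count with a small overlap so context survives the boundaries; real systems typically split on sentence or paragraph boundaries and tune sizes per model, so treat the numbers here as placeholders:

```python
def chunk_text(text, max_chars=200, overlap=40):
    """Split text into overlapping passages of at most max_chars characters."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Step back by `overlap` so adjacent chunks share some context.
        start = end - overlap
    return chunks

doc = "Once upon a time " * 30   # 510 characters of stand-in prose
pieces = chunk_text(doc)
```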

Embeddings

An embedding is a numeric fingerprint of meaning.

Narrova creates one for each chunk so it can find chunks whose meaning is close to your prompt, even when the wording changes.

If you ask:

“Where does my Main Character avoid responsibility?”

Narrova can still find chunks about dodging blame, deflection, or denial, even if the exact phrase “avoid responsibility” never appears on the page.
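That matching works by comparing vectors. Here is a toy sketch using cosine similarity: the three-dimensional numbers are invented for the example (real embedding models emit hundreds or thousands of dimensions), but they show how "dodges blame" lands closer to "avoids responsibility" than an unrelated phrase does:

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hand-made "embeddings" for illustration only.
vectors = {
    "avoids responsibility": [0.90, 0.10, 0.00],
    "dodges blame":          [0.85, 0.15, 0.05],
    "bakes sourdough bread": [0.00, 0.10, 0.95],
}

query = vectors["avoids responsibility"]
best = max(("dodges blame", "bakes sourdough bread"),
           key=lambda k: cosine(query, vectors[k]))
```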

Visual map: chunking + embeddings

flowchart TD
  A["Your Uploaded File"] --> B["Extract text"]
  B --> C["Chunk into small passages"]
  C --> D["Create embedding for each chunk<br/>(meaning fingerprint)"]
  D --> E["Store in searchable index"]
  F["Your question"] --> G["Find closest-meaning chunks"]
  E --> G
  G --> H["Narrova builds answer using best passages"]

NOTE

Narrova does not need every chunk for every reply. It pulls the most relevant ones first, then builds from that focused set.
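That "most relevant ones first" step is often called top-k retrieval. A minimal sketch, assuming each chunk has already been scored against your question (the scores and the top_k_chunks helper are hypothetical, not Narrova's internals):

```python
def top_k_chunks(scored_chunks, k=3):
    """Keep only the k highest-scoring chunks for building a reply.

    scored_chunks: list of (similarity_score, chunk_text) pairs, e.g.
    from comparing the question's embedding with each chunk's embedding.
    """
    ranked = sorted(scored_chunks, key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in ranked[:k]]

candidates = [
    (0.42, "Chapter 2: she deflects the accusation."),
    (0.91, "Chapter 5: he refuses to admit fault."),
    (0.10, "Appendix: map of the northern coast."),
    (0.77, "Chapter 7: denial at the dinner table."),
]
focused = top_k_chunks(candidates, k=2)
```

The reply is then built from that focused set, which is why clean, well-organized Contexts lead to cleaner answers.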

What this means for writers

If you want cleaner answers, upload with intention:

  1. Keep one major draft version per Context.
  2. Name files clearly so you can toggle sources with confidence.
  3. Store enduring canon in Story or Storyform Context, not only in chat.

When your Context is clean, Narrova feels like a close collaborator: focused, structurally aware, and aligned with the story you are actually writing now. Your draft stays the heart of the work; Narrova simply helps you hear it more clearly.

Trust and data boundaries

Writers ask this all the time, and they should.

Are uploads used to train models?

Short answer: no.

Your uploaded drafts are used to support your retrieval and responses, not to train Narrova on your manuscript. Your work matters because it is your work and your lived expression, not generic model fuel.

Can someone reverse engineer my draft from embeddings?

Not in any practical, reliable way.

Embeddings are high-dimensional numeric representations of meaning, not readable text. They are useful for similarity search, but they are not a clean blueprint that can be converted back into your original prose.

NOTE

In plain terms: embeddings help Narrova find relevant passages; they are not a recoverable copy of your document.

What do you keep so I can remember what I uploaded?

For human-facing reference, Narrova keeps only a small preview portion from the beginning of the document (plus filename/context metadata) so you can recognize what was uploaded.

That means you get a helpful reminder without exposing full-document text in list views.