

This page answers some of the questions Linkly AI users ask most often.

How does Linkly AI protect user data privacy?

Linkly AI is built on a local-first architecture. Your documents, full-text index, vector index, and the embedding model itself all run on your device — nothing is uploaded to any server by default.

What stays on your device

  • Document originals: stay in their original folders. Linkly AI only reads them — it does not copy or relocate them.
  • Full-text index (BM25): built locally with Tantivy.
  • Vector index: stored in a local database.
  • Embedding model: Qwen3-Embedding-0.6B (GGUF quantized), running locally via llama.cpp (see the sketch after this list). Apple Silicon Macs automatically use Metal GPU acceleration.
  • App logs: written to local files only — never auto-uploaded.
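
The list above notes that embeddings are produced locally through llama.cpp. As a rough illustration of what that looks like, here is a minimal sketch using the llama-cpp-python bindings with a GGUF embedding model; the model file name and parameters are assumptions for the example, not Linkly's actual code or configuration.

```python
from llama_cpp import Llama

# Load a local GGUF embedding model (the file name here is just an example).
embedder = Llama(
    model_path="Qwen3-Embedding-0.6B-Q8_0.gguf",
    embedding=True,     # embedding mode instead of text generation
    n_gpu_layers=-1,    # offload layers to Metal on Apple Silicon; harmless on CPU-only builds
    verbose=False,
)

# Embed one document chunk; the resulting vector never leaves the machine.
result = embedder.create_embedding("example document chunk")
vector = result["data"][0]["embedding"]
print(len(vector))      # vector dimensionality
```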

Where chat data goes

When the chatbot calls a large language model, where the request goes depends on the provider you choose:

  • Local model: Ollama, LM Studio, or any other OpenAI-compatible local service. Data stays entirely on your machine.
  • Linkly Official: forwarded to a third-party model provider via api.linkly.ai. Requests pass through Linkly’s servers.
  • Third-party direct: connects directly to OpenAI, Anthropic, etc. Requests do not pass through Linkly’s servers.

You can add, switch, or disable any provider in Settings → AI Models.
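
To make "OpenAI-compatible" concrete, the snippet below points a standard OpenAI client either at a local Ollama server or directly at OpenAI. It is only a sketch of how such endpoints are typically addressed; the URLs and model name are common defaults, not Linkly settings, and Linkly manages these providers for you in Settings → AI Models.

```python
from openai import OpenAI

# Local model: Ollama exposes an OpenAI-compatible API on localhost (default port 11434).
local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # key is ignored locally
reply = local.chat.completions.create(
    model="llama3.2",  # whichever model you have pulled locally
    messages=[{"role": "user", "content": "Hello"}],
)

# Third-party direct: the same client, pointed straight at the provider's servers.
direct = OpenAI(api_key="sk-...")  # defaults to https://api.openai.com/v1
```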

User Experience Improvement Program (telemetry)

To help us understand which features are used and how the app is running, Linkly AI sends an anonymous usage report by default:
What is reported:

  • Feature usage counters (aggregated by action)
  • App version, OS, architecture
  • A locally generated random device ID

What is NOT reported:

  • Document content, file names, paths
  • Chat content, search queries
  • API keys, custom URLs

You can turn it off any time in Settings → Data Privacy → User Experience Improvement Program. Once disabled, any in-memory events that have not yet been sent are discarded as well.
The “Data Privacy” panel in Settings shows a visual breakdown of where each feature’s data flows (Local / Official cloud / Third-party), so you can see at a glance what does and does not leave your device.
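
For illustration only, a usage report of the kind described above might look roughly like the following; every field name here is hypothetical, not Linkly's actual telemetry schema.

```python
# Hypothetical shape of an anonymous usage report (illustrative, not the real schema).
usage_report = {
    "device_id": "a3f9c1e2-...",   # locally generated random ID, not tied to any account
    "app_version": "1.2.3",        # example value
    "os": "macos",
    "arch": "arm64",
    "feature_counters": {          # aggregated counts per action, never content
        "search.performed": 42,
        "chat.message_sent": 7,
    },
    # Absent by design: document content, file names, paths, chat text,
    # search queries, API keys, custom URLs.
}
```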

Privacy commitments

  • No third-party analytics SDKs (Google Analytics, etc.).
  • We do not read your browser history or clipboard.
  • No mandatory account login — core features work offline.
  • App logs stay on your machine. They are sent to us only if you explicitly share them.

How long does Linkly AI take to finish indexing?

Indexing time depends on the number of files, file types, machine performance, and indexing mode. Linkly AI indexes in three stages:
1. Filename quick index (seconds)

As soon as files are discovered, their paths and names are written to the full-text index, so you can search by filename even while content indexing is still running.

2. Full-text extraction and BM25 index (minutes to hours)

Document contents (txt, md, html, docx, pdf, images) are parsed; outlines and metadata are written to Tantivy. Multiple workers can run in parallel.

3. Vector embedding (minutes to hours)

The local embedding model generates a vector for each document chunk and writes it to the vector index.
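
Put together, the three stages can be pictured roughly as below. This is a hypothetical sketch for plain-text files only, with stand-in index objects and an injected embedding function; it is not Linkly's source code.

```python
from pathlib import Path

def chunk_text(text: str, size: int = 800) -> list[str]:
    # Fixed-size chunking is only a stand-in for the app's real chunker.
    return [text[i:i + size] for i in range(0, len(text), size)]

def index_folder(root: Path, fulltext_index, vector_index, embed) -> None:
    files = [p for p in root.rglob("*.txt") if p.is_file()]

    # Stage 1: filename quick index. Paths and names only, searchable right away.
    for path in files:
        fulltext_index.add({"path": str(path), "name": path.name})

    # Stage 2: full-text extraction. Document bodies go into the BM25 index.
    for path in files:
        fulltext_index.add({"path": str(path), "body": path.read_text(errors="ignore")})

    # Stage 3: vector embedding. One vector per chunk goes into the vector index.
    for path in files:
        for chunk in chunk_text(path.read_text(errors="ignore")):
            vector_index.add(str(path), embed(chunk))
```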

What affects speed

  • File count: roughly linear with total time.
  • File format: plain text is fastest; PDFs require page parsing.
  • Machine performance: Apple Silicon Macs use Metal GPU acceleration for embeddings, which is significantly faster than CPU inference. On Windows / Linux, embedding currently runs on CPU.
  • Indexing mode: pick Performance / Balanced / Auto in Settings → Indexing. Performance mode uses higher concurrency and more CPU; Auto upgrades to Performance when the system is idle.

Rough expectations

These are order-of-magnitude estimates only — actual times vary considerably with hardware and file mix:
  • Thousands of plain-text files (txt / md) on an M-series Mac: a few minutes
  • Tens of thousands of mixed formats (some PDFs) on an M-series Mac: tens of minutes to a few hours
  • Many images requiring OCR: significantly longer
  • The same workload on pure CPU (Windows / Linux): slower than on a Mac

You can watch indexing progress live (indexed / pending) at the top of the launcher. Indexing runs in the background and does not block search — once filename indexing is done, search is immediately available.
If you are just trying Linkly AI out, start with a small knowledge base (hundreds to thousands of files), then add larger directories once you are familiar with it.

How do I get Linkly AI’s application runtime logs?

While running, the app automatically writes its full log to a local app.log file (capped at 2 MB per file, with rotation). Sensitive data is already redacted, and attaching this file when you report an issue dramatically speeds up debugging. There are two ways to grab the log.

Option 1: Open the data directory from the app

This is the simplest path:
1. Open the About page

In Linkly AI, go to Settings → About.

2. Open the data directory

Find the Data Directory row and click the Open button on the right. Your file manager will open at the folder that holds the app’s data.

3. Grab app.log from the logs subfolder

Inside that folder, open the logs/ subdirectory and send us app.log.

Option 2: Open the data directory manually

If the app has crashed or won’t launch, open the folder directly from disk:
  1. Open Finder.
  2. From the menu bar, choose Go → Go to Folder… (or press ⇧ + ⌘ + G).
  3. Paste this path and press Return:
    ~/Library/Application Support/ai.linkly.desktop/logs
    
  4. Find app.log in that folder and send it to us.
If that folder does not exist either, the app most likely crashed before it could write any logs. In that case, take a screenshot of the crash dialog or window and send it along with your OS version.