A local AI app engine — and your general-purpose personal assistant.
Two faces of one runtime: out of the box, the assistant chats and acts;
install skills and the same runtime hosts them as full apps.
Website · Apps · Skills · Docs · Discord
```sh
curl -fsSL https://linggen.dev/install.sh | bash
ling
```

Opens the web UI at http://localhost:9898. macOS and Linux.
Architecturally, Linggen is the root system for AI agents. The core runtime manages agent processes, communication, and execution; everything else (skills, agents, missions) grows on top as files. An "AI app" in Linggen is a skill, an agent, or a mission — markdown + scripts, not code plugins. The runtime gives every app a process, syscalls (built-in tools), a filesystem (memory), permissions, and a network surface (P2P rooms).
Apps drop into a folder and run.
| OS | Linggen |
|---|---|
| Process | Agentic loop — one running agent |
| Interrupt | User message queue — checked each iteration |
| Thread / Fork | Subagent delegation — concurrent child execution |
| Syscall | Tool call — built-in tools are the kernel API |
| Dynamic library | Skill — loaded at runtime, no code changes |
| Cron job | Mission — scheduled agent / app / script |
| Driver | Model provider — Ollama, Claude, GPT, Gemini, Bedrock |
| Filesystem | Memory store — core markdown + LanceDB RAG via ling-mem |
| Process privilege | Permission modes (chat / read / edit / admin) + path scoping |
| Network share | Rooms — share models with peers over P2P WebRTC |
Full table and design principles in doc/product-spec.md;
vision and roadmap in doc/insight.md.
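The Process / Interrupt / Syscall rows above can be sketched as a loop (illustrative only — names and shapes here are assumptions, not Linggen's actual code):

```python
from collections import deque

def agentic_loop(task, messages, llm, tools, max_iters=10):
    """Sketch of the OS analogy: each iteration first drains the user
    message queue (the 'interrupt'), then the model either finishes or
    makes a tool call (the 'syscall'). All names are illustrative."""
    context = [task]
    for _ in range(max_iters):
        while messages:                     # interrupt: fold queued user messages in
            context.append(messages.popleft())
        action = llm(context)               # model decides the next step
        if action["type"] == "done":
            return action["result"]
        # syscall: dispatch to a built-in tool and feed the result back
        context.append(tools[action["tool"]](**action["args"]))
```

Subagent delegation would fork this loop into a child running concurrently, mirroring the Thread / Fork row.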
- Sys Doctor — AI health analyst for your Mac. Disk, security, performance, dormant apps, buyer's guide. Bundled `.app` available.
- Memory — `ling-mem` skill. LanceDB semantic store with typed facts, embeddings, first-class forgetting. Same store reachable from Linggen, Claude Code, or any tool that can shell out.
- Model Sharing — Rooms. Open one and let friends use your models over P2P WebRTC. No keys for the consumer, no cloud middleman; the owner controls budget and tools.
- Architecture Guardian — Agent + mission. Reviews code and updates dependency graphs on a schedule, flags design violations.
- DevOps — Mission. Monitor CI/CD, auto-fix flaky tests, manage deployments — all defined in markdown.
Skills, agents, missions — all files. New apps are a folder away. Browse community skills at linggen.dev/skills.
Drop a markdown file in `~/.linggen/` — available immediately, no restart:
```markdown
---
# ~/.linggen/agents/reviewer.md
name: reviewer
description: Code review specialist.
tools: ["Read", "Glob", "Grep"]
model: claude-sonnet-4-20250514
---
You review code for bugs, style issues, and security vulnerabilities.
```

Skills (`~/.linggen/skills/<name>/SKILL.md`) and missions (cron-scheduled agent / app / script) follow the same drop-in pattern. Skills use the open Agent Skills standard and work in Claude Code and Codex too.
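Following the same drop-in pattern, a new skill is just a folder containing a `SKILL.md`. The frontmatter fields below mirror the agent example; the skill itself is a made-up illustration, not a bundled one:

```shell
mkdir -p ~/.linggen/skills/changelog
cat > ~/.linggen/skills/changelog/SKILL.md <<'EOF'
---
name: changelog
description: Summarize recent git history into a changelog entry.
---
Read `git log` output and produce a concise changelog section.
EOF
ls ~/.linggen/skills/changelog   # SKILL.md
```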
- Local-first. Runtime, data, and inference (when you pick local models) live on your machine. Cloud is opt-in via your own API keys.
- Model-agnostic. Any model — Ollama, Claude, GPT, Gemini, DeepSeek, Groq, OpenRouter. Routing policies (`local-first`, `cloud-first`, custom) decide which model handles each request.
- App platform, not a single product. Coding is one app among many.
- P2P, not centralized. Remote access and model sharing flow over WebRTC data channels. linggen.dev acts as a signaling relay; it does not see chat content.
- Skills as the contract. Apps follow the open Agent Skills standard.
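The routing-policy idea can be sketched as ordering providers by locality. This is a toy sketch; the function name, provider fields, and data layout are assumptions, not Linggen's API:

```python
def route(policy, providers):
    """Pick a provider under a routing policy (hypothetical helper).
    'local' and 'available' are assumed per-provider fields."""
    if policy == "local-first":
        ordered = sorted(providers, key=lambda p: not p["local"])   # local models first
    elif policy == "cloud-first":
        ordered = sorted(providers, key=lambda p: p["local"])       # cloud models first
    else:  # custom: caller supplies a pre-ordered list
        ordered = providers
    return next(p for p in ordered if p.get("available", True))

providers = [
    {"name": "ollama", "local": True},
    {"name": "claude", "local": False},
]
print(route("local-first", providers)["name"])   # → ollama
print(route("cloud-first", providers)["name"])   # → claude
```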
```sh
ling login   # link to linggen.dev
```

Then open linggen.dev/app from any browser. P2P-encrypted tunnel back to your machine; no VPN, no port forwarding.
- Design docs — architecture, specs, internals
- Product spec — system definition + design principles
- Insight — vision, roadmap, problems Linggen solves
- Skill spec — how to write skills
- Full docs — guides and reference
Linggen sends a small amount of anonymous usage data to https://linggen.dev/api/track so we can see whether the project is being used and where to invest. Specifically:
- `install` — once on first launch on a machine, and once after each upgrade. Includes the install source (`wrapper`, `brew`, `cargo`, `sys-doctor`, `unknown`) and the previous + current versions.
- `command` — one event per meaningful action: `engine.start` (each daemon start), `session.start`, `skill.<name>.open`, etc. The verb is a short stable identifier; no chat content, no file paths, no model output.
- `system_state` — included in the `engine.start` payload: which sibling Linggen products (Sys Doctor, ling-mem) are detected on this machine via marker files in `~/.linggen/`. Lets us track adoption of those products without each needing its own phone-home.
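As a concrete sketch, an event might serialize like this. The event names and the UUIDv4 `installation_id` come from the text; the exact field layout is an assumption:

```python
import json
import uuid

# Illustrative payload shape only — not the real wire format.
event = {
    "installation_id": str(uuid.uuid4()),  # random UUIDv4, generated on first run
    "event": "command",
    "verb": "engine.start",
    "system_state": {"sys-doctor": True, "ling-mem": False},
}
print(json.dumps(event))
```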
What's never sent: chat messages, prompts, model responses, file contents, paths, embeddings, your IP (the receiver doesn't store it), or any user-identifying string. The `installation_id` is a random UUIDv4 generated on first run and stored at `~/.linggen/installation_id`.
Disabling telemetry:
- Runtime: set
LINGGEN_NO_TELEMETRY=1, ortouch ~/.linggen/no-telemetry. - Compile time: build with
cargo build --release --no-default-features(no telemetry code is even linked in).
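The two runtime opt-outs amount to a check like this (a sketch; the function name is invented, the env var and marker file path are from the text):

```python
import os
from pathlib import Path

def telemetry_enabled():
    """Hypothetical mirror of the documented opt-outs: either the
    LINGGEN_NO_TELEMETRY env var or the ~/.linggen/no-telemetry
    marker file disables telemetry."""
    if os.environ.get("LINGGEN_NO_TELEMETRY") == "1":
        return False
    if (Path.home() / ".linggen" / "no-telemetry").exists():
        return False
    return True
```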
Source is open on both ends: client at `src/telemetry/`, receiver at `linggensite/functions/api/_lib/analytics.ts`. No third-party analytics — only linggen.dev/api/track.
Apache 2.0 — engine and bundled skills. Branded apps distributed from linggen-releases are licensed under their own terms.
