IntelNav

Decentralized, pipeline-parallel LLM inference.

prompt → you: layers [0..k) → peer A: [k..m) → peer B: [m..N) → tokens

IntelNav splits a model into layer-range slices, scatters them across volunteer hardware, and streams hidden states through the chain to answer a prompt. No single peer holds the whole model.
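A minimal sketch of the idea, in Python with toy matrix "layers" (names like `Slice` and `run_pipeline` are illustrative, not IntelNav's actual API; in a real deployment each peer holds only its own weights and activations cross the network):

```python
import numpy as np

class Slice:
    """One peer's contiguous layer range [start, end) of an N-layer model."""
    def __init__(self, start, end, weights):
        self.start, self.end = start, end
        # Toy stand-in: one matrix per transformer block. A real peer would
        # load only weights[start:end] from disk, never the full model.
        self.weights = weights

    def forward(self, hidden):
        # Apply each owned layer, then hand the hidden state downstream.
        for w in self.weights[self.start:self.end]:
            hidden = np.tanh(hidden @ w)
        return hidden

def run_pipeline(prompt_embedding, slices):
    """Stream the hidden state through the chain of peers, in layer order."""
    hidden = prompt_embedding
    for s in sorted(slices, key=lambda s: s.start):
        hidden = s.forward(hidden)
    return hidden

# Toy 6-layer model split across three peers: [0,2), [2,4), [4,6).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 8)) * 0.1 for _ in range(6)]
peers = [Slice(0, 2, weights), Slice(2, 4, weights), Slice(4, 6, weights)]
out = run_pipeline(rng.standard_normal(8), peers)
print(out.shape)  # (8,)
```

The point of the chain layout is that only the (small) hidden state moves between peers per token, never the weights.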

Slices are addressed on a Kademlia DHT. Every peer must contribute — you either host a slice or run as a DHT relay.
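One plausible way to derive and resolve such addresses, sketched under assumptions (the key format, `slice_key`, and `nearest_peers` are hypothetical, not IntelNav's wire protocol): hash the model ID plus layer range into a key, then rank peers by Kademlia's XOR distance to it.

```python
import hashlib

def slice_key(model_id: str, start: int, end: int) -> int:
    """Hypothetical 160-bit DHT key for the slice covering layers [start, end)."""
    digest = hashlib.sha1(f"{model_id}/layers/{start}-{end}".encode()).digest()
    return int.from_bytes(digest, "big")

def nearest_peers(key: int, peer_ids: list[int], k: int = 3) -> list[int]:
    """Kademlia routing rule: smallest XOR distance to the key wins."""
    return sorted(peer_ids, key=lambda pid: pid ^ key)[:k]

key = slice_key("example-model", 0, 20)
# Toy peer IDs; `key ^ 1` is deliberately at XOR distance 1 from the key.
peers = [0x1234, 0xBEEF, key ^ 1, 0xCAFE]
closest = nearest_peers(key, peers)[0]
print(closest == key ^ 1)  # True: the nearest peer is the one hosting the slice
```

Because lookups go through the DHT, relay-only peers still do useful work: they store routing entries and forward queries even when they host no slice.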

Apache-2.0 — this page is a placeholder while v1 stabilises.