NoKV is a Go-native storage engine that mixes RocksDB-style manifest discipline with Badger-inspired value separation. You can embed it locally, drive it via multi-Raft regions, or front it with a Redis protocol gateway, all from a single topology file.
- **Dual runtime modes**: call `NoKV.Open` inside your process or launch `nokv serve` for a distributed deployment, no code changes required.
- **Hybrid LSM + ValueLog**: WAL → MemTable → SST pipeline for low-latency writes, with a ValueLog to keep large payloads off the hot path.
- **MVCC-native transactions**: snapshot isolation, conflict detection, TTL, and iterators built into the core (no external locks).
- **Multi-Raft regions**: `raftstore` manages per-region Raft groups, WAL/manifest pointers, and tick-driven leader elections.
- **Redis gateway**: `cmd/nokv-redis` exposes RESP commands (SET/GET/MGET/NX/XX/TTL/INCR...) on top of Raft-backed storage.
- **Observability first**: `nokv stats`, expvar endpoints, hot-key tracking, RECOVERY/TRANSPORT metrics, and ready-to-use recovery scripts.
- **Single-source config**: `raft_config.json` feeds local scripts, Docker Compose, the Redis gateway, and CI, so there's zero drift.
Start an end-to-end playground with either the local script or Docker Compose. Both spin up a three-node Raft cluster (plus the optional TSO) and expose the Redis-compatible gateway.
```bash
# Option A: local processes
./scripts/run_local_cluster.sh --config ./raft_config.example.json

# In another shell: launch the Redis gateway on top of the running cluster
go run ./cmd/nokv-redis --addr 127.0.0.1:6380 --raft-config raft_config.example.json

# Option B: Docker Compose (cluster + gateway + TSO)
docker compose up --build

# Tear down
docker compose down -v
```

Once the cluster is running you can point any Redis client at 127.0.0.1:6380 (or the address exposed by Compose).
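For example, a minimal Go session against the gateway, using the third-party `github.com/redis/go-redis/v9` client (any RESP client works the same way):

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// Connect to the gateway started in the quickstart above.
	rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6380"})
	defer rdb.Close()

	// SET/GET behave like stock Redis; writes are committed through NoKV.
	if err := rdb.Set(ctx, "greeting", "world", 0).Err(); err != nil {
		panic(err)
	}
	val, err := rdb.Get(ctx, "greeting").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println("greeting =", val)
}
```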
For quick CLI checks:
```bash
# Inspect stats from an existing workdir
go run ./cmd/nokv stats --workdir ./artifacts/cluster/store-1
```

Minimal embedded snippet:
```go
package main

import (
	"fmt"
	"log"

	NoKV "github.com/feichai0017/NoKV"
	"github.com/feichai0017/NoKV/utils"
)

func main() {
	opt := NoKV.NewDefaultOptions()
	opt.WorkDir = "./workdir-demo"

	db := NoKV.Open(opt)
	defer db.Close()

	key := []byte("hello")
	if err := db.SetCF(utils.CFDefault, key, []byte("world")); err != nil {
		log.Fatalf("set failed: %v", err)
	}

	entry, err := db.Get(key)
	if err != nil {
		log.Fatalf("get failed: %v", err)
	}
	fmt.Printf("value=%s\n", entry.Value)
	entry.DecrRef() // release the entry once its value has been consumed
}
```
> ℹ️ `run_local_cluster.sh` rebuilds `nokv`, `nokv-config`, and `nokv-tso`, seeds manifests via `nokv-config manifest`, and parks logs under `artifacts/cluster/store-<id>/server.log`. Use `Ctrl+C` to exit cleanly; if a process crashes, wipe the workdir (`rm -rf ./artifacts/cluster`) before restarting to avoid WAL replay errors.
Everything hangs off a single file: `raft_config.example.json`.
- Local scripts (`run_local_cluster.sh`, `serve_from_config.sh`, `bootstrap_from_config.sh`) ingest the same JSON, so local runs match production layouts.
- Docker Compose mounts the file into each container; manifests, transports, and the Redis gateway all stay in sync.
- Need more stores or regions? Update the JSON and re-run the script/Compose; no code changes required.
| Layer | Tech/Package | Why it matters |
|---|---|---|
| Storage Core | `lsm/`, `wal/`, `vlog/` | Hybrid log-structured design with manifest-backed durability and value separation. |
| Concurrency | `mvcc/`, `txn.go`, oracle | Timestamp oracle + lock manager for MVCC transactions and TTL-aware reads. |
| Replication | `raftstore/*` | Multi-Raft orchestration (regions, peers, router, schedulers, gRPC transport). |
| Tooling | `cmd/nokv`, `cmd/nokv-config`, `cmd/nokv-redis` | CLI, config helper, and Redis-compatible gateway share the same topology file. |
| Observability | `stats`, `hotring`, expvar | Built-in metrics, hot-key analytics, and crash recovery traces. |
```mermaid
graph TD
    Client[Client API / Txn] -->|Set/Get| DBCore
    DBCore -->|Append| WAL
    DBCore -->|Insert| MemTable
    DBCore -->|ValuePtr| ValueLog
    MemTable -->|Flush Task| FlushMgr
    FlushMgr -->|Build SST| SSTBuilder
    SSTBuilder -->|LogEdit| Manifest
    Manifest -->|Version| LSMLevels
    LSMLevels -->|Compaction| Compactor
    FlushMgr -->|Discard Stats| ValueLog
    ValueLog -->|GC updates| Manifest
    DBCore -->|Stats/HotKeys| Observability
```
Key ideas:

- **Durability path**: WAL first, memtable second. ValueLog writes occur before the WAL append so crash replay can fully rebuild state.
- **Metadata**: the manifest stores SST topology, WAL checkpoints, and vlog head/deletion metadata.
- **Background workers**: the flush manager handles `Prepare → Build → Install → Release`, compaction reduces level overlap, and value log GC rewrites segments based on discard stats.
- **Transactions**: MVCC timestamps ensure consistent reads; commit reuses the same write pipeline as standalone writes (a sketch follows this list).
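A minimal read-modify-write sketch of that last point. The method names below (`NewTransaction`, `Set`, `Commit`, `Discard`) are assumptions inferred from the Badger-style "managed/unmanaged transactions" wording, not confirmed NoKV API; check `txn.go` and docs/txn.md for the real signatures:

```go
// Hypothetical sketch: NewTransaction/Set/Commit/Discard are assumed names,
// and *NoKV.DB is assumed as the handle type from the embedded snippet above.
func writeInTxn(db *NoKV.DB) error {
	txn := db.NewTransaction(true) // true => read-write transaction
	defer txn.Discard()            // no-op after a successful Commit

	if err := txn.Set([]byte("config:mode"), []byte("raft")); err != nil {
		return err
	}

	// Commit stamps the batch with an oracle timestamp, runs conflict
	// detection, then reuses the standalone write path (WAL -> MemTable).
	return txn.Commit()
}
```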
Dive deeper in docs/architecture.md.
| Module | Responsibilities | Source | Docs |
|---|---|---|---|
| WAL | Append-only segments with CRC, rotation, replay (`wal.Manager`). | `wal/` | WAL internals |
| LSM | MemTable, flush pipeline, leveled compactions, iterator merging. | `lsm/` | Memtable · Flush pipeline · Cache |
| Manifest | VersionEdit log + CURRENT handling, WAL/vlog checkpoints, Region metadata. | `manifest/` | Manifest semantics |
| ValueLog | Large value storage, GC, discard stats integration. | `vlog.go`, `vlog/` | Value log design |
| Transactions | MVCC oracle, managed/unmanaged transactions, iterator snapshots. | `txn.go` | Transactions & MVCC |
| RaftStore | Multi-Raft Region management, hooks, metrics, transport. | `raftstore/` | RaftStore overview |
| HotRing | Hot key tracking, throttling helpers. | `hotring/` | HotRing overview |
| Observability | Periodic stats, hot key tracking, CLI integration. | `stats.go`, `cmd/nokv` | Stats & observability · CLI reference |
| Filesystem | mmap-backed file helpers shared by SST/vlog. | `file/` | File abstractions |
Each module has a dedicated document under docs/ describing APIs, diagrams, and recovery notes.
- `Stats.StartStats` publishes metrics via `expvar` (flush backlog, WAL segments, value log GC stats, txn counters).
- `cmd/nokv` gives you:
  - `nokv stats --workdir <dir> [--json] [--no-region-metrics]`
  - `nokv manifest --workdir <dir>`
  - `nokv regions --workdir <dir> [--json]`
  - `nokv vlog --workdir <dir>`
- `hotring` continuously surfaces hot keys in stats + CLI so you can pre-warm caches or debug skewed workloads.
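Since the metrics ride on Go's standard `expvar` package, they are served as one JSON object at `/debug/vars` on whatever address the process exposes. A quick way to scrape them (the port below is a placeholder for your own metrics address):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// expvar registers its handler at /debug/vars; 127.0.0.1:2112 is a
	// placeholder for whatever address the process publishes metrics on.
	resp, err := http.Get("http://127.0.0.1:2112/debug/vars")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // one JSON object with every published metric
}
```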
More in docs/cli.md and docs/testing.md.
- `cmd/nokv-redis` exposes a RESP-compatible endpoint. In embedded mode (`--workdir`) every command runs inside local MVCC transactions; in distributed mode (`--raft-config`) calls are routed through `raftstore/client` and committed with TwoPhaseCommit, so NX/XX, TTL, arithmetic, and multi-key writes match the single-node semantics (see the sketch after this list).
- TTL metadata is stored under `!redis:ttl!<key>` and is automatically cleaned up when reads detect expiration.
- `--metrics-addr` publishes `NoKV.Redis` statistics via expvar, and `--tso-url` can point to an external TSO service (otherwise a local oracle is used).
- A ready-to-use cluster configuration is available at `cmd/nokv-redis/raft_config.example.json`, matching both `scripts/run_local_cluster.sh` and the Docker Compose setup.
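A short sketch of those NX/TTL semantics, again through the third-party go-redis v9 client against the quickstart gateway address:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6380"})
	defer rdb.Close()

	// SET ... NX EX 30: only writes when the key is absent; the gateway
	// tracks the expiry under !redis:ttl!<key>.
	ok, err := rdb.SetNX(ctx, "lock:job-42", "owner-1", 30*time.Second).Result()
	if err != nil {
		panic(err)
	}
	fmt.Println("acquired:", ok) // false if another owner already holds the key

	// TTL reports the remaining lifetime; expired keys vanish on read.
	ttl, err := rdb.TTL(ctx, "lock:job-42").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println("ttl:", ttl)
}
```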
For the complete command matrix, configuration and deployment guides, see docs/nokv-redis.md.
| Topic | Document |
|---|---|
| Architecture deep dive | docs/architecture.md |
| WAL internals | docs/wal.md |
| Flush pipeline | docs/flush.md |
| Memtable lifecycle | docs/memtable.md |
| Transactions & MVCC | docs/txn.md |
| Manifest semantics | docs/manifest.md |
| ValueLog manager | docs/vlog.md |
| Cache & bloom filters | docs/cache.md |
| Hot key analytics | docs/hotring.md |
| Stats & observability | docs/stats.md |
| File abstractions | docs/file.md |
| Crash recovery playbook | docs/recovery.md |
| Testing matrix | docs/testing.md |
| CLI reference | docs/cli.md |
| RaftStore overview | docs/raftstore.md |
| Redis gateway | docs/nokv-redis.md |
Apache-2.0. See LICENSE.