Vehicles · Machines · IoT — one OS

A distributed component OS for everything that moves, lifts, or sees.

35 production-ready components. 129 features available. A 28.4 MB steady-state footprint that boots in 1.6 s on modem-class silicon.

RSS 28.4 MB
Boot 1.6 s
1 core · <5% overhead
SBOM: CycloneDX
CRA: ready
28.4 MB steady-state RSS (modem-class reference profile)
1.6 s first-app-ready boot (modem-class reference profile)
<5% CPU / 10 MB container budget (MCM envelope)
35 micro services · 129 features (production catalog)
up to ~100 TOPS AI-class silicon (QCS family, binary-compatible with QCS6490)

Metrics tagged "modem-class reference profile" are measured on a representative modem-class production device. Steady-state RSS is taken at idle; first-app-ready boot is the time from kernel handoff to the first supervised micro service responding to a service call. Container budget is the MCM overhead envelope per the SDK reference profile.

Four conversations · One platform

One artefact per stakeholder.

Each pillar maps to a page the visitor can forward to the colleague who owns that decision. The CTO usually lands first; the others read what gets sent to them.

CEO & Board 01 / 04

Built for production

A documented SDLC, SBOM, and CRA mapping on every release.

CycloneDX SBOM, cargo audit / geiger / deny, semgrep, and a public PSIRT process. The EU Cyber Resilience Act applies from 2027 — MOS4 ships the compliance artefacts.

CFO & Procurement 02 / 04

Right-size the platform economics

One OS across a modem-class to AI-class silicon family — Munic ports it.

Munic curates the silicon family and ports MOS4 across it; your team picks the tier per product. A practical option alongside RTOS, hobby Linux, Android Automotive, or full ROS2.

CMO & Product 03 / 04

Stand up the AI narrative

Declare the AI in TOML. Camera, GPU, and NPU share memory — no CPU pixel copies.

Run a full DMS + ADAS workload (5 models) plus H.265 encode on a 10-TOPS-class device. Vision adds multi-camera capture, GPU crop and resize, and GDPR live anonymisation before any frame leaves the pipeline.

CTO & Engineering 04 / 04

Reuse the team and tooling

Python, Rust, C, C++, and Go nodes on a single platform.

Config for product managers. No-code engines (MSP, MEP, Multi Stacks, AI Funnel) for embedded engineers. Full code where it matters. Three programming tiers, your call per feature.

Out of the box, together

One Python container drives all four engines.

The four engines are not silos. A single Python container, talking to the in-process MQTT broker, can drive MSP, MEP, Multi Stacks, and AI Funnel from one process — no Rust toolchain, no per-engine SDK, no custom glue.
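The snippets below assume an already-connected client `c`. As a minimal setup sketch using the paho-mqtt library — the broker host and port here are assumptions (the broker runs in-process on the device), not documented values:

```python
def connect(host: str = "localhost", port: int = 1883):
    """Create and start an MQTT client for the device's in-process broker.
    Host/port are assumptions -- adjust for your device image."""
    import paho.mqtt.client as mqtt  # pip install paho-mqtt
    c = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0 API
    c.connect(host, port)
    c.loop_start()  # background network loop services publish/subscribe
    return c
```

Call `c = connect()` once, then drive all four engines with the calls below.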

MSP · push a graph

c.publish(
    "mos/msp/LoadGraph",
    json.dumps({"name": "harsh_brake", "yaml": yaml_str}),
)

MEP · load a policy

c.publish(
    "mos/mep/LoadPolicy",
    json.dumps({"name": "geofence", "yaml": policy_str}),
)

Multi Stacks · load a stack

c.publish(
    "mos/multi-stacks/LoadStack",
    json.dumps({"name": "j1939_truck", "json": stack_json}),
)

AI Funnel · subscribe to detections

c.subscribe("mos/ai-runtime/detections")
c.on_message = lambda _c, _u, m: handle(m.payload)

Same MQTT client, four engines. Any MQTT-capable language — Python, Rust, C, C++, Go, JavaScript — drives the same surface. No language limit.
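To act on the AI Funnel subscription above, the `handle` callback has to parse the payload. A sketch, assuming a JSON body carrying an `objects` list — the real detection schema lives in the service's interface definition, so every field name here (`objects`, `label`, `score`) is an assumption:

```python
import json

def handle(payload: bytes, threshold: float = 0.5) -> list:
    """Keep detections above a confidence threshold.
    Field names are illustrative, not the documented schema."""
    msg = json.loads(payload)
    return [o for o in msg.get("objects", [])
            if o.get("score", 0.0) >= threshold]
```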

The pipeline

Four engines · one platform.

Every MOS4 product runs the same four-engine pipeline: bus and IoT decode, signal processing, state-machine policy, and declarative edge AI — then branches to the on-device runtime and the Munic cloud in the same OTA cycle.

Four-engine pipeline. Vehicle bus and industrial-IoT inputs (CAN, CAN-FD, J1939, Modbus) feed Multi Stacks; sensor inputs (camera, IMU, GNSS) feed MSP signal-processing graphs. Multi Stacks decoded signals also feed MSP. MSP outputs feed MEP, the state-machine policy engine (T·C·A primitives under the hood). MEP outputs feed AI Funnel, the declarative edge-AI engine. AI Funnel branches to two destinations in the same OTA cycle: an on-device NPU/GPU/CPU runtime, and the Munic cloud for retraining and OTA delivery.

flowchart LR
  Bus[CAN · CAN-FD · J1939 · Modbus] --> MS[Multi Stacks]
  Sensors[Camera · IMU · GNSS] --> MSP[MSP graphs]
  MS --> MSP
  MSP --> MEP[MEP — state-machine policy]
  MEP --> AI[AI Funnel]
  AI --> RT[On-device NPU / GPU / CPU runtime]
  AI --> Cloud[Munic cloud — OTA + retrain]
  class AI ai-node
  class RT ai-node
Bus + sensors → Multi Stacks → MSP → MEP → AI Funnel. Four engines, two destinations: on-device runtime and Munic cloud share the same OTA cycle.

Before / after — same OS, generation jump.

metric                                MOS 3.x    MOS4 (modem-class reference)
first-app-ready boot                  ~90 s      1.6 s
steady-state RSS                      ~60 MB     28.4 MB
minimal user-written Rust micro service   n/a    < 30 lines

AI Funnel

Declare your AI. Munic deploys it.

Customers ship a TOML graph plus an ONNX/TFLite model and a COCO dataset. Camera, GPU, and NPU share memory directly — the CPU never copies pixel data. Run multiple concurrent models with H.265 encode on a 10-TOPS-class device.
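The TOML schema itself isn't published on this page. As a purely illustrative sketch — every key name below is an assumption, not the documented format — a declaration might pair a model with its input and output:

```toml
# Illustrative only: key names are assumptions, not the documented schema.
[model]
name   = "pedestrian_detect"
format = "onnx"                 # or "tflite"
path   = "models/pedestrian.onnx"

[input]
source = "camera0"              # zero-copy from the capture pipeline
width  = 640
height = 640                    # GPU crop/resize target

[output]
topic = "mos/ai-runtime/detections"
```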

AI · intelligence layer
— STEP 01

Customer provides

A TOML graph, ONNX/TFLite models, a COCO dataset, and an optional business-logic container.

— STEP 02

Munic cloud does

Retrain, quantise, validate, benchmark, package, and OTA the unified triage model.

— STEP 03

On-device runtime

GPU crop and resize, NPU inference, shared memory end-to-end. No pixel bytes traverse the CPU.

Fan-in diagram: three input streams converge on an amber decision diamond — the on-device AI triage model

35 production components · 129 features

What you do not have to build.

Every supervised component is process-isolated, with explicit typed interfaces, its own CI pipeline, and per-component resource limits. GNSS, modem, OTA, power, vehicle-bus firmware — already there.

Cloud egress

GraphQL mesh as the customer entry point.

The end-to-end cloud path: component in container → MQTT bridge → communication gateway → cloud-connect → cloud microservice → GraphQL mesh → customer application.

Cloud server connected to three vehicles by data links — telematics topology diagram

Cloud egress topology — component to GraphQL mesh

flowchart LR
    A[Component in container] --> B[MQTT bridge]
    B --> C[Communication gateway]
    C --> D[cloud-connect]
    D --> E[cloud microservice]
    E --> F[GraphQL mesh]
    F --> G[Customer app]

Public GraphQL gateway reference: gateway.integration.munic.io/services/graphql_gateway/docs/
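As a client-side sketch in Python (stdlib only), a customer application posts a query to the mesh. The endpoint is inferred from the docs URL above and the bearer-auth header is an assumption — take both, plus real query fields, from the gateway reference:

```python
import json
import urllib.request

# Endpoint inferred from the docs URL above -- verify against the reference.
GATEWAY = "https://gateway.integration.munic.io/services/graphql_gateway"

def graphql_post(query: str, token: str) -> urllib.request.Request:
    """Build (not send) a GraphQL POST; bearer auth is an assumption."""
    return urllib.request.Request(
        GATEWAY,
        data=json.dumps({"query": query}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = graphql_post("{ __typename }", "YOUR_TOKEN")
```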

Container runtime

10-second hot-swap. Five languages.

From end-of-compile to first successful service call on a real device: 10 seconds. Python, Rust, C, C++, and Go nodes run side-by-side with enforced resource limits, within a 5% CPU/RAM overhead budget. Any MQTT client, any language — no language limit.

Production container isolation

Resource limits per container as a contract, not best-effort — from day one.

5 language SDKs

Python, Rust, C, C++, Go — existing nodes drop in without rewrite. ROS2 nodes ride along via the sidecar pattern.

10 s hot-swap

End-of-compile to first successful service call on a real device.

<5% CPU / ~10 MB RSS overhead

MCM container budget envelope — measured on the SDK reference profile.

SDLC · CI · compliance

63 proto files. SBOM on every release.

63 .proto files in mos-interfaces cover 46 service interfaces. cargo audit, cargo deny, cargo cyclonedx → CycloneDX SBOM. semgrep static analysis. Every micro service, every commit.

cargo audit · CVE advisory scan
cargo geiger · unsafe Rust audit
cargo deny · OSS licence compliance
cargo cyclonedx · SBOM generation
semgrep · static analysis
ci-gamma · shared CI template, one include:
CRA-ready

CRA-ready. Day one. Every release.

Article-by-article CRA mapping, SBOM, security pipeline, PSIRT process.

Read compliance →

Build on MOS4.

A 30-minute discovery call with engineering — no slide deck, no NDA, a direct conversation about fit.