Platform · No-code engines
Four no-code engines. Zero Rust files.
MSP for continuous signal-processing graphs. MEP for state-machine policies (T·C·A primitives under the hood). Multi Stacks for vehicle and industrial-IoT communication. AI Funnel for declarative edge AI. All four are off-target-testable with no hardware in the loop.
Engine comparison
Four engines · one platform.
| Dimension | MSP | MEP | Multi Stacks | AI Funnel |
|---|---|---|---|---|
| Model | Typed node-and-edge dataflow graph | State machine via T·C·A rules | Protocol / Q+R / Broadcast / Strategy | TOML graph + ONNX/TFLite + dataset |
| Execution | Continuous, always-on | Event-driven, reactive | Periodic + composed (MSP/MEP) | Camera → GPU → NPU on-device |
| Authoring | YAML graph + browser Streamlit editor | YAML policy + mep-designer (React-Flow) | JSON stack + default-stacks/ catalogue | TOML graph + cloud retrain pipeline |
| Off-target validation | msp-run CLI with CSV inputs (macOS) | mep-standalone runner + mep-lint | ECU simulator over virtual CAN | AI runtime + GPU ROI shader fakes |
| Hot-reload | LoadGraph service call, no reflash | Policy swapped in-flight, no restart | Stack JSON edit + commit | OTA channel — same as code OTA |
| Catalog | 225 graphs · 20 vehicle domains | 3 trigger types · 5 action types | 12 protocol families · 22 default stacks | Customer model + COCO dataset |
MEP — state-machine policies (T·C·A under the hood)
Policy automation without procedural code.
The product owner reads MEP as state-machine policies on the device. The engineer reads the same YAML as Trigger / Condition / Action primitives. New policy YAML is validated and swapped in with in-flight rule draining — no process restart, no device reboot.
Three trigger types
| Trigger | Example | Typical use |
|---|---|---|
| DB key change | vehicle.speed crosses 90 km/h | Threshold alerts, sampling-rate changes |
| Named event | sos.button.pressed | Hardware interrupt forwarding, micro service events |
| Cron / periodic / one-shot | every 15 min, UTC | Scheduled reporting, heartbeats |
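The policy schema itself is not reproduced on this page; as a purely illustrative sketch of how the three trigger types and the T·C·A reading might look in one policy (all field names here are assumptions, not the documented MEP schema):

```yaml
# Illustrative only — key names are assumptions, not the real MEP schema.
rules:
  - name: overspeed_alert
    trigger:                 # T — fires when a DB key changes
      db_key: vehicle.speed
    condition: value > 90    # C — guard expression on the new value (km/h)
    action:                  # A — runs when T fires and C holds
      call_interface:
        target: alerting
        method: raise
        args: { kind: overspeed }
  - name: quarter_hour_report
    trigger:
      cron: "*/15 * * * *"   # periodic trigger, UTC
    action:
      call_interface:
        target: reporting
        method: heartbeat
```

The product-owner reading is the state diagram this rule set induces; the engineer reading is the three primitives per rule.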
call_interface action
The action call_interface dispatches to MSP, Multi Stacks, any custom driver, or any micro service through a type-safe proxy with semver version validation and a configurable timeout (default 3000 ms). Micro service dependencies are declared with semver ranges and degrade gracefully at runtime.
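As a sketch of the shape such a declaration might take (the exact keys — `requires`, `timeout_ms` — are assumptions, not the documented schema):

```yaml
# Illustrative sketch — key names are assumed, not the documented schema.
requires:
  telemetry-service: "^2.1"    # semver range; degrades gracefully if absent
rules:
  - name: push_snapshot
    trigger: { event: sos.button.pressed }
    action:
      call_interface:
        target: telemetry-service
        method: upload_snapshot
        timeout_ms: 5000       # overrides the 3000 ms default
```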
mep-designer (React/React-Flow) generates valid YAML from a node graph and
auto-discovers requires declarations for engineers who prefer a visual authoring
surface. The same designer renders the state diagram for product review.
MSP — Continuous dataflow
225 graphs. 20 domains. Single-digit-percent CPU budget.
116-kernel browser editor
The msp-editor-streamlit-data-app browser canvas lists all 116 gamma kernel types,
validates the graph structure against the JSON schema in real time, and exports the .msp.yml
the runtime reads directly. Dual beta/gamma mode in one editor.
Runtime injection over a service call
Push new graphs to a running device via service MspService.LoadGraph
without a firmware update. The 20-domain catalog — crash, EV battery, fuel, GNSS, fleet, road,
and more — provides a starting point for vehicle telemetry without authoring from scratch.
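No catalog graph is reproduced on this page; purely as an illustration of the node-and-edge dataflow form an .msp.yml takes (kernel names, parameters, and layout here are assumptions):

```yaml
# Illustrative .msp.yml sketch — kernel names and structure are assumptions.
name: harsh_brake
nodes:
  - id: accel_in
    kernel: signal_source       # reads a named signal from the DB
    params: { key: vehicle.accel_x }
  - id: lowpass
    kernel: iir_filter          # smooths the raw acceleration
    params: { cutoff_hz: 5 }
  - id: detect
    kernel: threshold           # flags hard deceleration events
    params: { below: -0.4 }
edges:
  - accel_in -> lowpass
  - lowpass -> detect
```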
Multi Stacks — Vehicle + industrial-IoT communication
Four moving parts per stack. Twelve protocol families.
Every Multi Stacks deployment, vehicle bus or Modbus IoT, declares four moving parts: Protocol, Question + Response, Broadcast, Strategy. Stacks are JSON data files; protocol changes are JSON edits.
22 default stacks ship with the OS
OBD-II, UDS, J1939, ISOBUS, OBFCM, Modbus RTU/TCP, CANopen — validated on every CI push via Python/pytest standalone tests. New stacks live in version control, not in firmware builds.
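Those standalone tests are not reproduced here; as a minimal sketch of the kind of structural check such a pytest suite could run — asserting that a stack JSON declares all four moving parts — with key and function names that are assumptions, not the real test suite:

```python
import json

# Assumed key names for the four declared parts of a stack file.
REQUIRED_PARTS = ("protocol", "question_response", "broadcast", "strategy")


def missing_parts(stack: dict) -> list[str]:
    """Return the declared parts a stack JSON is missing."""
    return [p for p in REQUIRED_PARTS if p not in stack]


def test_stack_declares_four_parts():
    stack = json.loads("""
    {
      "protocol": {"family": "obd2"},
      "question_response": [],
      "broadcast": [],
      "strategy": {"mode": "periodic", "period_ms": 1000}
    }
    """)
    assert missing_parts(stack) == []


def test_incomplete_stack_is_flagged():
    assert missing_parts({"protocol": {}}) == [
        "question_response", "broadcast", "strategy"
    ]
```

Because stacks are plain JSON data files, this level of validation needs no target hardware and runs on every CI push.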
Composes with MSP and MEP
Periodic strategy out of the box. Signal-driven sequences via MSP (a graph triggers a UDS request); event-driven sequences via MEP (a rule fires on a named event and runs a stack action). Advanced strategy stays declarative.
AI Funnel — Declarative edge AI
TOML graph in. OTA out. No on-device toolchain.
The customer ships a TOML graph plus an ONNX/TFLite model and a COCO dataset. Munic cloud retrains, quantises, validates, packages, and OTAs. The on-device runtime executes camera → GPU → NPU with zero pixel-copy.
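The real AI Funnel schema is not documented on this page; as a purely illustrative sketch of what such a TOML graph might contain (table and key names are assumptions):

```toml
# Illustrative sketch — table and key names are assumptions, not the real schema.
[pipeline]
name = "cargo_door_monitor"

[input.camera]
device = "front"
fps    = 15

[preprocess.roi]             # region-of-interest crop/resize, run on the GPU
crop   = [320, 180, 640, 360]
resize = [224, 224]

[model]
file    = "door_state.onnx"  # ONNX or TFLite, retrained in Munic cloud
runs_on = "npu"

[dataset]
format = "coco"
path   = "datasets/door_state/"
```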
Same OTA channel as code
A model retrain ships through the same OTA channel as a micro service update — staged rollout, version pinning, fleet rollback all unified across code and models.
Zero CPU pixel reads
The GPU crops and resizes the region of interest; the AI runtime drives the NPU. Camera, GPU, and NPU share memory directly — the handle moves, the pixel data stays in place. The runtime detail is the proof that the declarative model is real.
Off-target validation
CI-level testing without a device — across all four engines.
| Tool | Engine | What it catches |
|---|---|---|
| msp-run | MSP | Graph execution against CSV input on macOS, no device needed |
| mep-standalone | MEP | Full policy replay with scenario YAML files |
| mep-lint | MEP | Schema errors, undefined DB keys, cyclic rule graphs, expression type errors |
| ECU simulator | Multi Stacks | ECU simulation over virtual CAN — UDS, OBD-II, ISO-TP regression suites with no physical bench |
| AI runtime fake | AI Funnel | Inference stub for CI; no NPU hardware required |
Every no-code engine has an off-target runner. A correct YAML/JSON/TOML format and data contract are still required — these tools validate the config surface, not the hardware integration.
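None of the runners above are shown here; purely as a conceptual illustration of the idea they share — replaying recorded inputs against a declarative rule with no hardware in the loop — a few lines of plain Python (this is not the mep-standalone tool or its formats):

```python
# Conceptual illustration only — not the mep-standalone tool or its formats.
def replay(samples, threshold):
    """Replay recorded (time, speed) samples against an overspeed rule off-target."""
    fired = []
    for t, speed in samples:
        if speed > threshold:          # the rule's condition
            fired.append((t, speed))   # the rule's action, stubbed for CI
    return fired


scenario = [(0, 80), (1, 92), (2, 88), (3, 95)]
print(replay(scenario, 90))  # → [(1, 92), (3, 95)]
```

The real runners do the same thing at full fidelity: recorded scenario files in, deterministic rule outcomes out, asserted in CI.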
Out of the box, together
One Python container drives all four engines.
The four engines are not silos. A single Python container, talking to the in-process MQTT broker, can drive MSP, MEP, Multi Stacks, and AI Funnel from one process — no Rust toolchain, no per-engine SDK, no custom glue.
MSP · push a graph

```python
c.publish(
    "mos/msp/LoadGraph",
    json.dumps({"name": "harsh_brake", "yaml": yaml_str}),
)
```

MEP · load a policy

```python
c.publish(
    "mos/mep/LoadPolicy",
    json.dumps({"name": "geofence", "yaml": policy_str}),
)
```

Multi Stacks · load a stack

```python
c.publish(
    "mos/multi-stacks/LoadStack",
    json.dumps({"name": "j1939_truck", "json": stack_json}),
)
```

AI Funnel · subscribe to detections

```python
c.subscribe("mos/ai-runtime/detections")
c.on_message = lambda _c, _u, m: handle(m.payload)
```

Same MQTT client, four engines, no language limit. Any MQTT-capable runtime — C, C++, Go, Rust, JavaScript — can do the same.
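The snippets above assume an already-connected MQTT client `c`. A self-contained sketch of the same calls, with the topics and payload shapes taken from the snippets and everything else (function names, graph/policy names) as illustrative assumptions:

```python
import json


def load_graph_message(name: str, graph_yaml: str) -> tuple[str, str]:
    """Topic and JSON payload for an MSP LoadGraph call."""
    return "mos/msp/LoadGraph", json.dumps({"name": name, "yaml": graph_yaml})


def load_policy_message(name: str, policy_yaml: str) -> tuple[str, str]:
    """Topic and JSON payload for a MEP LoadPolicy call."""
    return "mos/mep/LoadPolicy", json.dumps({"name": name, "yaml": policy_yaml})


def load_stack_message(name: str, stack_json: str) -> tuple[str, str]:
    """Topic and JSON payload for a Multi Stacks LoadStack call."""
    return "mos/multi-stacks/LoadStack", json.dumps({"name": name, "json": stack_json})


def push_all(c, graph_yaml: str, policy_yaml: str, stack_json: str) -> None:
    """Drive three engines from one connected MQTT client."""
    c.publish(*load_graph_message("harsh_brake", graph_yaml))
    c.publish(*load_policy_message("geofence", policy_yaml))
    c.publish(*load_stack_message("j1939_truck", stack_json))
```

With the paho-mqtt package, a client connected to the in-process broker could be passed straight to `push_all`; any other MQTT client library with a `publish(topic, payload)` call works the same way.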
When no-code is not enough
Three programming tiers designed to coexist.
A typical MOS4 product mixes MSP for continuous signal processing, MEP for state-machine policy, Multi Stacks for vehicle or Modbus comms, AI Funnel for edge AI, a custom Rust micro service for the genuinely novel algorithm, and a Python or C++ container for the data-science team's classifier.
See the full SDK →

FAQ
Frequently asked questions
How do the four engines fit together?
MSP produces continuous named signals from sensor and bus inputs. MEP reacts to discrete triggers (signal threshold, event, cron) with state-machine policies. Multi Stacks talks to vehicle buses and industrial-IoT devices, driven periodically or composed with MSP/MEP. AI Funnel runs declarative edge AI — a TOML graph plus a model and dataset; Munic cloud retrains and OTAs. The four engines share one OS, one OTA channel, one off-target story.
Is MEP a state machine or a T·C·A engine?
Both readings ship. The product owner reads MEP as state-machine policies on the device — the policy YAML is the state machine. The engineer reads the same YAML as Trigger / Condition / Action primitives. There is no separate state-machine runtime; the composed rule set is the state machine.
Is Multi Stacks the same as OBDStacks?
Yes. OBDStacks is the legacy name; Multi Stacks is the canonical name as of 2026-05-05. Same engine, same JSON DSL, same default-stacks/ catalogue.
Do I need hardware to test a policy or graph?
No. The MEP standalone runner replays YAML scenario files off-target; the MEP lint tool catches errors statically. The MSP runner executes any .msp.yml with CSV inputs on macOS. The bundled ECU simulator covers Multi Stacks over virtual CAN. The AI runtime fake stubs inference for AI Funnel CI. All four engines have CI-level off-target runners.
Does any of this require Rust?
No. The four engines are configured with YAML, JSON, and TOML. The runtime is Rust, but the authoring surface is data files. A Python container can drive any of the engines over the in-process MQTT broker — see "Out of the box, together" above.
When does the SDK path apply?
When the algorithm genuinely cannot be expressed as a graph, a policy, a stack, or a TOML AI graph. Novel detection logic, proprietary model inference, custom hardware integration. The four no-code engines cover the bulk of the device-behaviour surface; Rust micro services, Python containers, and C++ containers cover the rest.
Bring a YAML, a JSON, or a TOML.
Show us the device behaviour, the protocol, the signal extraction, or the inference task; we'll map it to the right engine on a development device.