# ns-3 Backend
OpenOptics can drive the ns-3 discrete-event network simulator as a third
backend alongside Mininet and Tofino. The same Python API — `BaseNetwork`,
`OpticalTopo.*`, `OpticalRouting.*` — is unchanged; the `backend="ns3"`
switch replaces BMv2/hardware with packet-level simulation.

This document covers:

- User guide — how to install and run
- System workflow — what happens under the hood
- Status — what is supported today
## 1. User guide

### 1.1 Prerequisites
| Requirement | Details |
|---|---|
| ns-3 C++ build deps | `git`, `g++`, `cmake`, etc. (see the `apt install` line in the quick start) |
| Python ≥ 3.8 | the openoptics base requirement |
| `cppyy` | ns-3.37+ Python bindings are JIT-generated by cppyy |
| ~4 GB disk | for the ns-3 source tree and build artefacts |
| ~15–30 min one-time build | dominated by ns-3's `./ns3 build` step |
No Docker is needed — unlike the Mininet backend, the ns-3 backend is a regular Python process that loads ns-3 via its Python bindings.
### 1.2 Quick start

```bash
# 1. System deps (Debian/Ubuntu)
sudo apt install -y git g++ cmake pkg-config python3-dev \
    python3-setuptools libgsl-dev libxml2-dev

# 2. OpenOptics itself
pip install "openoptics-dcn[ns3]"
pip install cppyy   # required by ns-3's Python bindings

# 3. Build ns-3 with the OpenOptics contrib module linked in.
#    The default tag is ns-3.44.
openoptics-install-ns3 ~/ns-3-dev
```
The helper records the selected tree in
`$OPENOPTICS_STATE_DIR/ns3_env.json` (default `~/.openoptics/ns3_env.json`).
`Ns3Backend` reads that file and prepends the bindings directory to
`sys.path`, so normal OpenOptics scripts do not need manual shell exports.

The helper also prints the equivalent exports. They are optional — useful
only if you want `from ns import ns` in a Python session that does not go
through `Ns3Backend`, or if you want to override the recorded path:

```bash
export NS3_DIR=~/ns-3-dev
export PYTHONPATH=$NS3_DIR/build/bindings/python:$PYTHONPATH
```
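If you prefer not to export anything, the recorded state file can also be read
directly. A minimal sketch, assuming the JSON stores the tree path under an
`ns3_dir` key (the key name is an assumption; inspect the file on your
machine):

```python
import json
import os
import sys
from pathlib import Path

# Locate the state file written by openoptics-install-ns3.
state_dir = Path(os.environ.get("OPENOPTICS_STATE_DIR",
                                Path.home() / ".openoptics"))
recorded = json.loads((state_dir / "ns3_env.json").read_text())

# "ns3_dir" is an assumed key name, not a documented schema.
ns3_dir = Path(recorded["ns3_dir"])
sys.path.insert(0, str(ns3_dir / "build" / "bindings" / "python"))

from ns import ns  # now importable without shell exports
```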
Verify the OpenOptics path through the backend:

```bash
openoptics-gen-examples   # if examples/ is not already present
python3 examples/ns3_routing_direct_perhop.py
```

To verify the raw ns-3 Python binding outside OpenOptics, first apply the
exports printed by the helper, or evaluate them explicitly:

```bash
eval "$(openoptics-install-ns3 --print-env-only ~/ns-3-dev)"
```

Then:

```python
from ns import ns

ocs = ns.CreateObject["ns3::openoptics::OcsApp"]()
print(ocs.GetInstanceTypeId().GetName())   # ns3::openoptics::OcsApp
```
### 1.3 Helper CLI: openoptics-install-ns3

The helper is safe to rerun: clone/link steps are idempotent, and
configure/build steps rerun against the current ns-3 checkout.

```bash
openoptics-install-ns3                         # build into ~/ns-3-dev (default)
openoptics-install-ns3 /opt/ns-3-dev           # pick a custom location
openoptics-install-ns3 --skip-clone PATH       # use an existing ns-3 checkout
openoptics-install-ns3 --skip-build PATH       # link the module, don't build
openoptics-install-ns3 --dry-run PATH          # print commands, run nothing
openoptics-install-ns3 --print-env-only PATH   # print NS3_DIR / PYTHONPATH lines
openoptics-install-ns3 --ns3-version ns-3.45   # override the pinned tag
```
The minimum supported ns-3 version is ns-3.37 (the first release with cppyy-based Python bindings). Earlier releases use pybindgen, which would require per-class Python binding specs shipped in the contrib module — not supported today.
### 1.4 BaseNetwork usage

The same optical-DCN script drives the simulator by changing one string.
For sub-millisecond schedules, prefer `guardband_us`; otherwise
`guardband_ms` is fine.

```python
from openoptics import Toolbox, OpticalTopo, OpticalRouting

net = Toolbox.BaseNetwork(
    name="ns3_4node_direct",
    backend="ns3",                   # only change from Mininet
    nb_node=4,
    nb_link=1,
    time_slice_duration_us=10_000,   # 10 ms per slice
    guardband_us=0,                  # no OCS reconfig time modelled
    use_webserver=True,              # live dashboard on localhost:8001
    simulation_stop_s=1.0,
)

circuits = OpticalTopo.round_robin(nb_node=4)
net.deploy_topo(circuits)

paths = OpticalRouting.routing_direct(net.get_topo())
net.deploy_routing(paths, routing_mode="Per-hop")

net.udp_traffic() \
    .flow(src=0, dst=1, rate="10Mbps", size_bytes=1_000_000,
          start_s=0.05, stop_s=0.8) \
    .install()

net.start()
```
### 1.5 Traffic generation

Install traffic after `deploy_routing(...)` and before `net.start()`.
The public interfaces are the protocol-specific builders returned by
`net.udp_traffic()` and `net.tcp_traffic()`; user code does not need to
reach through `net._backend`:

```python
# One-way UDP client/server traffic, h0 -> h1.
# Scheduler: use rate, packets_per_second, or interval_s; not more than one.
# End by duration_s or stop_s; size by size_bytes or num_packets.
net.udp_traffic() \
    .flow(0, 1, rate="10Mbps", size_bytes=1_000_000,
          start_s=0.05, duration_s=0.75) \
    .install()

# Two independent one-way flows.
net.udp_traffic() \
    .bidirectional(0, 1, rate="5Mbps", duration_s=0.75) \
    .install()

# Fan-in workload.
# Scheduler alternative: packets_per_second.
net.udp_traffic() \
    .many_to_one([0, 1, 2], dst=3, packets_per_second=1000,
                 packet_size_bytes=512, duration_s=0.5) \
    .install()

# Traffic matrix. Values use the same format as rate=.
# Matrix values provide rates; do not also pass scheduler knobs.
tm = {(0, 1): "10Mbps", (2, 3): "500Kbps"}
net.udp_traffic().from_matrix(tm, duration_s=0.5).install()

# UDP echo request/reply traffic.
# Scheduler alternative: interval_s.
net.udp_traffic() \
    .echo(0, 1, num_packets=20, interval_s=0.03) \
    .install()

# TCP BulkSend: exact total application bytes, sent as fast as TCP allows.
net.tcp_traffic() \
    .bulk(0, 1, size_bytes=10_000_000, chunk_size_bytes=1448,
          start_s=0.05, stop_s=0.8) \
    .install()

# TCP OnOff: rate-shaped TCP with an optional total-byte cap.
net.tcp_traffic() \
    .onoff(0, 1, rate="100Mbps", size_bytes=10_000_000,
           packet_size_bytes=1448, duration_s=0.75) \
    .install()
```
`udp_traffic().flow(...)` installs one-way UDP traffic using ns-3
`UdpClientHelper`/`UdpServerHelper`. For UDP flow and echo traffic,
`size_bytes` is converted to a packet count, and `packet_size_bytes` is the
per-packet payload size. `tcp_traffic().bulk(...)` maps `size_bytes` to
ns-3 BulkSend's exact `MaxBytes`, and `chunk_size_bytes` controls each
application write chunk.

Both TCP and UDP also support rate-shaped OnOff traffic. `size_bytes` is the
optional OnOff byte cap; if a byte cap and duration are provided without an
explicit rate, the builder derives the rate from `bytes * 8 / duration`.
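As a worked example of that fallback: a 10 MB cap over 0.75 s implies roughly
106.7 Mbit/s. A sketch of the arithmetic only, mirroring the documented
formula rather than the builder's actual code:

```python
def derive_onoff_rate(size_bytes: int, duration_s: float) -> str:
    """Documented fallback: rate = bytes * 8 / duration."""
    bps = size_bytes * 8 / duration_s
    return f"{bps / 1e6:.1f}Mbps"

# 10 MB cap over 0.75 s -> ~106.7 Mbps
print(derive_onoff_rate(10_000_000, 0.75))   # 106.7Mbps
```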
Each builder is single-use: after `.install()`, adding more flows or
installing again raises `RuntimeError`. `.install()` returns a list of
`InstalledTraffic` objects. After `net.start()` runs the simulator,
`installed[i].stats()` returns the matching FlowMonitor record:
```python
installed = (
    net.tcp_traffic()
    .bulk(0, 1, size_bytes=10_000_000, duration_s=0.75)
    .install()
)
net.start()
print(installed[0].stats().fct_s)
```
### 1.6 BaseNetwork parameters for an ns-3 simulation

“Common” parameters are shared with other backends; “ns-3” parameters are only
accepted when `backend="ns3"` (they flow through `**backend_kwargs` and are
validated against `Ns3Backend.accepted_kwargs()`).

The table lists the parameters most relevant to an ns-3 run.

| Parameter | Type | Scope | Description |
|---|---|---|---|
| `name` | str | Common | Network name |
| `nb_node` | int | Common | Number of logical ToRs |
| `nb_link` | int | Common | OCS uplinks per ToR |
|  | int | Common | Must be … |
| `backend` | str | Common | Must be `"ns3"` for this backend |
| `time_slice_duration_us` | int | Common | Optical slice length; specify at most one of the `_us`/`_ms` variants |
| `guardband_us` | int | Common | OCS dark-window guardband; specify at most one of `guardband_us`/`guardband_ms` |
| `use_webserver` | bool | Common | Enables the dashboard service and metric DB |
|  | float | Common | ToR↔OCS point-to-point link bandwidth |
|  | float | Common | Host↔ToR point-to-point link bandwidth |
|  | int | ns-3 | Shorthand default for both host/OCS link delays |
|  | int | ns-3 | Host↔ToR propagation delay; overrides the shorthand default |
| `ocs_link_delay_us` | int | ns-3 | ToR↔OCS propagation delay; overrides the shorthand default |
| `cq_buffer_bytes` | int | ns-3 | Total byte buffer limit for each ToR calendar queue across all slices and uplinks; default … |
| `simulation_stop_s` | float | ns-3 | Simulated-seconds budget before the simulator stops |
|  | int | ns-3 | Dashboard sampling cadence; default is one sample per slice |
| `verify_sr_cur_node` | bool | ns-3 | Opt-in P4-style `verify_desired_node` check for source-routed packets |
|  | bool | ns-3 | Per-hop ADM: forward only on a target slot that can drain queued bytes plus this packet; source routing is untouched |
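Putting the scopes together, a sketch of a run that passes several of the
ns-3-only kwargs above through `**backend_kwargs`; the values are
illustrative, not recommended defaults:

```python
from openoptics import Toolbox

net = Toolbox.BaseNetwork(
    name="ns3_kwargs_demo",
    backend="ns3",
    nb_node=4,
    nb_link=1,
    time_slice_duration_us=10_000,
    guardband_us=500,            # leaves 9.5 ms of usable slice
    # ns-3-only kwargs, validated against Ns3Backend.accepted_kwargs():
    simulation_stop_s=2.0,       # simulated-seconds budget
    cq_buffer_bytes=2_000_000,   # calendar-queue admission limit per ToR
    verify_sr_cur_node=True,     # opt-in P4-style SR check
)
```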
## 2. System workflow

If you only want to run simulations, the examples above are enough. This
section explains how OpenOptics installs its ns-3 module, loads the Python
bindings, and maps OpenOptics routing state into the simulation.
### 2.1 Where the pieces live

OpenOptics does not fork ns-3. The optical-DCN primitives (OCS, ToR, calendar
queue, slotted scheduler, …) live in this repository as a standard ns-3
contrib module. The installer symlinks that module into the selected ns-3
tree:

```text
openoptics/backends/ns3/
├── backend.py      Ns3Backend(BackendBase) — build topology, dispatch tables, run()
├── install.py      the openoptics-install-ns3 entry point
└── src/            ← contrib module; symlinked into <NS3_DIR>/contrib/openoptics
    ├── CMakeLists.txt    build_lib(LIBNAME openoptics SOURCE_FILES ...)
    └── model/
        ├── openoptics-ocs-app.{h,cc}               OCS: time-gated forwarding
        ├── openoptics-tor-app.{h,cc}               ToR: ingress + calendar queue + per-hop/SR dispatch
        ├── openoptics-calendar-queue.h             templated per-slice queue
        ├── openoptics-header.{h,cc}                per-packet mode + dst_node + arrival_ts
        └── openoptics-source-route-header.{h,cc}   variable-length SR hop list
```
ns-3 distinguishes `src/` (core modules shipped with ns-3) from `contrib/`
(external modules). Both are built by `./ns3 build`, but `contrib/` is the
standard location for third-party extensions. Using a symlink instead of a
copy means edits under `openoptics/backends/ns3/src/` are picked up by the
next ns-3 build without a resync step.
### 2.2 Installation pipeline
```text
 user's machine                            <NS3_DIR>
┌────────────────────────────┐
│ openoptics-install-ns3     │             ns-3.44 tree   (git clone --depth 1, pinned tag)
│ ├─ check g++/cmake/...     │             ├─ src/  (core)
│ ├─ git clone               │             ├─ contrib/openoptics   (symlink → OpenOptics pkg, below)
│ ├─ symlink contrib         │             └─ build/
│ ├─ ./ns3 configure         │                ├─ include/ns3/openoptics-module.h
│ │    --enable-python-      │                ├─ lib/libns3.44-openoptics-*.so
│ │    bindings              │                └─ bindings/python/ns/
│ └─ ./ns3 build             │
└────────────────────────────┘

 openoptics package
┌────────────────────────┐
│ backends/ns3/src/      │◀── symlink target of <NS3_DIR>/contrib/openoptics
│ ├─ CMakeLists.txt      │
│ └─ model/*.cc, *.h     │
└────────────────────────┘

 python3
 └─ from ns import ns   ←── cppyy JITs the bindings from <NS3_DIR>/build/bindings/python/ns/
    └─ ns.openoptics.{OcsApp,TorApp,…}   (auto-loaded from lock file)
```
1. **Check.** The helper verifies that the required tools are on `$PATH`. It
   only checks tools needed by the selected flags: `--skip-build` skips
   `cmake`/`g++`, and `--skip-clone` skips `git`.
2. **Clone.** `git clone --depth 1 --branch ns-3.44 ...` into the target
   directory. If the directory already exists and is non-empty, the clone
   step is a no-op; `--skip-clone` bypasses it entirely.
3. **Symlink.** `<NS3_DIR>/contrib/openoptics → <openoptics package>/backends/ns3/src`.
   Idempotent: an existing symlink to the same path is left alone; a
   collision with an unrelated directory fails loudly instead of overwriting
   (see the sketch after this list).
4. **Configure + build.** `./ns3 configure --enable-python-bindings
   --enable-examples && ./ns3 build`. ns-3's CMake configuration discovers
   the contrib module automatically and writes a lock file with
   `NS3_ENABLED_CONTRIBUTED_MODULES = ['ns3-openoptics']`.
5. **Persist + exports.** The helper writes the chosen path to
   `$OPENOPTICS_STATE_DIR/ns3_env.json` (default `~/.openoptics/ns3_env.json`).
   On the next run, `Ns3Backend.__init__` resolves `NS3_DIR` in this order:
   `$NS3_DIR` env var > recorded JSON file > error. It also prepends
   `<NS3_DIR>/build/bindings/python` to `sys.path`, so the bindings import
   works without a manual `PYTHONPATH` export. The helper still prints
   shell-export lines for users who want to run `from ns import ns` outside
   `Ns3Backend`; the same output is available from
   `openoptics-install-ns3 --print-env-only <dir>` for use with `eval`.
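The symlink step's idempotence rule is easy to restate in code. A minimal
sketch of the documented behaviour; `link_contrib` is a hypothetical helper,
not the installer's actual function:

```python
from pathlib import Path

def link_contrib(ns3_dir: Path, pkg_src: Path) -> None:
    """Hypothetical sketch of step 3: idempotent contrib symlink."""
    target = ns3_dir / "contrib" / "openoptics"
    if target.is_symlink() and target.resolve() == pkg_src.resolve():
        return  # existing link to the same path: leave it alone
    if target.exists() or target.is_symlink():
        # Collision with an unrelated directory or stale link: fail loudly.
        raise RuntimeError(f"refusing to overwrite {target}")
    target.symlink_to(pkg_src)
```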
### 2.3 Python-binding load path

ns-3.37+ generates Python bindings with cppyy. There is no pybindgen step and
no per-class binding spec. Loading is driven by a build-time lock file and
cppyy's on-demand JIT:

1. `from ns import ns` runs `<NS3_DIR>/build/bindings/python/ns/__init__.py`.
2. The loader reads `<NS3_DIR>/.lock-ns3_*_build`, which lists
   `NS3_ENABLED_MODULES` + `NS3_ENABLED_CONTRIBUTED_MODULES`. Our contrib
   module appears as `ns3-openoptics` here because `./ns3 configure`
   discovered it via the contrib symlink.
3. For each listed module the loader:
   - calls `cppyy.include("ns3/<module>-module.h")` — ns-3's build system
     auto-generates `openoptics-module.h` in `build/include/ns3/` from the
     module's `HEADER_FILES` list in `CMakeLists.txt`;
   - calls `cppyy.load_library("libns3.X-<module>-<profile>.so")` — the
     resulting `.so` contains our C++ code plus TypeId registrations.

`NS_OBJECT_ENSURE_REGISTERED(Foo)` in each `.cc` fires `Foo::GetTypeId()` at
library-load time, so `TypeId::LookupByName("ns3::openoptics::Foo")` succeeds
without the user touching the registry. This macro is mandatory for every
class derived from `ns3::Object` — skipping it means the class is still
instantiable through cppyy but invisible to ns-3's TypeId-driven factories.
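When a module fails to appear, those two loader steps can be replayed by hand
for the one module. A sketch under stated assumptions: the include path and
the library filename below are illustrative, so check `build/include/` and
`build/lib/` in your tree for the real names:

```python
import cppyy

# Replay the loader's two steps for the contrib module by hand.
cppyy.add_include_path("/home/user/ns-3-dev/build/include")   # assumed path
cppyy.include("ns3/openoptics-module.h")         # auto-generated aggregate header
cppyy.load_library("libns3.44-openoptics-default.so")
# ^ profile-dependent name; assumes the lib directory is on the loader path.

# Static TypeId registrations fired at load time, so lookup now succeeds.
tid = cppyy.gbl.ns3.TypeId.LookupByName("ns3::openoptics::OcsApp")
print(tid.GetName())   # ns3::openoptics::OcsApp
```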
### 2.4 From Python to the simulation

`Ns3Backend.setup()` lazily runs `from ns import ns`, so importing the
OpenOptics package does not require a built ns-3 tree. It then constructs
`NodeContainer`s for hosts, ToRs, and the OCS; wires the point-to-point
channels; instantiates `openoptics::OcsApp` and `openoptics::TorApp`; and
sets up a FlowMonitor on the host nodes. OCS ports are registered in
port-major order (outer loop over `link_id`, inner loop over `tor_id`),
matching Toolbox's
`cal_node_port_to_ocs_port(node_id, port_id) = port_id * nb_node + node_id`
encoding. As a result, schedule entries from `gen_ocs_commands()` pass
through `_apply_entry` unchanged.
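A quick enumeration makes the port-major layout concrete. This standalone
illustration of the documented encoding assumes `nb_node=4` and `nb_link=2`:

```python
nb_node, nb_link = 4, 2

def cal_node_port_to_ocs_port(node_id: int, port_id: int) -> int:
    # Port-major: every ToR's uplink 0 first, then every ToR's uplink 1, ...
    return port_id * nb_node + node_id

for link_id in range(nb_link):        # outer loop: uplink index
    for tor_id in range(nb_node):     # inner loop: ToR index
        print(f"tor={tor_id} link={link_id} -> "
              f"ocs_port={cal_node_port_to_ocs_port(tor_id, link_id)}")
# ocs_port runs 0..3 for link 0, then 4..7 for link 1
```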
`load_table()` dispatches each `TableEntry` to a C++ setter on the
appropriate app:

| Table | Handler |
|---|---|
|  |  |
|  |  |
|  |  |
|  |  |
|  |  |
|  |  |
|  | no-op (ns-3 does this in code; gate at …) |
`run()` calls `Simulator::Run()` until `simulation_stop_s`, prints a counter
and FlowMonitor report, and pauses for Enter in interactive terminals so the
dashboard stays up.
`supports_device_manager = False` and `supports_cli = False` because the
simulator has no Thrift/gRPC control plane and no interactive shell. When
`use_webserver=True`, `setup_dashboard()` attaches an `Ns3MetricSink` to
`OcsApp::SetSnapshotListener` and `TorApp::SetSnapshotListener`, so the
dashboard receives simulator telemetry without a DeviceManager.

ToR queue snapshots expose packet-count metrics (`queue_depth`,
`queue_peak`) and byte metrics (`queue_bytes`, `queue_peak_bytes`). The byte
limit configured by `cq_buffer_bytes` is a buffer-admission limit; the
per-slice transmit budget still comes from link rate, slice duration,
guardband, and OCS propagation delay.
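To make that budget concrete, here is illustrative arithmetic over the four
stated inputs; `slice_byte_budget` is a sketch, not `TorApp`'s actual
formula:

```python
def slice_byte_budget(link_rate_bps: float, slice_us: int,
                      guardband_us: int, ocs_delay_us: int) -> int:
    """Usable bytes per slice once the dark window and propagation are removed."""
    usable_us = slice_us - guardband_us - ocs_delay_us
    return max(0, int(link_rate_bps * usable_us / 1e6 / 8))

# 10 Gbps uplink, 10 ms slice, 20 us guardband, 1 us OCS delay
print(slice_byte_budget(10e9, 10_000, 20, 1))   # 12473750 (~12.5 MB per slice)
```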
### 2.5 Upgrade path

Bumping the ns-3 version is a small, localized change:

1. Test against the new tag:
   `openoptics-install-ns3 --ns3-version ns-3.45 /tmp/ns-3-dev-45`.
2. Fix any API changes in `openoptics/backends/ns3/src/` (ns-3 renames occur
   between releases — `Ptr<Node>` constructors, channel helpers, etc.).
3. Bump `DEFAULT_NS3_VERSION` in `openoptics/backends/ns3/install.py`.
Because the contrib module lives in this repository and the ns-3 checkout
lives outside it, `git pull` in the user's ns-3 tree is non-destructive.
Nothing under `contrib/openoptics/` is overwritten because the symlink points
back to the OpenOptics package, and the next configure/build run picks up the
new ns-3 core.
## 3. Status

The ns-3 backend runs real OpenOptics simulations end-to-end. The packaging,
contrib module, binding layer, and `Ns3Backend` implementation are all live;
examples and a skip-unless-ns-3 test suite ship with the repo.

| Area | Status |
|---|---|
|  | ✅ |
| ns-3 contrib module at `openoptics/backends/ns3/src/` | ✅ |
| Python-binding discovery via ns-3's cppyy loader (`.lock-ns3_*_build`) | ✅ |
|  | ✅ |
|  | ✅ |
| C++ models: `OcsApp`, `TorApp`, calendar queue, packet headers | ✅ |
| Routing modes: per-hop + source routing; direct / HoHo / VLB (random and node-type) | ✅ |
| Dashboard sink (`Ns3MetricSink`) | ✅ |
| Public traffic builders: `udp_traffic()` / `tcp_traffic()` | ✅ |
| FlowMonitor stats via `InstalledTraffic.stats()` | ✅ |
| Routing and analysis examples under `examples/` | ✅ |
| Traffic-aware (CONTROL_BASED) calendar queue | 🚧 In progress |
| More than one host per ToR | 🚧 In progress |
| Live CLI / Thrift control plane | ❌ Not applicable to a simulator |
Technique notes:

- **Guardband semantics.** `BaseNetwork` accepts either `guardband_us` or
  `guardband_ms` and stores the canonical value in microseconds before
  calling the backend. When
  `guardband_us + ocs_link_delay_us >= time_slice_duration_us`, a
  `RuntimeWarning` is emitted and the simulation can produce zero
  throughput — the OCS dark window and ToR byte-budget admission both reject
  candidate packets (see the sketch after these notes).
- **Source-route `cur_node` check.** Off by default for performance. Set
  `verify_sr_cur_node=True` as a backend kwarg to enable P4-style
  `verify_desired_node` semantics; misrouted SR packets then increment the
  ToR drop counter instead of silently continuing.
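A numeric sketch of that guardband failure mode; this mirrors the documented
warning condition, not the library's validation code:

```python
import warnings

def check_guardband(guardband_us: int, ocs_link_delay_us: int,
                    time_slice_duration_us: int) -> None:
    """Mirror the documented RuntimeWarning condition."""
    if guardband_us + ocs_link_delay_us >= time_slice_duration_us:
        warnings.warn(
            "guardband + OCS delay consume the whole slice; expect zero throughput",
            RuntimeWarning,
        )

check_guardband(guardband_us=9_000, ocs_link_delay_us=1_500,
                time_slice_duration_us=10_000)   # fires: 10_500 >= 10_000
```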