ns-3 Backend#

OpenOptics can drive the ns-3 discrete-event network simulator as a third backend alongside Mininet and Tofino. The same Python API — BaseNetwork, OpticalTopo.*, OpticalRouting.* — is unchanged; the backend="ns3" switch replaces BMv2/hardware with packet-level simulation.

This document covers:

  1. User guide — how to install and run

  2. System workflow — what happens under the hood

  3. Status — what is supported today


1. User guide#

1.1 Prerequisites#

| Requirement | Details |
| --- | --- |
| ns-3 C++ build deps | g++, cmake, pkg-config, python3-dev, libgsl-dev, libxml2-dev (Debian/Ubuntu package names) |
| Python ≥ 3.8 | the openoptics base requirement |
| cppyy | ns-3.37+ Python bindings are JIT-generated by cppyy; pip install cppyy |
| ~4 GB disk | for the ns-3 source tree and build artefacts |
| ~15–30 min one-time build | dominated by ns-3’s ./ns3 build |

No Docker is needed — unlike the Mininet backend, the ns-3 backend is a regular Python process that loads ns-3 via its Python bindings.

1.2 Quick start#

# 1. System deps (Debian/Ubuntu)
sudo apt install -y git g++ cmake pkg-config python3-dev \
                    python3-setuptools libgsl-dev libxml2-dev

# 2. OpenOptics itself
pip install "openoptics-dcn[ns3]"
pip install cppyy                     # required by ns-3's Python bindings

# 3. Build ns-3 with the OpenOptics contrib module linked in.
#    The default tag is ns-3.44.
openoptics-install-ns3 ~/ns-3-dev

The helper records the selected tree in $OPENOPTICS_STATE_DIR/ns3_env.json (default ~/.openoptics/ns3_env.json). Ns3Backend reads that file and prepends the bindings directory to sys.path, so normal OpenOptics scripts do not need manual shell exports.
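For reference, the recorded state and the resolution it enables amount to roughly the following sketch (the JSON key name ns3_dir is an assumption; the real file layout may differ):

# Minimal sketch of Ns3Backend-style path resolution; illustrative only.
# Assumes ns3_env.json stores {"ns3_dir": "/home/user/ns-3-dev"}.
import json, os, sys
from pathlib import Path

state_dir = Path(os.environ.get("OPENOPTICS_STATE_DIR",
                                Path.home() / ".openoptics"))
recorded = json.loads((state_dir / "ns3_env.json").read_text())
ns3_dir = os.environ.get("NS3_DIR") or recorded["ns3_dir"]  # env var wins (see 2.2)
sys.path.insert(0, str(Path(ns3_dir) / "build" / "bindings" / "python"))
from ns import ns  # now imports without manual PYTHONPATH exports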

The helper also prints the equivalent exports. They are optional — useful only if you want from ns import ns in a Python session that does not go through Ns3Backend, or if you want to override the recorded path:

export NS3_DIR=~/ns-3-dev
export PYTHONPATH=$NS3_DIR/build/bindings/python:$PYTHONPATH

Verify the OpenOptics path through the backend:

openoptics-gen-examples  # if examples/ is not already present
python3 examples/ns3_routing_direct_perhop.py

To verify the raw ns-3 Python binding outside OpenOptics, first use the exports printed by the helper, or evaluate them explicitly:

eval "$(openoptics-install-ns3 --print-env-only ~/ns-3-dev)"

Then:

from ns import ns

ocs = ns.CreateObject["ns3::openoptics::OcsApp"]()
print(ocs.GetInstanceTypeId().GetName())  # ns3::openoptics::OcsApp

1.3 Helper CLI: openoptics-install-ns3#

The helper is safe to rerun: clone/link steps are idempotent, and configure/build steps rerun against the current ns-3 checkout.

openoptics-install-ns3                      # build into ~/ns-3-dev (default)
openoptics-install-ns3 /opt/ns-3-dev        # pick a custom location
openoptics-install-ns3 --skip-clone PATH    # use an existing ns-3 checkout
openoptics-install-ns3 --skip-build PATH    # link the module, don't build
openoptics-install-ns3 --dry-run PATH       # print commands, run nothing
openoptics-install-ns3 --print-env-only PATH   # print NS3_DIR / PYTHONPATH lines
openoptics-install-ns3 --ns3-version ns-3.45   # override the pinned tag

The minimum supported ns-3 version is ns-3.37 (the first release with cppyy-based Python bindings). Earlier releases use pybindgen, which would require per-class Python binding specs shipped in the contrib module — not supported today.

1.4 BaseNetwork usage#

The same optical-DCN script drives the simulator by changing one string. For sub-millisecond schedules, prefer guardband_us; otherwise guardband_ms is fine.

from openoptics import Toolbox, OpticalTopo, OpticalRouting

net = Toolbox.BaseNetwork(
    name="ns3_4node_direct",
    backend="ns3",                 # only change from Mininet
    nb_node=4,
    nb_link=1,
    time_slice_duration_us=10_000, # 10 ms per slice
    guardband_us=0,                # no OCS reconfig time modelled
    use_webserver=True,            # live dashboard on localhost:8001
    simulation_stop_s=1.0,
)

circuits = OpticalTopo.round_robin(nb_node=4)
net.deploy_topo(circuits)

paths = OpticalRouting.routing_direct(net.get_topo())
net.deploy_routing(paths, routing_mode="Per-hop")

net.udp_traffic() \
    .flow(src=0, dst=1, rate="10Mbps", size_bytes=1_000_000,
          start_s=0.05, stop_s=0.8) \
    .install()
net.start()

1.5 Traffic generation#

Install traffic after deploy_routing(...) and before net.start(). The public interfaces are the protocol-specific builders returned by net.udp_traffic() and net.tcp_traffic(); user code does not need to reach through net._backend:

# One-way UDP client/server traffic, h0 -> h1.
# Scheduler: use rate, packets_per_second, or interval_s; not more than one.
# End by duration_s or stop_s; size by size_bytes or num_packets.
net.udp_traffic() \
    .flow(0, 1, rate="10Mbps", size_bytes=1_000_000,
          start_s=0.05, duration_s=0.75) \
    .install()

# Two independent one-way flows.
net.udp_traffic() \
    .bidirectional(0, 1, rate="5Mbps", duration_s=0.75) \
    .install()

# Fan-in workload.
# Scheduler alternative: packets_per_second.
net.udp_traffic() \
    .many_to_one([0, 1, 2], dst=3, packets_per_second=1000,
                 packet_size_bytes=512, duration_s=0.5) \
    .install()

# Traffic matrix. Values use the same format as rate=.
# Matrix values provide rates; do not also pass scheduler knobs.
tm = {(0, 1): "10Mbps", (2, 3): "500Kbps"}
net.udp_traffic().from_matrix(tm, duration_s=0.5).install()

# UDP echo request/reply traffic.
# Scheduler alternative: interval_s.
net.udp_traffic() \
    .echo(0, 1, num_packets=20, interval_s=0.03) \
    .install()

# TCP BulkSend: exact total application bytes, sent as fast as TCP allows.
net.tcp_traffic() \
    .bulk(0, 1, size_bytes=10_000_000, chunk_size_bytes=1448,
          start_s=0.05, stop_s=0.8) \
    .install()

# TCP OnOff: rate-shaped TCP with an optional total-byte cap.
net.tcp_traffic() \
    .onoff(0, 1, rate="100Mbps", size_bytes=10_000_000,
          packet_size_bytes=1448, duration_s=0.75) \
    .install()

udp_traffic().flow(...) installs one-way UDP traffic using ns-3 UdpClientHelper/UdpServerHelper. For UDP flow and echo traffic, size_bytes is converted to a packet count, and packet_size_bytes is the per-packet payload size. tcp_traffic().bulk(...) maps size_bytes to ns-3 BulkSend’s exact MaxBytes, and chunk_size_bytes controls each application write chunk.
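As a rough arithmetic sketch of that conversion (the rounding rule and the payload size used here are assumptions, not documented defaults):

# Illustrative: 1 MB total at a 1000-byte payload -> 1000 UDP packets.
size_bytes, packet_size_bytes = 1_000_000, 1_000
num_packets = -(-size_bytes // packet_size_bytes)  # ceiling division (assumed)
print(num_packets)  # 1000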

Both TCP and UDP also support rate-shaped OnOff traffic. size_bytes is the optional OnOff byte cap; if a byte cap and duration are provided without an explicit rate, the builder derives the rate from bytes * 8 / duration.
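For example, the derived-rate rule means an OnOff flow can be specified by byte cap and duration alone (a sketch based on the rule above; numbers are illustrative):

# The builder derives 10_000_000 * 8 / 0.75 ≈ 106.67 Mbps from cap and duration.
net.tcp_traffic() \
    .onoff(0, 1, size_bytes=10_000_000, duration_s=0.75) \
    .install()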

Each builder is single-use: after .install(), adding more flows or installing again raises RuntimeError. .install() returns a list of InstalledTraffic objects. After net.start() runs the simulator, installed[i].stats() returns the matching FlowMonitor record:

installed = (
    net.tcp_traffic()
    .bulk(0, 1, size_bytes=10_000_000, duration_s=0.75)
    .install()
)
net.start()
print(installed[0].stats().fct_s)

1.6 BaseNetwork parameters for an ns-3 simulation#

“Common” parameters are shared with other backends; “ns-3” parameters are only accepted when backend="ns3" (they flow through **backend_kwargs and are validated against Ns3Backend.accepted_kwargs()). The table lists the parameters most relevant to an ns-3 run.

| Parameter | Type | Scope | Description |
| --- | --- | --- | --- |
| backend | str | Common | "ns3" |
| nb_node | int | Common | Number of logical ToRs |
| nb_link | int | Common | OCS uplinks per ToR |
| nb_host_per_tor | int | Common | Must be 1 for ns-3 |
| arch_mode | str | Common | Must be "TO"; traffic-aware mode is not wired |
| time_slice_duration_us / time_slice_duration_ms | int | Common | Optical slice length; specify at most one |
| guardband_us / guardband_ms | int | Common | OCS dark-window guardband; specify at most one |
| use_webserver | bool | Common | Enables the dashboard service and metric DB |
| ocs_tor_link_bw_gbps | float | Common | ToR↔OCS point-to-point link bandwidth |
| tor_host_link_bw_gbps | float | Common | Host↔ToR point-to-point link bandwidth |
| link_delay_us | int | ns-3 | Shorthand default for both host/ocs link delays |
| host_link_delay_us | int | ns-3 | Host↔ToR propagation delay; overrides link_delay_us |
| ocs_link_delay_us | int | ns-3 | ToR↔OCS propagation delay; overrides link_delay_us |
| cq_buffer_bytes | int | ns-3 | Total byte buffer limit for each ToR calendar queue across all slices and uplinks; default 1_048_576 |
| simulation_stop_s | float | ns-3 | Simulated-seconds budget before run() returns |
| snapshot_interval_us | int | ns-3 | Dashboard sampling cadence; default is one sample per slice |
| verify_sr_cur_node | bool | ns-3 | Opt-in P4-style verify_desired_node check for source routing |
| admission_control | bool | ns-3 | Per-hop admission control (ADM): forward only on a target slot that can drain queued bytes plus this packet; source routing is untouched |
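For illustration, a run that mixes common and ns-3-only parameters might look like this (values are arbitrary; the ns-3-scope kwargs travel through **backend_kwargs):

net = Toolbox.BaseNetwork(
    name="ns3_delays",
    backend="ns3",
    nb_node=4,
    nb_link=1,
    time_slice_duration_us=10_000,
    guardband_us=50,
    simulation_stop_s=1.0,          # ns-3 scope
    link_delay_us=5,                # ns-3 scope: default for both link types
    ocs_link_delay_us=2,            # overrides link_delay_us for ToR↔OCS only
    cq_buffer_bytes=2_097_152,      # doubles the default calendar-queue buffer
)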


2. System workflow#

If you only want to run simulations, the examples above are enough. This section explains how OpenOptics installs its ns-3 module, loads the Python bindings, and maps OpenOptics routing state into the simulation.

2.1 Where the pieces live#

OpenOptics does not fork ns-3. The optical-DCN primitives (OCS, ToR, calendar queue, slotted scheduler, …) live in this repository as a standard ns-3 contrib module. The installer symlinks that module into the selected ns-3 tree:

openoptics/backends/ns3/
├── backend.py              Ns3Backend(BackendBase) — build topology, dispatch tables, run()
├── install.py              the openoptics-install-ns3 entry point
└── src/                    ← contrib module; symlinked into <NS3_DIR>/contrib/openoptics
    ├── CMakeLists.txt      build_lib(LIBNAME openoptics SOURCE_FILES ...)
    └── model/
        ├── openoptics-ocs-app.{h,cc}               OCS: time-gated forwarding
        ├── openoptics-tor-app.{h,cc}               ToR: ingress + calendar queue + per-hop/SR dispatch
        ├── openoptics-calendar-queue.h             templated per-slice queue
        ├── openoptics-header.{h,cc}                per-packet mode + dst_node + arrival_ts
        └── openoptics-source-route-header.{h,cc}   variable-length SR hop list

ns-3 distinguishes src/ (core modules shipped with ns-3) from contrib/ (external modules). Both are built by ./ns3 build, but contrib/ is the standard location for third-party extensions. Using a symlink instead of a copy means edits under openoptics/backends/ns3/src/ are picked up by the next ns-3 build without a resync step.

2.2 Installation pipeline#

user's machine                                     <NS3_DIR>
┌───────────────────────────┐
│ openoptics-install-ns3    │    git clone ns-3    ┌─────────────────────────┐
│   ├─ check g++/cmake/... ─┼────depth 1, tag─────▶│  ns-3.44 tree           │
│   ├─ git clone            │                      │  ├─ src/  (core)        │
│   ├─ symlink contrib     ─┼──────────────────────┼─▶contrib/openoptics ──┐ │
│   ├─ ./ns3 configure      │                      │    (→ OpenOptics pkg) │ │
│   │   --enable-python-    │                      │                       │ │
│   │     bindings          │                      │  build/                │ │
│   └─ ./ns3 build          │                      │  ├─ include/ns3/       │ │
└───────────────────────────┘                      │  │   openoptics-module.h
                                                   │  ├─ lib/               │ │
   openoptics package                              │  │   libns3.44-openoptics-*.so
   ┌────────────────────────┐                      │  └─ bindings/python/ns/  │
   │ backends/ns3/src/      │◀─── symlink ─────────┘                        │ │
   │  ├─ CMakeLists.txt     │                                               │ │
   │  └─ model/*.cc, *.h    │◀──────────────────────────────────────────────┘ │
   └────────────────────────┘                                                 │
                                                                              │
   python3                                                                    │
   └─ from ns import ns                   ←────── cppyy JITs bindings ────────┘
      └─ ns.openoptics.{OcsApp,TorApp,…} (auto-loaded from lock file)

  1. Check. The helper verifies that the required tools are on $PATH. It only checks tools needed by the selected flags: --skip-build skips cmake/g++, and --skip-clone skips git.

  2. Clone. git clone --depth 1 --branch ns-3.44 ... into the target directory. If the directory already exists and is non-empty, the clone step is a no-op; --skip-clone bypasses it entirely.

  3. Symlink. <NS3_DIR>/contrib/openoptics → <openoptics package>/backends/ns3/src. Idempotent: an existing symlink to the same path is left alone; a collision with an unrelated directory fails loudly instead of overwriting.

  4. Configure + build. ./ns3 configure --enable-python-bindings --enable-examples && ./ns3 build. ns-3’s CMake configuration discovers the contrib module automatically and writes a lock file with NS3_ENABLED_CONTRIBUTED_MODULES = ['ns3-openoptics'].

  5. Persist + exports. The helper writes the chosen path to $OPENOPTICS_STATE_DIR/ns3_env.json (default ~/.openoptics/ns3_env.json). On the next run, Ns3Backend.__init__ resolves NS3_DIR in this order: $NS3_DIR env var > recorded JSON file > error. It also prepends <NS3_DIR>/build/bindings/python to sys.path, so the bindings import works without a manual PYTHONPATH export. The helper still prints shell-export lines for users who want to run from ns import ns outside Ns3Backend; the same output is available from openoptics-install-ns3 --print-env-only <dir> for use with eval.

2.3 Python-binding load path#

ns-3.37+ generates Python bindings with cppyy. There is no pybindgen step and no per-class binding spec. Loading is driven by a build-time lock file and cppyy’s on-demand JIT:

  1. from ns import ns runs <NS3_DIR>/build/bindings/python/ns/__init__.py.

  2. The loader reads <NS3_DIR>/.lock-ns3_*_build, which lists NS3_ENABLED_MODULES + NS3_ENABLED_CONTRIBUTED_MODULES. Our contrib module appears as ns3-openoptics here because ./ns3 configure discovered it via the contrib symlink.

  3. For each listed module the loader:

    • cppyy.include("ns3/<module>-module.h") — ns-3’s build system auto-generates openoptics-module.h in build/include/ns3/ from the module’s HEADER_FILES list in CMakeLists.txt.

    • cppyy.load_library("libns3.X-<module>-<profile>.so") — the resulting .so contains our C++ code plus TypeId registrations.

  4. NS_OBJECT_ENSURE_REGISTERED(Foo) in each .cc fires Foo::GetTypeId() at library-load time, so TypeId::LookupByName("ns3::openoptics::Foo") succeeds without the user touching the registry. This macro is mandatory for every class derived from ns3::Object — skipping it means the class is still instantiable through cppyy but invisible to ns-3’s TypeId-driven factories.
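In minimal form, the per-module load is roughly the following sketch (simplified; the library name and profile suffix vary by build, and the real loader lives in build/bindings/python/ns/__init__.py):

import cppyy

cppyy.include("ns3/openoptics-module.h")            # auto-generated umbrella header
cppyy.load_library("libns3.44-openoptics-default")  # exact name depends on version/profile
tid = cppyy.gbl.ns3.TypeId.LookupByName("ns3::openoptics::OcsApp")
print(tid.GetName())  # works because NS_OBJECT_ENSURE_REGISTERED fired at load time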

2.4 From Python to the simulation#

Ns3Backend.setup() lazily runs from ns import ns, so importing the OpenOptics package does not require a built ns-3 tree. It then constructs NodeContainers for hosts, ToRs, and the OCS; wires the point-to-point channels; instantiates openoptics::OcsApp and openoptics::TorApp; and sets up a FlowMonitor on the host nodes. OCS ports are registered in port-major order (outer loop over link_id, inner loop over tor_id), matching Toolbox’s cal_node_port_to_ocs_port(node_id, port_id) = port_id * nb_node + node_id encoding. As a result, schedule entries from gen_ocs_commands() pass through _apply_entry unchanged.
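A worked example of the port-major encoding for nb_node=4, nb_link=2, using the formula quoted above:

nb_node, nb_link = 4, 2

def cal_node_port_to_ocs_port(node_id, port_id):
    return port_id * nb_node + node_id

# Registration order: all ToRs on uplink 0 first (ports 0..3), then uplink 1 (4..7).
for port_id in range(nb_link):          # outer: link_id
    for tor_id in range(nb_node):       # inner: tor_id
        print(tor_id, port_id, "->", cal_node_port_to_ocs_port(tor_id, port_id))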

load_table() dispatches each TableEntry to a C++ setter on the appropriate app:

| Table | Handler |
| --- | --- |
| ocs_schedule | OcsApp::AddScheduleEntry(ingress, slice, egress) |
| ip_to_dst_node | TorApp::AddIpToDst |
| per_hop_routing | TorApp::AddPerHopEntry |
| arrive_at_dst | TorApp::AddArriveAtDst |
| cal_port_slice_to_node | TorApp::AddCalPortSliceToNode |
| add_source_routing_entries | TorApp::AddSourceRoutingEntry |
| verify_desired_node | no-op (handled in C++; gated by TorApp::SetVerifySrCurNode) |
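Conceptually the dispatch is a switch on the table name; the sketch below uses hypothetical TableEntry field names purely to show the shape, and is not the actual Ns3Backend code:

# Hypothetical shape of load_table's dispatch; field names are illustrative.
def load_table(entry, ocs_app, tor_apps):
    if entry.table == "ocs_schedule":
        ocs_app.AddScheduleEntry(entry.ingress, entry.slice_id, entry.egress)
    elif entry.table == "per_hop_routing":
        tor_apps[entry.node_id].AddPerHopEntry(*entry.match, *entry.action)
    elif entry.table == "verify_desired_node":
        pass  # no-op: handled in C++, gated by TorApp::SetVerifySrCurNode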

run() calls Simulator::Run() until simulation_stop_s, prints a counter and FlowMonitor report, and pauses for Enter in interactive terminals so the dashboard stays up.

supports_device_manager = False and supports_cli = False because the simulator has no Thrift/gRPC control plane and no interactive shell. When use_webserver=True, setup_dashboard() attaches an Ns3MetricSink to OcsApp::SetSnapshotListener and TorApp::SetSnapshotListener, so the dashboard receives simulator telemetry without a DeviceManager.
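Schematically, the wiring looks like the following (the constructor and callback names here are assumptions, not the real sink API):

# Hypothetical wiring; illustrates the snapshot-listener pattern only.
sink = Ns3MetricSink(metric_db)                # assumed constructor
ocs_app.SetSnapshotListener(sink.on_snapshot)  # OCS pushes per-slice samples
tor_app.SetSnapshotListener(sink.on_snapshot)  # so does each ToR
# Each snapshot then lands in the dashboard's metric DB.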

ToR queue snapshots expose packet-count metrics (queue_depth, queue_peak) and byte metrics (queue_bytes, queue_peak_bytes). The byte limit configured by cq_buffer_bytes is a buffer-admission limit; the per-slice transmit budget still comes from link rate, slice duration, guardband, and OCS propagation delay.
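To give a sense of scale, a back-of-envelope version of that budget (the exact formula in TorApp may differ):

# Hedged sketch: per-slice transmit budget from link rate, slice length,
# guardband, and OCS propagation delay.
link_rate_bps = 10e9                           # 10 Gbps uplink
slice_us, guard_us, prop_us = 10_000, 50, 2
usable_us = slice_us - guard_us - prop_us
budget_bytes = link_rate_bps * usable_us * 1e-6 / 8
print(f"{budget_bytes / 1e6:.1f} MB per slice")  # 12.4 MB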

2.5 Upgrade path#

Bumping the ns-3 version is a small, localized change:

  1. Test against the new tag: openoptics-install-ns3 --ns3-version ns-3.45 /tmp/ns-3-dev-45.

  2. Fix any API changes in openoptics/backends/ns3/src/ (ns-3 renames occur between releases — Ptr<Node> constructors, channel helpers, etc.).

  3. Bump DEFAULT_NS3_VERSION in openoptics/backends/ns3/install.py.

Because the contrib module lives in this repository and the ns-3 checkout lives outside it, git pull in the user’s ns-3 tree is non-destructive. Nothing under contrib/openoptics/ is overwritten because the symlink points back to the OpenOptics package, and the next configure/build run picks up the new ns-3 core.


3. Status#

The ns-3 backend runs real OpenOptics simulations end-to-end. The packaging, contrib module, binding layer, and Ns3Backend implementation are all live; examples and a skip-unless-ns-3 test suite ship with the repo.

| Area | Status |
| --- | --- |
| openoptics-install-ns3 helper (clone + configure + build ns-3 with our contrib module) | ✅ Supported |
| ns-3 contrib module at openoptics/backends/ns3/src/ (CMake, builds into libns3.X-openoptics-*.so) | ✅ Supported |
| Python-binding discovery via ns-3’s cppyy loader (from ns import ns; ns.openoptics.…) | ✅ Supported |
| TypeId registration for contrib classes (NS_OBJECT_ENSURE_REGISTERED) | ✅ Supported |
| Ns3Backend.setup/load_table/run/… (the BackendBase interface) | ✅ Supported |
| C++ models: OcsApp, TorApp, CalendarQueue, OpenOpticsHeader, OpenOpticsSourceRouteHeader | ✅ Supported |
| Routing modes: per-hop + source routing; direct / HoHo / VLB (random and node-type) | ✅ Supported |
| Dashboard sink (Ns3MetricSink) — live OCS + ToR counters into the FastAPI/SQLite dashboard | ✅ Supported |
| Public traffic builders: net.udp_traffic() / net.tcp_traffic() | ✅ Supported |
| FlowMonitor stats via InstalledTraffic.stats() after net.start() | ✅ Supported |
| Routing and analysis examples under examples/ (ns3_routing_*.py, ns3_rtt_comparison.py, ns3_tcp_circle_long_flows.py) | ✅ Supported |
| Traffic-aware (CONTROL_BASED) calendar queue | 🚧 In-progress |
| More than one host per ToR | 🚧 In-progress |
| Live CLI / Thrift control plane | ❌ Not applicable to a simulator |

Technical notes:

  • Guardband semantics. BaseNetwork accepts either guardband_us or guardband_ms and stores the canonical value in microseconds before calling the backend. When guardband_us + ocs_link_delay_us >= time_slice_duration_us, a RuntimeWarning is emitted and the simulation can produce zero throughput — the OCS dark window and ToR byte-budget admission both reject candidate packets (see the numeric sketch after these notes).

  • Source-route cur_node check. Off by default for performance. Set verify_sr_cur_node=True as a backend kwarg to enable P4-style verify_desired_node semantics; misrouted SR packets then increment the ToR drop counter instead of silently continuing.
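To make the guardband note concrete, a numeric sketch of the warning condition (values illustrative):

# Degenerate schedule: dark window + propagation delay swallow the whole slice.
time_slice_duration_us, guardband_us, ocs_link_delay_us = 100, 90, 15
if guardband_us + ocs_link_delay_us >= time_slice_duration_us:
    # BaseNetwork emits a RuntimeWarning in this regime: no packet ever gets
    # a usable transmit window, so throughput is zero.
    print("zero-throughput configuration")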