Inert Architecture

Security by construction,
not by correction.

Most systems are secured after they are built. SignaVision systems are structured so unsafe behaviour is not possible to begin with. Every interface, workflow, and AI processing layer is designed to resist uncontrolled execution, hidden state, and unaudited access — not through patches, but through structure.

Design Doctrine

What Inert means.

In physics, an inert substance does not spontaneously react. It remains stable under pressure. That principle translates directly into software design: an inert system does not spontaneously misbehave. It stays structurally stable, observable, and policy-governed even when handling dynamic inputs, AI inference, and external platform integrations.

Most failures are not attacks. They are consequences of systems doing more than they should. Interfaces expose too much. Code paths do too much. AI executes without constraint. Inert architecture removes this by reducing what the system can do.

Structure is the security primitive — not the firewall
Interfaces are bounded, not open-ended
AI inference operates inside a defined policy envelope
Every layer is auditable by design, not by accident

Inert Architecture — Formal Definition

A structured system design approach in which interfaces, workflows, and AI processing operate through controlled, auditable layers rather than uncontrolled direct execution.

The attack surface is minimized not by hardening every edge but by reducing the number of edges that exist. Every integration point, session boundary, and processing step is a deliberate, inspectable unit — not a consequence of convenience.

Active design principle across all SignaVision systems

Methodology

What construction actually means.

Most systems treat security as a layer added on top of working code. Construction means security is a property of the code's shape — present because the wrong thing is structurally impossible, not conditionally blocked.
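The contrast can be sketched in a few lines of Python. This is an illustrative sketch, not SignaVision's implementation — the names `write_grade_checked` and `SealedSession` are invented for the example.

```python
# Security by correction: a general-purpose path guarded by a runtime check.
def write_grade_checked(gradebook, session_id, score, authorized):
    if not authorized:  # forget this branch and the wrong path is open
        raise PermissionError("unauthorized grade write")
    gradebook[session_id] = max(0.0, min(1.0, score))

# Security by construction: the write path exists only on an object that is
# created with its channel already bound. There is no variant to call
# without that authority, so no conditional check is needed.
class SealedSession:
    def __init__(self, gradebook, session_id):
        self._gradebook = gradebook      # channel fixed at creation
        self._session_id = session_id

    def write_grade(self, score):
        self._gradebook[self._session_id] = max(0.0, min(1.0, score))
```

In the first shape, safety depends on remembering the check; in the second, the unsafe call does not exist to be made.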

Conventional AI System

Client → API → Model → Storage → Response

  • Data written to disk
  • Token issued and reusable
  • Shared compute
  • Multiple execution paths
  • Controlled by policy

Inert Architecture

Client → Gateway → Encrypted Tunnel → GPU (RAM only) → Response

  • No storage layer exists
  • No transferable authority exists
  • Isolated compute
  • Single execution path
  • Constrained by structure

Security by Correction

The conventional approach

Build first. Harden later.

Firewall rules, ACLs, and rate limits are applied to a system that already works — added because problems appeared, not because the design prevented them.

Security lives in the checks.

Authentication and authorization are conditional runtime checks. A missed if statement leaves the wrong path open. The structure itself is not safe.

Audit logs are instrumentation.

Logging is added after the fact to satisfy compliance requirements. It is optional and can be disabled. Audit coverage depends on remembering to add it everywhere.

Each integration adds new surface.

Every new integration point is a new general-purpose endpoint with its own attack profile. The system grows larger and more difficult to reason about.

Security by Construction

The SignaVision approach

The wrong path does not exist.

Interfaces are typed and narrow by design. An unauthenticated grade-write is not blocked — it is structurally absent. There is no endpoint to find, no logic to bypass.

Security lives in the structure.

A session cannot access another session's data because the data model does not create that relationship. No check is required because no ambiguity was introduced.

Audit trails are structural.

Telemetry is a required part of every attempt record, not optional instrumentation. The system cannot complete a grading transaction without producing an auditable artifact.

Integrations are constrained by definition.

Each integration point has one permitted input shape and one permitted output. No additional surface is created. The system does not grow harder to reason about as it grows larger.

The critical distinction: Security by correction turns security into a maintenance burden — every change requires re-auditing every check. Security by construction makes security a permanent property of the design. Refactoring the system does not break its security because security was never implemented as a layer on top of it. Our platform inherits its security from the design, not from afterthought or correction.

Verifiable

The code says so.

These properties are not policies. They are consequences of how the system is written. Every security property listed on this page is present because the code's structure makes it present — not because a check was added.

Session isolation

import uuid

session_key = uuid.uuid4().hex
LTISession.objects.create(
    session_key=session_key,
    question=question,
    student_id=params.get('user_id'),
)

The session key is generated fresh from a UUID at every launch — never derived from student identity or any predictable input. You cannot enumerate or guess another student's session.

No cross-session access

# Every operation requires the UUID key
lti_session = get_object_or_404(
    LTISession,
    session_key=session_key,
)

There is no endpoint that lists sessions or accepts a student ID to retrieve one. The only access path requires the UUID. Cross-session reading is structurally absent. Not restricted — not possible.

Grade passback gated

# Only runs if LMS provided the channel
if (
    lti_session.lis_outcome_url
    and lti_session.lis_result_sourcedid
):
    _send_lti_grade(
        lti_session.lis_outcome_url,
        lti_session.lis_result_sourcedid,
        score,
    )

Grades can only be written back to the LMS that created the session using the exact endpoint it provided at launch. There is no general grade-write surface. The channel cannot be fabricated.

Audit trail structural

# telemetry is a required model field
attempt = StudentAttempt.objects.create(
    lti_session=lti_session,
    is_correct=is_correct,
    score=score,
    telemetry=telemetry,
)

Telemetry is stored as part of every attempt record — not optional logging. A grading transaction cannot complete without producing an auditable artifact containing the classifier's frame data.

Bounded values & methods

# Score clamped before any passback
score_val = max(0.0, min(1.0, score))

# HTTP method enforced by decorator
@require_POST
def submit(request, session_key):
    ...

@require_GET
def question(request, session_key):
    ...

Scores are structurally clamped before leaving the system. HTTP methods are enforced at the decorator level — a GET to the submit endpoint returns 405 as a structural consequence, not a runtime guard.

Answer space by schema

# Valid answers defined by the question
@property
def expected_letters(self):
    if self.question_type == 'spell':
        return list(self.correct_answer.upper())
    return [self.correct_answer.upper()]

What constitutes a correct response is defined by the question schema — not computed at grading time. The AI classifier can only produce a match against an answer space the data model defines.

None of these are policies enforced at runtime. They are shapes the code takes. A session key that is a UUID cannot be a predictable integer. A grade passback that requires an LMS-provided URL cannot be called without one. A telemetry field that is part of the record schema cannot be omitted. The properties hold because the structure enforces them — not because a developer remembered to check.

The Framework

Three properties every inert system must hold.

Structured

Every component is a defined unit with a known interface. There are no implicit code paths, no hidden execution chains, no ambient authority. The system does only what its structure permits.

  • Typed inputs and outputs at every boundary
  • No open-ended execution surfaces
  • Isolated sessions — no cross-contamination

Observable

What the system does must be visible. Every AI inference, session event, and grade transaction is logged and retrievable. Observability is not optional instrumentation — it is a structural requirement.

  • Per-attempt telemetry with confidence data
  • Auditable LTI grade passback chain
  • Session states inspectable without system disruption

Policy-governed

Behaviour is determined by declared policy, not by emergent runtime decisions. What can happen, when it can happen, and who can trigger it are defined before the system runs — not negotiated during execution.

  • LTI session permissions scoped to the launch context
  • AI outputs constrained to defined response types
  • Grade passback requires an explicit authorization chain

Surface Minimization

Less surface.
Less risk. By design.

The traditional approach to security is to build the system first and harden it afterward. That produces complexity layered on top of complexity — more rules, more exceptions, more configuration to get wrong.

The inert approach eliminates attack surface by never exposing it. Interfaces are purpose-built and narrow. AI models receive structured inputs, not freeform requests. Sessions are isolated. Execution paths are finite and known. There is no general-purpose API to exploit because no general-purpose API was built.

Surface reduction principles

Bounded execution

Every process has a defined start, defined inputs, and a defined termination condition. No open-ended loops, no ambient execution.

Constrained interfaces

Integration points accept exactly what they need. No wildcard parameters, no arbitrary payloads, no passthrough execution.

Session isolation

Each LTI session is a sealed context. Credentials, question state, and telemetry belong to that session and cannot bleed into adjacent sessions.

No ambient authority

A component can only do what its current context explicitly permits. Permissions are scoped to the action, not granted globally.

Structured telemetry, not raw logging

System events are captured as typed data — confidence scores, frame timestamps, outcomes — not unstructured log strings that leak sensitive context.
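A minimal sketch of what typed telemetry means in practice — the field names here are hypothetical, not the platform's actual schema:

```python
from dataclasses import dataclass, asdict

# Hypothetical fields: telemetry captured as typed data,
# not as unstructured log strings that can leak context.
@dataclass(frozen=True)
class AttemptTelemetry:
    confidence: float        # classifier confidence for the matched letter
    frame_timestamps: tuple  # capture times of the evaluated frames
    outcome: str             # 'correct' or 'incorrect'

record = AttemptTelemetry(
    confidence=0.93,
    frame_timestamps=(0.000, 0.033, 0.066),
    outcome="correct",
)
structured = asdict(record)  # serializes to named fields, not free text
```

Because the record is a frozen, typed structure, every captured event has the same known shape — there is no ad-hoc string formatting to audit.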

LTI Assessment — Constrained Flow

LMS origin (Canvas / Blackboard)
  ↓ Boundary: LTI 1.3 signed POST only
Isolated session — UUID keyed, no cross-session access
  ↓ Boundary: Structured JSON payload only
AI classifier — landmark input, letter output, no side effects
  ↓
Grading engine — declared schema, stored telemetry
  ↓ Boundary: Grade passback via explicit outcome URL
LMS gradebook updated — session sealed

Applied Inert Design

A sign-language assessment that cannot misbehave.

The LTI assessment flow is an inert pipeline. A student launches from their LMS through a signed LTI handshake. That handshake establishes a session boundary — an isolated context that governs everything that follows.

The browser AI classifier receives nothing but hand landmark coordinates. It outputs nothing but a letter string. It cannot read the question bank, access session credentials, or trigger grade passback. Those connections were not built into the interface. This is not an access restriction — there is no path to restrict.
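The boundary described above can be sketched as a narrow function: coordinates in, one letter out. The names `classify` and `_model` are illustrative, and `_model` is a stand-in for the real landmark classifier, not its implementation.

```python
# The defined answer space: single letters A through Y.
VALID_LETTERS = {chr(c) for c in range(ord("A"), ord("Y") + 1)}

def _model(landmarks):
    # Stand-in for the real landmark model; always returns a fixed letter here.
    return "A"

def classify(landmarks):
    """Hand-landmark coordinate pairs in, a single letter out — nothing else."""
    if not landmarks or not all(len(point) == 2 for point in landmarks):
        raise ValueError("input must be (x, y) coordinate pairs")
    letter = _model(landmarks)
    if letter not in VALID_LETTERS:
        raise ValueError("output outside the defined answer space")
    return letter
```

The function has no reference to sessions, questions, or grades — those connections are absent from its signature, so there is nothing for the classifier to reach.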

Grade passback requires the exact outcome URL and source ID provided at launch. There is no general grade-write endpoint. The channel exists only within the context that created it.

Each integration point has exactly one permitted action
AI inference cannot escape its structural container
Telemetry is captured as typed, auditable data

System Architecture

One coherent platform. Three inert layers.

Every layer of the SignaVision stack is designed to the same inert standard. No layer can be coerced into bypassing the constraints of the layer beneath it.

Layer 1

Private Compute Infrastructure

Dedicated compute and processing infrastructure. Workloads run in isolated environments with no shared execution context. Resource allocation is policy-governed — AI inference cannot access storage, and storage cannot initiate compute.

Layer 2

Inert Builder — e-Forger Chamber Builder

Structured system generation using IR nodes — discrete, inspectable units rather than freeform code generation. The builder cannot produce unconstrained outputs. Every artifact it generates is a composed inert structure.

Layer 3

Inert Assessment Engine

LMS-integrated sign-language assessment with bounded session management, constrained AI inference, and auditable grade passback. Institutional-grade reliability because the pipeline permits no informal paths.

IR Nodes

Inert philosophy
in code form.

IR nodes prevent the natural entropy of software systems. Instead of allowing arbitrary code structures to accumulate, every component is expressed as a discrete, composable, inspectable unit — with a known type, known inputs, known outputs, and no hidden side effects.

When systems are built from IR nodes, the resulting architecture inherits all three inert properties automatically: it is structured by construction, observable by definition, and policy-governed because the node schema enforces it.

Composable — nodes connect only through typed edges
Inspectable — every node reveals its contract
Replaceable — swap implementation without breaking structure

IR Node — Structure

Node definition
type : "gesture_classifier"
inputs : [ "landmark_coords" ]
outputs : [ "letter: A–Y" ]
side_effects : none
policy : "read landmarks, emit letter, terminate"
Node is inert — cannot influence adjacent context
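A node definition like this can be expressed as a small immutable record. This is a minimal sketch assuming the five-field schema shown (type, inputs, outputs, side_effects, policy), not the builder's actual format:

```python
from dataclasses import dataclass

# frozen=True: a node cannot be mutated after definition —
# the inert property is enforced by the record itself.
@dataclass(frozen=True)
class IRNode:
    type: str
    inputs: tuple
    outputs: tuple
    side_effects: tuple
    policy: str

    def contract(self):
        """Every node reveals its contract for inspection."""
        return {"type": self.type, "inputs": self.inputs, "outputs": self.outputs}

classifier = IRNode(
    type="gesture_classifier",
    inputs=("landmark_coords",),
    outputs=("letter",),
    side_effects=(),  # empty by construction for an inert node
    policy="read landmarks, emit letter, terminate",
)
```

Because the record is frozen, any attempt to rewrite a node's type, edges, or policy after definition fails — inspection is always of the node as declared.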

Built on inert architecture

Systems worth trusting are
designed to be trustworthy.

Institutions using SignaVision are not relying on policy, process, or promise. They are relying on systems where trust is a property of the architecture itself.