Automating Worst‑Case Execution Time (WCET) Checks in CI: Practical Guide with VectorCAST + RocqStat
2026-03-02

Step-by-step 2026 guide to integrate RocqStat + VectorCAST WCET checks into CI, with containerized runners, gating logic and report templates.

If your embedded or safety‑critical builds still treat timing verification as a late-stage manual activity, you’re courting late project surprises, audit findings and wasted hardware cycles. In 2026 the tooling and industry expectations have converged: WCET must be reproducible, automated and gated in CI just like unit tests. This guide shows exactly how to do that with VectorCAST and RocqStat (now part of the Vector toolchain), including containerized runners, CI pipeline examples, gating logic and report templates you can drop into your pipelines today.

Why this matters in 2026

In January 2026 Vector Informatik acquired StatInf's RocqStat technology and team to bring timing analysis and WCET estimation into the VectorCAST ecosystem. That deal reflects a wider trend: regulators and program managers now demand continuous, auditable timing evidence alongside functional tests—especially for automotive (ISO 26262), avionics (DO‑178C) and industrial systems (IEC 61508). Integrating timing analysis into CI/CD is no longer optional; it’s part of a modern safety‑case and a FinOps-like practice for real‑time resource usage.

“Timing safety is becoming a critical verification axis—tools must provide reproducible WCET results that integrate with existing test automation.” — industry reporting on the Vector/RocqStat integration (Automotive World, Jan 2026)

Scope and assumptions

This guide assumes you have:

  • Access to VectorCAST and RocqStat (or the RocqStat CLI) under license; you’ll need valid licenses for non‑trial CI runs.
  • An existing CI system (GitLab CI, GitHub Actions, Jenkins, Azure DevOps or similar) and ability to run containerized runners or dedicated agents.
  • Familiarity with your target board/emulator and build system (Make/CMake, Yocto, etc.).

High-level automation pattern

Embed WCET analysis into CI the same way you run unit tests or static analysis. The components are:

  • Reproducible runner image: container with VectorCAST, RocqStat CLI, compilers and toolchain wrappers.
  • Instrumented build: consistent build flags, map files, and binary layout matching the analysis assumptions.
  • Analysis step: run RocqStat to produce machine‑readable WCET outputs (JSON/JUnit/XML).
  • Gating logic: fail pipeline or block merge when WCET exceeds deadlines or regressions exceed thresholds.
  • Artifact storage & trend: store results for audits and plot trending (Prometheus/Grafana or long‑term storage like S3/MinIO).

Step-by-step: Build a reproducible Docker image

Containerizing the analysis guarantees identical environments across CI runners and developers’ desktops. Your container will include the tool CLIs and any helper scripts.

Example Dockerfile (conceptual)

FROM ubuntu:22.04

# Install build essentials and runtime dependencies
RUN apt-get update && apt-get install -y build-essential python3 jq curl unzip \
    && rm -rf /var/lib/apt/lists/*

# Copy VectorCAST and RocqStat installers (licensed). In CI you'll mount license files or use a license server.
COPY installers/vectorcast-installer.sh /tmp/
COPY installers/rocqstat-cli.tar.gz /tmp/
RUN bash /tmp/vectorcast-installer.sh --accept-license --install-dir /opt/vectorcast \
    && tar -xzf /tmp/rocqstat-cli.tar.gz -C /opt/rocqstat

ENV PATH="/opt/vectorcast/bin:/opt/rocqstat/bin:${PATH}"

# Add helper scripts
COPY ci/run_rocqstat.sh /usr/local/bin/run_rocqstat.sh
RUN chmod +x /usr/local/bin/run_rocqstat.sh

WORKDIR /work
ENTRYPOINT ["/bin/bash"]

Notes:

  • Licensing: store license keys safely (license server, Vault, or secure mounted volume). Avoid baking licenses into images.
  • Image size: VectorCAST and RocqStat are large; use layered builds and private registries.

Step 1 — Ensure a deterministic build

WCET analysis depends on instruction layout and binary artifacts. Your CI build must be deterministic:

  • Use fixed compiler versions and flags (record them in the artifact).
  • Pin linker scripts and memory maps; store the final ELF/map files as artifacts.
  • Use reproducible timestamps (SOURCE_DATE_EPOCH) to avoid accidental diffs.
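As a sketch, the recorded facts can be bundled into a small metadata artifact stored next to the ELF. The field names here are illustrative, not a VectorCAST or RocqStat format:

```python
import hashlib

def record_build_metadata(elf_bytes: bytes, compiler_version: str,
                          cflags: list, source_date_epoch: int) -> dict:
    """Bundle the build facts WCET analysis depends on into one JSON-able artifact.

    Hypothetical sketch: field names are illustrative, not a vendor format.
    """
    return {
        # Hash of the final binary ties the WCET result to an exact artifact
        "elf_sha256": hashlib.sha256(elf_bytes).hexdigest(),
        "compiler": compiler_version,            # e.g. output of `$CC --version`
        "cflags": cflags,                        # pinned flags, recorded verbatim
        "source_date_epoch": source_date_epoch,  # reproducible timestamps
    }
```

Serialize this dict with `json.dump` alongside `wcet.json` so auditors can confirm the analysis assumptions match the shipped binary.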

Step 2 — Run the analysis in CI

Execute RocqStat from the container or agent after the build step. The command depends on the CLI interface available in your version; below is a conceptual wrapper script that you can adapt.

run_rocqstat.sh (example)

#!/bin/bash
set -euo pipefail

BUILD_DIR=${1:-build}
PROJECT_NAME=${2:-my_project}
OUT_DIR=${3:-artifacts/rocqstat}
mkdir -p "${OUT_DIR}"

# 1) Prepare map and binary
cp "${BUILD_DIR}/firmware.elf" "${OUT_DIR}/firmware.elf"
cp "${BUILD_DIR}/firmware.map" "${OUT_DIR}/firmware.map"

# 2) Run static WCET estimator
# Replace with the actual RocqStat CLI invocation after checking your license and arguments
rocqstat analyze --elf "${OUT_DIR}/firmware.elf" --map "${OUT_DIR}/firmware.map" \
  --project "${PROJECT_NAME}" --output "${OUT_DIR}/wcet.json" --format json

# 3) Convert to JUnit for CI test reporting
python3 /opt/rocqstat/tools/to_junit.py "${OUT_DIR}/wcet.json" > "${OUT_DIR}/wcet.junit.xml"

# 4) Exit non-zero if a deadline or regression threshold is exceeded (gating)
python3 /usr/local/bin/wcet_gate.py "${OUT_DIR}/wcet.json" || exit 2

Key points:

  • Produce machine‑readable output (JSON + JUnit XML) for gating and dashboards.
  • Separate analysis and gating: analysis always produces artifacts; gating enforces policy.
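The `to_junit.py` helper invoked in step 3 is referenced but not shown; a minimal sketch of such a converter, assuming the `wcet.json` layout used throughout this guide (the helper name and layout are assumptions, not vendor tooling), could look like:

```python
import xml.etree.ElementTree as ET

def wcet_to_junit(wcet: dict, deadline_us: int) -> str:
    """Map each analyzed function to one JUnit test case; over-deadline -> failure."""
    functions = wcet.get("functions", [])
    suite = ET.Element("testsuite", name="wcet", tests=str(len(functions)))
    failures = 0
    for fn in functions:
        case = ET.SubElement(suite, "testcase",
                             classname=wcet.get("project", "wcet"), name=fn["name"])
        msg = f"WCET {fn['wcet_us']}us (deadline {deadline_us}us)"
        if fn["wcet_us"] > deadline_us:
            # A missed deadline surfaces as a native CI test failure
            ET.SubElement(case, "failure", message=msg)
            failures += 1
        else:
            ET.SubElement(case, "system-out").text = msg
    suite.set("failures", str(failures))
    root = ET.Element("testsuites")
    root.append(suite)
    return ET.tostring(root, encoding="unicode")
```

Because the converter only reports, it never exits non-zero itself; policy enforcement stays in the separate gating step.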

Step 3 — Gating logic: how and when to fail a build

Gating must be defensible and configurable. Typical gating rules:

  • Hard deadline: fail if any measured/static WCET > safety deadline.
  • Regression threshold: fail if WCET increases more than X% (or Y microseconds) against baseline.
  • Noise/uncertainty handling: allow small fluctuations (statistical confidence intervals) or require multiple runs.
  • Contextual gating: for feature branches use softer thresholds; for release branches use strict fails.

Example gating script logic (wcet_gate.py simplified)

#!/usr/bin/env python3
import json, os, sys

f = sys.argv[1]
with open(f) as fh:
    data = json.load(fh)

# Expected structure: {"functions": [{"name":..., "wcet_us":...}, ...], "timestamp":...}
DEADLINE_US = int(os.getenv('WCET_DEADLINE_US', '5000'))
REGRESSION_PCT = float(os.getenv('WCET_REGRESSION_PCT', '5.0'))

failed = False
for fn in data.get('functions', []):
    if fn['wcet_us'] > DEADLINE_US:
        print(f"FAIL: {fn['name']} WCET {fn['wcet_us']}us > deadline {DEADLINE_US}us")
        failed = True

# Compare to baseline stored in artifacts/baseline.json (fetch from artifact store in CI)
# Simplified: load baseline if exists and check percent increase
try:
    with open('artifacts/baseline/wcet.json') as bh:
        base = json.load(bh)
    for fn in data.get('functions', []):
        b = next((x for x in base.get('functions',[]) if x['name']==fn['name']), None)
        if b:
            delta = fn['wcet_us'] - b['wcet_us']
            pct = (delta / b['wcet_us']) * 100 if b['wcet_us'] else 0
            if pct > REGRESSION_PCT:
                print(f"REGRESSION: {fn['name']} +{pct:.1f}%")
                failed = True
except FileNotFoundError:
    print('No baseline present; skipping regression checks')

sys.exit(1 if failed else 0)

CI examples: GitLab CI and GitHub Actions

Below are compact, production‑oriented CI snippets. Adapt to your runner and artifact store.

GitLab CI (gitlab-ci.yml snippet)

stages:
  - build
  - wcet

build:
  stage: build
  image: registry.mycompany.com/tooling:build-ubuntu
  script:
    - make all
  artifacts:
    paths:
      - build/firmware.elf
      - build/firmware.map
    expire_in: 7d

wcet_analysis:
  stage: wcet
  image: registry.mycompany.com/tooling/vector-rocqstat:2026
  needs:
    - job: build
      artifacts: true
  script:
    - /usr/local/bin/run_rocqstat.sh build my_project artifacts/rocqstat
  artifacts:
    when: always
    paths:
      - artifacts/rocqstat/wcet.json
      - artifacts/rocqstat/wcet.junit.xml
    expire_in: 90d
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # this job runs only on main; add softer rules for feature branches

GitHub Actions (workflow snippet)

name: CI
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make all
      - name: Upload build artifacts
        uses: actions/upload-artifact@v4
        with:
          name: build-artifacts
          path: build/

  wcet:
    needs: build
    runs-on: [self-hosted, rocqstat]
    container:
      image: registry.mycompany.com/tooling/vector-rocqstat:2026
    steps:
      - uses: actions/checkout@v4
      - name: Download build artifacts
        uses: actions/download-artifact@v4
        with:
          name: build-artifacts
      - name: Run RocqStat
        run: /usr/local/bin/run_rocqstat.sh build my_project artifacts/rocqstat
      - name: Upload results
        uses: actions/upload-artifact@v4
        with:
          name: wcet-results
          path: artifacts/rocqstat/

Reporting templates: JSON and JUnit

Provide two artifacts: a detailed JSON for automated trend analysis and a JUnit XML for native CI test reporting. Below is a recommended JSON schema and a tiny JUnit mapping.

{
  "project": "my_project",
  "commit": "",
  "timestamp": "2026-01-17T12:34:56Z",
  "platform": {
    "arch": "armv7-m",
    "cpu_freq_hz": 48000000,
    "compiler": "gcc-10",
    "linker_script": "layout.ld"
  },
  "functions": [
    {
      "name": "Sensor_Read",
      "wcet_us": 1200,
      "best_estimate_us": 800,
      "confidence": 0.995,
      "source_file": "src/sensor.c:42"
    }
  ],
  "notes": "RocqStat static analysis v1.2.3"
}
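Before gating or trend ingestion, it is worth rejecting malformed reports early. A minimal validator against the schema above (required keys only, not a full JSON Schema; the key names follow the example) might be:

```python
REQUIRED_TOP = {"project", "timestamp", "platform", "functions"}
REQUIRED_FN = {"name", "wcet_us"}

def validate_wcet_report(report: dict) -> list:
    """Return a list of problems; an empty list means the report is usable."""
    problems = [f"missing top-level key: {k}" for k in REQUIRED_TOP - report.keys()]
    for i, fn in enumerate(report.get("functions", [])):
        for k in REQUIRED_FN - fn.keys():
            problems.append(f"functions[{i}] missing key: {k}")
        if "wcet_us" in fn and fn["wcet_us"] < 0:
            # A negative bound signals a broken or truncated analysis run
            problems.append(f"functions[{i}] has negative wcet_us")
    return problems
```

Run the check in the analysis step and fail fast on problems, so gating never silently passes on an empty or truncated report.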

Minimum JUnit mapping

<testsuites>
  <testsuite name="wcet" tests="1" failures="0" time="0">
    <testcase classname="Sensor" name="Sensor_Read">
      <system-out>WCET 1200us (deadline 5000us)</system-out>
    </testcase>
  </testsuite>
</testsuites>

CI systems will display JUnit results; for deeper analysis ingest JSON into your trend DB (InfluxDB/Prometheus) and plot percentiles and deltas.

Trend analysis and dashboards

WCET is inherently a time series problem. A single pass is useful for gating, but teams need long‑term trends:

  • Export key metrics (max WCET per function, percent changes) to Prometheus or InfluxDB.
  • Create Grafana dashboards showing per‑function trends, recent commits causing regressions, and distribution bands.
  • Alerting: trigger Slack/email when a regression exceeds thresholds or when confidence drops.
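As one concrete option, the `wcet.json` functions can be flattened into InfluxDB line protocol for the trend database. The measurement and tag names here are illustrative choices, not a fixed convention:

```python
def to_influx_lines(report: dict, timestamp_ns: int) -> list:
    """Emit one InfluxDB line-protocol point per function (measurement 'wcet')."""
    lines = []
    project = report.get("project", "unknown")
    for fn in report.get("functions", []):
        # Escape spaces in tag values, per the line-protocol rules
        name = fn["name"].replace(" ", "\\ ")
        lines.append(
            f"wcet,project={project},function={name} "
            f"wcet_us={fn['wcet_us']}i {timestamp_ns}"
        )
    return lines
```

POST the joined lines to the InfluxDB write endpoint from the CI job; Grafana can then chart per-function WCET over commits.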

Practical tips — CI reliability and speed

  • Split analysis scope: Run full, slow global WCET analysis nightly but run targeted per‑commit checks on changed modules/functions to keep feedback fast.
  • Cache intermediate results: Use CI caching for symbol maps and intermediate analysis graphs to reduce runtime.
  • Use dedicated runners for deterministic performance: for measurement-based WCET runs, prefer pinned bare‑metal runners or cloud instances with reserved CPU to avoid noisy neighbours.
  • Parallelize safely: static analysis can often be parallelized by compilation unit; ensure licensing allows concurrent tool instances.
  • Reproducibility: log environment, tool versions, seed values and store them with artifacts for audits.
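The targeted per-commit scoping in the first tip can be as simple as intersecting the report's `source_file` fields with the commit's changed files (obtained, for instance, from `git diff --name-only`). A sketch, assuming the `source_file` format shown in the JSON template:

```python
def functions_in_changed_files(report: dict, changed_files: set) -> list:
    """Select functions whose source file appears in the commit diff,
    so per-commit CI runs a targeted analysis while the full run stays nightly."""
    selected = []
    for fn in report.get("functions", []):
        src = fn.get("source_file", "")
        path = src.rsplit(":", 1)[0]  # "src/sensor.c:42" -> "src/sensor.c"
        if path in changed_files:
            selected.append(fn["name"])
    return selected
```

Feed the selected names to the analysis step as its scope; an empty selection can skip the job entirely for doc-only commits.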

Handling measurement‑based vs static analysis

RocqStat and related technologies emphasize static, sound WCET estimations that provide upper bounds. Many teams also use measurement-based (profiling) approaches; treat them as complementary:

  • Static analysis provides conservative, analyzable upper bounds useful for certification and gating.
  • Measurement-based (HIL, QEMU) uncovers realistic hotspots, but cannot replace sound upper bounds.
  • Combine both: use measurement runs in CI to validate the estimators and to catch environment‑specific regressions.
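A cheap CI cross-check between the two approaches: assert that no observed maximum ever exceeds the static bound, since a violation indicates a wrong analysis configuration or target model. A sketch, assuming measured maxima are keyed by function name:

```python
def check_measured_under_bound(static_fns: list, measured_max_us: dict) -> list:
    """Flag any function whose observed maximum exceeds its static WCET bound."""
    violations = []
    for fn in static_fns:
        observed = measured_max_us.get(fn["name"])
        if observed is not None and observed > fn["wcet_us"]:
            # (name, observed max, claimed bound) -- this should never happen
            violations.append((fn["name"], observed, fn["wcet_us"]))
    return violations
```

Treat any violation as a hard pipeline failure: it is evidence the bound is unsound, which is worse than a missed deadline.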

Auditability and safety cases

Automating WCET in CI is meaningless unless outputs are auditable:

  • Store JSON/JUnit outputs in immutable artifact stores (S3 with versioning, Nexus/Artifactory).
  • Record licenses, tool versions, and build metadata in each artifact.
  • Link WCET evidence to your requirements management and test management systems (VectorCAST integrations or ALM links).

Security and supply‑chain considerations (2026 focus)

By 2026, security and SBOM are mandatory in many programs. For WCET pipelines:

  • Generate SBOMs for the analysis image and store them with results.
  • Scan containers and artifacts for CVEs and outdated components—especially toolchains that affect binary layout.
  • Use ephemeral credentials and hardware security modules (HSM) or Vault to serve licenses securely to CI agents.

Example: gating policy you can adopt

  1. On PR: run targeted RocqStat checks on changed functions. Warning on >2% increase, block merge on >10% increase.
  2. On main: run full project static WCET; block merge for any function exceeding documented deadlines.
  3. Nightly: run exhaustive WCET with extended settings (deep cache analysis) and update baselines after triage and approval.
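Encoded as code, the branch-dependent policy above might look like this sketch (thresholds illustrative, matching the 2%/10% PR rule):

```python
def thresholds_for(branch: str) -> dict:
    """Pick gating strictness by branch: strict on main, softer on feature branches."""
    if branch == "main":
        return {"warn_pct": 0.0, "block_pct": 0.0, "enforce_deadlines": True}
    return {"warn_pct": 2.0, "block_pct": 10.0, "enforce_deadlines": False}

def verdict(branch: str, regression_pct: float, over_deadline: bool) -> str:
    """Return 'pass', 'warn' or 'block' for one function's WCET result."""
    t = thresholds_for(branch)
    if over_deadline and t["enforce_deadlines"]:
        return "block"
    if regression_pct > t["block_pct"]:
        return "block"
    if regression_pct > t["warn_pct"]:
        return "warn"
    return "pass"
```

Keeping the policy in one reviewed function makes the gating defensible: auditors can read exactly which thresholds applied to which branch.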

Common pitfalls and how to avoid them

  • Non‑deterministic builds: failing to pin compilers or linker scripts creates noisy WCET deltas—fix by pinning and recording versions.
  • Using flawed baselines: always validate a baseline's provenance before using it as comparison—track baseline approval in CI metadata.
  • Ignoring confidence/uncertainty: use statistical thresholds, not absolute pass/fail for small variations.
  • Tool licensing surprises: confirm concurrent runner limits and automation licensing with your tool vendor (Vector's RocqStat integration may alter licensing models—check 2026 product notes).

Real-world example: incremental adoption

One automotive supplier we worked with adopted a three‑phase rollout in 2025–2026:

  1. Phase 1 (pilot): integrated RocqStat CLI in nightly CI for a single ECU project; produced JSON artifacts and manual review dashboards.
  2. Phase 2 (PR gating): added targeted per‑function checks for PRs with soft warnings and a weekly report to developers showing hotspots.
  3. Phase 3 (release): enforced hard deadlines on the release branch and used artifacts as part of the ISO 26262 safety case. Combined static and measurement runs for final sign‑off.

Future directions and predictions for 2026–2028

Expect these developments through 2028:

  • Tighter VectorCAST + RocqStat integration produces single‑click WCET evidence exports linked to Vector test cases.
  • Cloud vendors offering certified embedded CI runners (hardware speed profiles) to support measurement‑based pipelines as a service.
  • Regulatory guidelines moving toward continuous verification—automated WCET evidence will be required in many certification artifacts.

Checklist: What to implement in your repo this week

  • Create a container image with RocqStat CLI and VectorCAST tools (or confirm vendor image availability).
  • Add a CI job to produce WCET JSON and JUnit artifacts.
  • Implement a gating script that compares against baseline and deadlines.
  • Store artifacts in immutable storage and add a Grafana dashboard for trends.
  • Document the pipeline and link outputs to your safety case or release checklist.

Closing thoughts

Automating WCET checks in CI turns what used to be an adversarial, late-stage verification step into a continuous, auditable part of your development lifecycle. With the Vector/RocqStat combination maturing in 2026, teams have the opportunity to integrate timing safety into day‑to‑day workflows—reducing surprises, improving traceability for certification, and accelerating delivery of reliable embedded systems.

Call to action: Start small: add a containerized RocqStat job that produces JSON/JUnit artifacts this week. If you need a jumpstart, contact a managed tooling partner to help build the CI runner, migration plan, and gating policies tailored to your safety domain.
