Using StormForger with GitHub Actions

This guide shows an example GitHub Actions workflow that utilises StormForger to perform a load test after every deployment as well as once a week. A validation run is also performed against the staging environment for every push to ensure the load test definition stays up to date.

GitHub Actions Successful Job Execution

For this guide we assume you have general knowledge of GitHub Actions and how it works. You also need the permissions to configure secrets in your repository or organisation.

The code is available on GitHub at stormforger/example-github-actions. Our example service is written in Go, but you don't need to know Go, as we will only discuss the StormForger related steps.

Note that this is just an example and your actual development workflow may differ. Please take this as inspiration for how to use StormForger with GitHub Actions.


Please follow the Getting Started with the Forge CLI guide to create an API Token and configure it as the STORMFORGER_JWT secret in your repository.

Follow the steps described in GitHub's Creating and storing encrypted secrets guide. When you are done, it should look like this:

GitHub Actions Secrets

Note: You can also configure secrets on the organisation level. If you have multiple repositories using StormForger, this might be easier to manage.

Inside the workflow this secret can now be referenced as ${{ secrets.STORMFORGER_JWT }}. It is not an environment variable, so we need to explicitly pass it along to every step that uses the forge CLI.
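For example, a step that calls the forge CLI could pass the secret like this (a minimal sketch, assuming the CLI is already installed in the working directory):

```yaml
- name: StormForger | Verify token
  env:
    STORMFORGER_JWT: ${{ secrets.STORMFORGER_JWT }}
  run: ./forge ping
```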


Our workflow file cicd.yml consists of three jobs: build, test and deploy. The first two are run for every push while deploy is only run for changes on the master branch. Our goal is to run the load test in validation mode against the staging target in the test job and against the production environment in the deploy job. For both jobs we follow the same steps:

  1. Setup the StormForger CLI
  2. Manage Data Sources
  3. Build the Test Run and Launch It

Let's go through each step one by one.
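For orientation, the overall shape of cicd.yml reduced to its skeleton might look like this (details elided; the `if` condition is one common way to restrict a job to the master branch):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    # build steps ...
  test:
    runs-on: ubuntu-latest
    needs: build
    # validation run against staging for every push ...
  deploy:
    if: github.ref == 'refs/heads/master'
    runs-on: ubuntu-latest
    needs: test
    # load test against production ...
```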

Note: Below we show each step with all relevant environment variables. Since there are certain repetitions, the real workflow deduplicates some of them by moving them to the job and workflow level.

Setup the StormForger CLI

First, we need to install the forge CLI:

- name: StormForger | Install latest forge CLI
  env:
    STORMFORGER_JWT: ${{ secrets.STORMFORGER_JWT }}
  run: |
    wget -O forge_linux_amd64.tar.gz
    tar -xzf forge_linux_amd64.tar.gz
    ./forge ping

GitHub Actions runs every job in a separate VM and by default does not share any data between jobs. We therefore need to reinstall the CLI in every job where we want to use it. We pass along the STORMFORGER_JWT secret so we can run the ./forge ping command, which performs an authorized ping against the StormForger API and verifies that the token is valid and usable.

Manage Data Sources

Data Sources allow a test run to pick random data out of a predefined pool, e.g. a product out of all the available inventory. If you don't use data sources, you can skip this step.

In our workflow, we use the script ./scripts/ to generate a CSV file that we can use in our test, but this can be easily changed or extended to download the latest inventory data from a database.
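Such a generation script might, for instance, simply write out a static CSV (a hypothetical sketch; the actual script name and columns in the example repository may differ):

```shell
#!/usr/bin/env sh
# Hypothetical sketch of a data-source generation script: writes a CSV
# of products that a test run can later pick from at random.
set -eu

# Assumed argument, e.g. "staging" or "production"
TARGET_ENV="${1:-staging}"

{
  echo "product_id,price"
  echo "1001,19.99"
  echo "1002,42.00"
  echo "1003,7.50"
} > "products-${TARGET_ENV}.csv"
```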

- name: StormForger | Upload data-sources
  env:
    STORMFORGER_JWT: ${{ secrets.STORMFORGER_JWT }}
    TARGET_ENV: "production"
  run: |
    ./scripts/ "${TARGET_ENV}"
    ./forge datasource push demo *.csv --name-prefix-path="${{ }}/${TARGET_ENV}/" --auto-field-names

We prefix all uploaded CSV files with our repository name and the target environment to make them easily distinguishable in the data source management of StormForger.

This step is the same for both the staging and production environment, except for the value of the TARGET_ENV variable.

Build the Test Run and Launch It

Heads Up! Testing in production can be dangerous. Make sure the configured arrival rate/load in your test case is low enough so you don't trigger any incidents. The goal here is not to run stress tests regularly but to ensure we do not experience any major performance regression under nominal load.

As the last step we launch the test run. We use one script here: scripts/ It combines the load test (in loadtest/loadtest.js) with an environment-specific prefix (loadtest/staging.js or loadtest/production.js). Building the test case dynamically for each environment allows us to modify the target system URLs, specify global variables and even define different arrival phases as needed. Afterwards we use forge test-case launch to launch the test case.
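A minimal sketch of that combination step, assuming it is a plain concatenation (the stand-in file contents here are made up for illustration):

```shell
#!/usr/bin/env sh
# Hypothetical sketch: build the environment-specific test case by
# prepending the environment prefix to the shared load test definition.
set -eu

TARGET_ENV="staging"   # or "production"
mkdir -p loadtest

# Example stand-ins for the real files in the repository:
printf 'var target = "https://staging.example.com";\n' > "loadtest/${TARGET_ENV}.js"
printf '// shared scenario definition\n' > loadtest/loadtest.js

# Environment prefix first, then the shared load test definition
cat "loadtest/${TARGET_ENV}.js" loadtest/loadtest.js > /tmp/testcase.js
```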

This step can be more or less sophisticated, for example generating more complex test cases, compiling them from TypeScript or using a bundler like gulp/webpack.

- name: StormForger | Launch test run
  env:
    STORMFORGER_JWT: ${{ secrets.STORMFORGER_JWT }}
    TARGET_ENV: "production"
    TESTCASE: "demo/${{ }}-production"
    TITLE: "${{github.workflow}}#${{github.run_number}} (${{github.ref}})"
    NOTES: |
      Name | Value
      ---- | -----
      Ref | ${{github.ref}}
      git SHA | [${{github.sha}}](${{github.event.head_commit.url}})
      Workflow | ${{github.workflow}}
      Run#     | ${{github.run_number}}
      RunID    | [${{github.run_id}}](https://github.com/${{github.repository}}/actions/runs/${{github.run_id}})
      Actor    | ${{github.actor}}
    LAUNCH_ARGS: "--validate"
  run: |
    ./scripts/ "${TARGET_ENV}" "/tmp/testcase.js"
    ./forge test-case launch "${TESTCASE}" --test-case-file="/tmp/testcase.js" \
      --title="${TITLE}" --notes="${NOTES}" ${LAUNCH_ARGS}

The previous step uses a lot of environment variables, mainly for formatting reasons. NOTES and TITLE are stored with the test run and provide metadata based on the current commit. This allows easier retracing and comparing later on, as we also link back to the current GitHub Actions job. With LAUNCH_ARGS: "--validate" we launch the test run only in validation mode for our staging environment. For the production environment we instead pass LAUNCH_ARGS: "--nfr-check-file=./loadtest/loadtest.nfr.yaml", which performs the Non-Functional Requirement checks after the test run has finished.

Formatted, the notes and title look like this:

Formatted test run notes and title

Scheduled Test-Run Execution

Finally, we want to run the load test once a week. GitHub Actions allows this via scheduled events:

    on:
      schedule:
        # Do a run every sunday night
        - cron: "12 5 * * 0"

Since we don't want to run through all phases (build, test, deploy) nor do we want to redeploy to production for this, we use a separate workflow that runs every week and only contains the already discussed steps to launch a test run.
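Such a separate workflow could look roughly like this (a sketch; the file name and job details are assumptions, not taken from the example repository):

```yaml
# .github/workflows/weekly-loadtest.yml (hypothetical file name)
name: weekly-loadtest
on:
  schedule:
    # Do a run every sunday night
    - cron: "12 5 * * 0"
jobs:
  loadtest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # same steps as in cicd.yml: install the forge CLI,
      # upload data sources, build and launch the test run
```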


To summarize, we used GitHub Actions to download our CLI, upload data sources and launch a test run for every environment in our development cycle. A weekly job verifies that the load test continues to work and that no other factors introduce regressions. By using NFR checks, we automatically verify that our non-functional requirements are fulfilled.
