GitLab CI/CD

This guide explains how to integrate the LoadFocus JMeter API Client with GitLab CI/CD for automated performance testing.

Setup Steps

1. Store Credentials as GitLab CI/CD Variables

First, store your LoadFocus API credentials as GitLab CI/CD variables:

  1. Go to your GitLab project
  2. Navigate to Settings > CI/CD > Variables
  3. Add the following variables (a command-line alternative is sketched after this list):
    • LOADFOCUS_API_KEY: Your LoadFocus API key (mark as "Masked")
    • LOADFOCUS_TEAM_ID: Your LoadFocus team ID
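
If you prefer to create these variables from the command line rather than the UI, the GitLab project variables API can be used. The sketch below assumes a personal access token with the api scope and your numeric project ID; all placeholder values are illustrative:

# Create the LoadFocus variables via the GitLab API (illustrative placeholders)
curl --request POST \
  --header "PRIVATE-TOKEN: <your-access-token>" \
  --form "key=LOADFOCUS_API_KEY" \
  --form "value=<your-loadfocus-api-key>" \
  --form "masked=true" \
  "https://gitlab.com/api/v4/projects/<project-id>/variables"

curl --request POST \
  --header "PRIVATE-TOKEN: <your-access-token>" \
  --form "key=LOADFOCUS_TEAM_ID" \
  --form "value=<your-loadfocus-team-id>" \
  "https://gitlab.com/api/v4/projects/<project-id>/variables"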

2. Create a GitLab CI/CD Pipeline

Create or update your .gitlab-ci.yml file in your repository:

stages:
  - build
  - test
  - performance
  - deploy

variables:
  NODE_VERSION: "16"

build:
  stage: build
  image: node:${NODE_VERSION}
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 week

test:
  stage: test
  image: node:${NODE_VERSION}
  script:
    - npm install
    - npm test

performance_test:
  stage: performance
  image: node:${NODE_VERSION}
  script:
    # Install LoadFocus JMeter API Client
    - npm install -g @loadfocus/loadfocus-api-client
    # Configure LoadFocus API Client
    - loadfocus-api config set apikey $LOADFOCUS_API_KEY
    - loadfocus-api config set teamid $LOADFOCUS_TEAM_ID
    # Run Performance Tests
    - |
      loadfocus-api jmeter run-test \
        --name "GitLab_${CI_PROJECT_NAME}_${CI_COMMIT_REF_NAME}" \
        --thresholds "avgresponse<=200,errors==0,p95<=250" \
        --format json > performance_results.json
  artifacts:
    paths:
      - performance_results.json
    expire_in: 1 week
    when: always
  # Optional: Only run on specific branches
  only:
    - main
    - develop

deploy:
  stage: deploy
  script:
    - echo "Deploying application..."
  only:
    - main
  # Only deploy if all previous stages succeeded
  when: on_success
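
Before committing, it is worth validating the pipeline definition with GitLab's CI Lint tool (available in the web UI pipeline editor). If you use the glab CLI, recent versions also provide a lint command along these lines:

# Validate .gitlab-ci.yml from the repository root (requires the glab CLI)
glab ci lint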

3. View Test Results

After the pipeline runs:

  1. Go to your GitLab project
  2. Navigate to CI/CD > Pipelines
  3. Find your pipeline and click on it
  4. Go to the "performance_test" job
  5. Click on "Browse" in the right sidebar to view artifacts
  6. Download and view the performance_results.json file (a quick way to inspect it locally is shown after this list)
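
If you download performance_results.json locally, jq gives a quick way to inspect the key metrics. The field paths below mirror the ones used later in this guide and are assumptions about the result format; adjust them to match your actual output:

# Print the metrics for the first label (assumed JSON shape)
jq '.labels[0].metrics' performance_results.json

# Extract individual values, e.g. average response time and error count
jq '.labels[0].metrics.avgresponse, .labels[0].metrics.errors' performance_results.json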

Advanced Configuration

Environment-specific Testing

Run different performance tests for different environments:

.performance_test_template: &performance_test_definition
  stage: performance
  image: node:${NODE_VERSION}
  script:
    - npm install -g @loadfocus/loadfocus-api-client
    - loadfocus-api config set apikey $LOADFOCUS_API_KEY
    - loadfocus-api config set teamid $LOADFOCUS_TEAM_ID
    - |
      loadfocus-api jmeter run-test \
        --name "${TEST_NAME}" \
        --thresholds "${THRESHOLDS}" \
        --format json > performance_results.json
  artifacts:
    paths:
      - performance_results.json
    expire_in: 1 week
    when: always

performance_test_develop:
  <<: *performance_test_definition
  variables:
    TEST_NAME: "API_Test_Develop"
    THRESHOLDS: "avgresponse<=300,errors==0,p95<=500"
  only:
    - develop

performance_test_staging:
  <<: *performance_test_definition
  variables:
    TEST_NAME: "API_Test_Staging"
    THRESHOLDS: "avgresponse<=250,errors==0,p95<=350"
  only:
    - staging

performance_test_production:
  <<: *performance_test_definition
  variables:
    TEST_NAME: "API_Test_Production"
    THRESHOLDS: "avgresponse<=200,errors==0,p95<=250"
  only:
    - main
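
The same reuse can be expressed with GitLab's extends keyword instead of YAML anchors, which some teams find more readable. A minimal sketch for one of the jobs, reusing the hidden .performance_test_template job defined above:

performance_test_develop:
  extends: .performance_test_template
  variables:
    TEST_NAME: "API_Test_Develop"
    THRESHOLDS: "avgresponse<=300,errors==0,p95<=500"
  only:
    - develop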

Parallel Testing

Run multiple performance tests in parallel:

performance_test_api:
  stage: performance
  image: node:${NODE_VERSION}
  script:
    - npm install -g @loadfocus/loadfocus-api-client
    - loadfocus-api config set apikey $LOADFOCUS_API_KEY
    - loadfocus-api config set teamid $LOADFOCUS_TEAM_ID
    - |
      loadfocus-api jmeter run-test \
        --name "API_Performance_Test" \
        --thresholds "avgresponse<=150,errors==0" \
        --format json > api_performance_results.json
  artifacts:
    paths:
      - api_performance_results.json
    expire_in: 1 week

performance_test_ui:
  stage: performance
  image: node:${NODE_VERSION}
  script:
    - npm install -g @loadfocus/loadfocus-api-client
    - loadfocus-api config set apikey $LOADFOCUS_API_KEY
    - loadfocus-api config set teamid $LOADFOCUS_TEAM_ID
    - |
      loadfocus-api jmeter run-test \
        --name "UI_Performance_Test" \
        --thresholds "avgresponse<=300,errors==0" \
        --format json > ui_performance_results.json
  artifacts:
    paths:
      - ui_performance_results.json
    expire_in: 1 week
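
Since the two jobs differ only in their name, thresholds, and output file, they could also be generated from a single definition with GitLab's parallel:matrix keyword (available in recent GitLab versions). A sketch, assuming the same variables approach used elsewhere in this guide:

performance_test:
  stage: performance
  image: node:${NODE_VERSION}
  parallel:
    matrix:
      - TEST_NAME: "API_Performance_Test"
        THRESHOLDS: "avgresponse<=150,errors==0"
      - TEST_NAME: "UI_Performance_Test"
        THRESHOLDS: "avgresponse<=300,errors==0"
  script:
    - npm install -g @loadfocus/loadfocus-api-client
    - loadfocus-api config set apikey $LOADFOCUS_API_KEY
    - loadfocus-api config set teamid $LOADFOCUS_TEAM_ID
    - |
      loadfocus-api jmeter run-test \
        --name "${TEST_NAME}" \
        --thresholds "${THRESHOLDS}" \
        --format json > "${TEST_NAME}_results.json"
  artifacts:
    paths:
      - "*_results.json"
    expire_in: 1 week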

To run these performance tests on a recurring schedule, create a pipeline schedule in GitLab (and, if needed, restrict the jobs to scheduled pipelines, as shown after these steps):

  1. Go to your GitLab project
  2. Navigate to CI/CD > Schedules
  3. Click "New schedule"
  4. Set up a schedule (e.g., every day at midnight)
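
If the performance jobs should run only in scheduled pipelines (rather than on every push), you can restrict them with rules and the predefined CI_PIPELINE_SOURCE variable, for example:

performance_test_api:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  # ... rest of the job as defined above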

Creating Performance Reports

Generate HTML reports from JSON results:

performance_test:
  stage: performance
  image: node:${NODE_VERSION}
  script:
    # Run performance test
    - npm install -g @loadfocus/loadfocus-api-client
    - loadfocus-api config set apikey $LOADFOCUS_API_KEY
    - loadfocus-api config set teamid $LOADFOCUS_TEAM_ID
    - |
      loadfocus-api jmeter run-test \
        --name "GitLab_${CI_PROJECT_NAME}_${CI_COMMIT_REF_NAME}" \
        --thresholds "avgresponse<=200,errors==0,p95<=250" \
        --format json > performance_results.json
    # Generate HTML report
    - npm install -g performance-report-generator # Replace with actual report generator
    - performance-report-generator --input performance_results.json --output performance_report.html
  artifacts:
    paths:
      - performance_results.json
      - performance_report.html
    expire_in: 1 week
    when: always

# Optional: Publish report as GitLab Pages
pages:
  stage: deploy
  dependencies:
    - performance_test
  script:
    - mkdir -p public/performance-reports
    - cp performance_report.html public/performance-reports/index.html
  artifacts:
    paths:
      - public
  only:
    - main
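
The performance-report-generator package above is only a placeholder. If you do not have a dedicated report generator, a minimal HTML page can be assembled directly in the job script, for example with jq (which may need to be installed in the image first). The commands below could replace the two report-generator lines in the script; the JSON paths are assumptions about the result format:

# Build a minimal HTML report with jq (assumed JSON shape; adjust paths as needed)
AVG=$(jq '.labels[0].metrics.avgresponse' performance_results.json)
ERRORS=$(jq '.labels[0].metrics.errors' performance_results.json)
cat > performance_report.html <<EOF
<html><body>
  <h1>Performance Results for ${CI_PROJECT_NAME}</h1>
  <p>Average response time: ${AVG} ms</p>
  <p>Errors: ${ERRORS}</p>
</body></html>
EOF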

Integration with GitLab Features

Merge Request Widgets

Display performance test results in merge requests by using GitLab's JUnit report feature:

performance_test:
  stage: performance
  image: node:${NODE_VERSION}
  script:
    # Run performance test
    - npm install -g @loadfocus/loadfocus-api-client
    - loadfocus-api config set apikey $LOADFOCUS_API_KEY
    - loadfocus-api config set teamid $LOADFOCUS_TEAM_ID
    - |
      loadfocus-api jmeter run-test \
        --name "GitLab_${CI_PROJECT_NAME}_${CI_COMMIT_REF_NAME}" \
        --thresholds "avgresponse<=200,errors==0,p95<=250" \
        --format json > performance_results.json
    # Convert to JUnit format (using a custom script)
    - node convert-to-junit.js performance_results.json junit-report.xml
  artifacts:
    reports:
      junit: junit-report.xml
    paths:
      - performance_results.json
    expire_in: 1 week
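
The convert-to-junit.js script referenced above is not part of the LoadFocus API client, so you will need to supply it yourself. Below is a hypothetical minimal sketch; it assumes the result JSON exposes labels[0].metrics (the same shape assumed elsewhere in this guide) and maps two example checks onto JUnit test cases. Adapt the field paths and checks to your actual output.

// convert-to-junit.js — hypothetical sketch; adjust to your real result format
// Usage: node convert-to-junit.js performance_results.json junit-report.xml
const fs = require('fs');

const [inputFile, outputFile] = process.argv.slice(2);
const results = JSON.parse(fs.readFileSync(inputFile, 'utf8'));

// Assumed shape: results.labels[0].metrics.{avgresponse, errors}
const metrics = results.labels?.[0]?.metrics ?? {};

// Example checks mirroring the thresholds used in this guide
const checks = [
  { name: 'avg_response_time_under_200ms', passed: metrics.avgresponse <= 200, value: metrics.avgresponse },
  { name: 'zero_errors', passed: metrics.errors === 0, value: metrics.errors },
];

const failures = checks.filter(c => !c.passed).length;
const testcases = checks.map(c =>
  c.passed
    ? `  <testcase name="${c.name}"/>`
    : `  <testcase name="${c.name}"><failure message="value=${c.value}"/></testcase>`
).join('\n');

const xml = `<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="LoadFocus performance" tests="${checks.length}" failures="${failures}">
${testcases}
</testsuite>
`;

fs.writeFileSync(outputFile, xml);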

GitLab Metrics

Use GitLab's metrics feature to track performance over time:

performance_test:
  stage: performance
  image: node:${NODE_VERSION}
  script:
    # Run performance test
    - npm install -g @loadfocus/loadfocus-api-client
    - loadfocus-api config set apikey $LOADFOCUS_API_KEY
    - loadfocus-api config set teamid $LOADFOCUS_TEAM_ID
    - |
      loadfocus-api jmeter run-test \
        --name "GitLab_${CI_PROJECT_NAME}_${CI_COMMIT_REF_NAME}" \
        --thresholds "avgresponse<=200,errors==0,p95<=250" \
        --format json > performance_results.json
    # Extract metrics and report them to GitLab
    - |
      # Parse JSON and extract metrics
      RESPONSE_TIME=$(jq '.labels[0].metrics.avgresponse' performance_results.json)
      ERROR_RATE=$(jq '.labels[0].metrics.errors' performance_results.json)
      # Report metrics
      echo "performance_avg_response_time ${RESPONSE_TIME}" >> metrics.txt
      echo "performance_error_rate ${ERROR_RATE}" >> metrics.txt
  artifacts:
    reports:
      metrics: metrics.txt
    paths:
      - performance_results.json
    expire_in: 1 week
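
GitLab reads metrics.txt as an OpenMetrics (Prometheus text format) file, so each line is simply a metric name followed by its value. Note that jq is not preinstalled in the node images (you may need to install it in the job), and the metrics report feature may only be available on paid GitLab tiers. The generated file would look roughly like this (values are illustrative):

performance_avg_response_time 187
performance_error_rate 0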

GitLab Environments

Associate performance tests with specific environments:

performance_test_staging:
  stage: performance
  image: node:${NODE_VERSION}
  environment:
    name: staging
  script:
    - npm install -g @loadfocus/loadfocus-api-client
    - loadfocus-api config set apikey $LOADFOCUS_API_KEY
    - loadfocus-api config set teamid $LOADFOCUS_TEAM_ID
    - |
      loadfocus-api jmeter run-test \
        --name "Staging_Performance_Test" \
        --thresholds "avgresponse<=250,errors==0" \
        --format json > performance_results.json
  artifacts:
    paths:
      - performance_results.json
    expire_in: 1 week
  only:
    - staging
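
You can also attach a URL to the environment so the run is linked to the deployment being tested on GitLab's Environments page. A small sketch (the URL is a hypothetical example):

performance_test_staging:
  environment:
    name: staging
    url: https://staging.example.com
  # ... rest of the job as shown above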

Tips for GitLab CI/CD Integration

  1. Caching: Cache npm dependencies to speed up pipeline runs:

    performance_test:
      stage: performance
      image: node:${NODE_VERSION}
      cache:
        key: ${CI_COMMIT_REF_SLUG}
        paths:
          - node_modules/
      script:
        - npm install -g @loadfocus/loadfocus-api-client
        # Rest of the script...
  2. Timeout Settings: Set timeouts for long-running performance tests:

    performance_test:
      stage: performance
      image: node:${NODE_VERSION}
      timeout: 2h # Set a 2-hour timeout
      script:
        # Performance test script...
  3. Manual Triggers: Allow performance tests to be triggered manually:

    performance_test:
      stage: performance
      image: node:${NODE_VERSION}
      script:
        # Performance test script...
      when: manual
  4. Dynamic Test Configuration: Use GitLab predefined variables to dynamically configure tests:

    performance_test:
      stage: performance
      image: node:${NODE_VERSION}
      script:
        - |
          # Set thresholds based on branch
          if [ "$CI_COMMIT_REF_NAME" = "main" ]; then
            THRESHOLDS="avgresponse<=200,errors==0,p95<=250"
          else
            THRESHOLDS="avgresponse<=300,errors==0,p95<=500"
          fi
          loadfocus-api jmeter run-test \
            --name "GitLab_${CI_PROJECT_NAME}_${CI_COMMIT_REF_NAME}" \
            --thresholds "$THRESHOLDS" \
            --format json > performance_results.json
  5. Notifications: Send notifications when performance tests fail:

    performance_test:
      stage: performance
      image: node:${NODE_VERSION}
      script:
        # Performance test script...
      after_script:
        - |
          # after_script runs in its own shell, so $? does not reflect the main
          # script's result; use the predefined CI_JOB_STATUS variable instead
          if [ "$CI_JOB_STATUS" = "failed" ]; then
            # Send notification using curl, the GitLab API, or another method
            curl -X POST -H "Content-Type: application/json" \
              -d "{\"text\":\"Performance test failed for ${CI_PROJECT_NAME}\"}" \
              "$WEBHOOK_URL"
          fi

For more information, refer to the GitLab CI/CD documentation and the LoadFocus API Client documentation.