Introduction
Load testing is crucial for ensuring your applications can handle expected load volumes. In this guide, we'll set up a complete load testing environment using k6 for testing, Prometheus for metrics collection, and Grafana for visualization—all orchestrated with Docker.
Although there are paid versions of these products, this guide will focus exclusively on a basic setup with their open source Docker images.
Prerequisites
- Docker and Docker Compose installed
- Basic understanding of load testing concepts
- Familiarity with Docker
Architecture Overview
Our setup consists of four main components:
- k6: Executes load tests and exports metrics
- Sample API: A simple API-based application to test
- Prometheus: Collects and stores metrics from k6
- Grafana: Visualizes metrics from Prometheus
Each component runs in its own Docker container, four in total. Here's how they interact:
Data Flow:
- Load Generation: our k6 script sends HTTP requests to the Sample API to simulate user traffic
- Metrics Export: as the test runs, performance metrics from k6 are exported to Prometheus via remote write
- Data Query: Grafana uses PromQL to query Prometheus for metrics
All components run within the same Docker network, enabling seamless communication between services.
Project Structure
k6-prometheus-grafana/
├── docker-compose.yml
├── prometheus/
│   └── prometheus.yml
├── grafana/
│   └── dashboards/
│       └── k6-dashboard.json
├── k6/
│   └── script.js
└── sample-api/
    ├── Dockerfile
    └── server.js
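If you want to scaffold this layout up front, a couple of shell commands will do it (the directory names simply mirror the tree above; the files themselves are created in the following steps):

mkdir -p k6-prometheus-grafana/{prometheus,grafana/dashboards,k6,sample-api}
cd k6-prometheus-grafana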
Step 1: Create the Sample API
First, let's create a simple Node.js API to test against:
sample-api/server.js
const express = require('express');
const app = express();

app.get('/health', (req, res) => {
  res.json({ status: 'healthy', timestamp: new Date().toISOString() });
});

app.get('/api/users/:id', (req, res) => {
  const { id } = req.params;
  // Simulate some processing delay
  setTimeout(() => {
    res.json({ id, name: `User ${id}`, timestamp: new Date().toISOString() });
  }, Math.random() * 100);
});

app.post('/api/users', (req, res) => {
  // Simulate user creation
  setTimeout(() => {
    res.status(201).json({
      id: Math.floor(Math.random() * 1000),
      message: 'User created successfully'
    });
  }, Math.random() * 200);
});

app.listen(3000, () => {
  console.log('Server running on port 3000');
});
Next, add a Dockerfile that installs Express and starts the app:
sample-api/Dockerfile
FROM node:16-alpine
WORKDIR /app
RUN npm init -y && npm install express
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Step 2: Create k6 Test Script
This JavaScript test script defines how k6 will interact with our sample API during the load test.
k6/script.js
import http from 'k6/http';
import { check, sleep } from 'k6';
import { Rate, Counter, Trend } from 'k6/metrics';

// Custom metrics - these allow us to track specific aspects of our test
export const errorRate = new Rate('errors');           // Tracks percentage of errors
export const myCounter = new Counter('my_counter');    // Simple incrementing counter
export const responseTime = new Trend('response_time'); // Tracks response time distribution

export const options = {
  stages: [
    { duration: '30s', target: 5 },  // Ramp up to 5 virtual users over 30 seconds
    { duration: '90s', target: 20 }, // Ramp from 5 to 20 virtual users over 90 seconds
    { duration: '3m', target: 20 },  // Stay at 20 virtual users for 3 minutes
    { duration: '30s', target: 0 },  // Gradually ramp down to 0 over 30 seconds
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must complete in less than 500ms for the test to pass
    http_req_failed: ['rate<0.1'],    // Test fails if more than 10% of requests fail
  },
};

export default function () {
  const baseUrl = 'http://sample-api:3000';

  // Test GET endpoint - fetches a random user
  let getResponse = http.get(`${baseUrl}/api/users/${Math.floor(Math.random() * 100)}`);
  check(getResponse, {
    'GET status is 200': (r) => r.status === 200,
    'GET response time < 500ms': (r) => r.timings.duration < 500,
  });

  // Track custom metrics for this request
  errorRate.add(getResponse.status !== 200);
  responseTime.add(getResponse.timings.duration);
  myCounter.add(1);

  sleep(1); // Pause for 1 second between requests

  // Test POST endpoint - creates a new user
  let postResponse = http.post(`${baseUrl}/api/users`, JSON.stringify({
    name: `TestUser_${Date.now()}`,
    email: `test_${Date.now()}@example.com`
  }), {
    headers: { 'Content-Type': 'application/json' },
  });
  check(postResponse, {
    'POST status is 201': (r) => r.status === 201,
    'POST response time < 1000ms': (r) => r.timings.duration < 1000,
  });

  errorRate.add(postResponse.status !== 201);
  myCounter.add(1);
  sleep(1);
}
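The staged profile above runs for roughly 5.5 minutes in total (30s + 90s + 3m + 30s). If you want to sanity-check that the script parses and the options resolve as expected before running a full test, k6's inspect command can do that; here it's invoked through the official image with the local k6/ directory mounted:

docker run --rm -v "$PWD/k6:/scripts" grafana/k6 inspect /scripts/script.js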
Step 3: Configure Prometheus
Prometheus is an open-source monitoring and alerting toolkit that collects and stores time-series metrics. The configuration below sets up Prometheus to scrape its own metrics and defines a scrape job for k6. Note that in this setup k6's metrics are delivered to Prometheus via remote write (configured in Step 5), so the k6 scrape job below is optional.
prometheus/prometheus.yml
global:
  scrape_interval: 15s      # How frequently to scrape targets by default
  evaluation_interval: 15s  # How frequently to evaluate rules

scrape_configs:
  - job_name: 'prometheus'  # Self-monitoring configuration
    static_configs:
      - targets: ['localhost:9090']  # Prometheus's own metrics endpoint

  - job_name: 'k6'          # Configuration to scrape k6 metrics
    static_configs:
      - targets: ['k6:6565']  # k6's metrics endpoint (using Docker service name)
    scrape_interval: 5s     # More frequent scraping for k6 during tests
    metrics_path: /metrics  # Path where metrics are exposed
Once Prometheus is collecting metrics, we'll be able to query this data directly or visualize it through Grafana in the next steps.
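For example, once the stack is running (Step 6 onward) you can hit Prometheus's query API directly to confirm that k6 metrics are arriving. The metric name below assumes the k6_ prefix and _total counter suffix that k6's Prometheus remote-write output uses; adjust it if your k6 version names metrics differently:

curl 'http://localhost:9090/api/v1/query?query=k6_http_reqs_total'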
Step 4: Grafana Dashboard Configuration
Create a dashboard provisioning file for automatic setup:
grafana/dashboards/dashboard.yml
apiVersion: 1

providers:
  - name: 'default'
    orgId: 1
    folder: ''
    type: file
    disableDeletion: false
    editable: true
    options:
      path: /etc/grafana/provisioning/dashboards
Step 5: Docker Compose Configuration
docker-compose.yml
services:
  # Sample API service to be load tested by k6
  sample-api:
    build: ./sample-api
    ports:
      - "3000:3000"  # Exposes API on localhost:3000
    networks:
      - k6-net

  # Prometheus for metrics collection
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"  # Prometheus UI available at localhost:9090
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml  # Custom config
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--web.enable-lifecycle'              # Allows config reloads without restart
      - '--web.enable-remote-write-receiver'  # Enables remote write endpoint for k6
    networks:
      - k6-net

  # Grafana for dashboarding and visualization
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3001:3000"  # Grafana UI available at localhost:3001
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin  # Default admin password
    volumes:
      - grafana-storage:/var/lib/grafana  # Persistent storage for Grafana data
      - ./grafana/dashboards:/etc/grafana/provisioning/dashboards  # Pre-provisioned dashboards
    networks:
      - k6-net
    depends_on:
      - prometheus  # Waits for Prometheus to be ready

  # k6 load testing tool with Prometheus remote write output
  k6:
    image: grafana/k6:latest
    container_name: k6
    ports:
      - "6565:6565"
    environment:
      - K6_PROMETHEUS_RW_SERVER_URL=http://prometheus:9090/api/v1/write  # Prometheus remote write endpoint
      - K6_PROMETHEUS_RW_TREND_STATS=p(95),p(99),min,max  # Custom trend stats
    volumes:
      - ./k6:/scripts  # Mounts local k6 scripts
    command: run --out experimental-prometheus-rw /scripts/script.js  # Runs the main k6 script
    networks:
      - k6-net
    depends_on:
      - sample-api
      - prometheus

volumes:
  grafana-storage:  # Named volume for Grafana data

networks:
  k6-net:
    driver: bridge  # Isolated network for all services
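It can be worth validating the Compose file before bringing anything up; docker-compose config parses it and prints the resolved configuration, or an error if the YAML is malformed:

docker-compose config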
Step 6: Start the stack
- Start all services:
docker-compose up -d
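Once the containers are up, a quick way to check that each piece is reachable (ports as mapped in the Compose file; the readiness and health endpoints below are the standard ones for Prometheus and Grafana):

docker-compose ps                          # all services should be listed
curl http://localhost:3000/health          # sample API
curl -s http://localhost:9090/-/ready      # Prometheus readiness check
curl -s http://localhost:3001/api/health   # Grafana health check

Note that the k6 container starts running the test script as soon as the stack comes up and exits when the script finishes; Step 8 shows how to kick off additional runs.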
Step 7: Setting Up a Pre-built K6 Dashboard
- Access Grafana: Navigate to http://localhost:3001
- Login: Use admin/admin (you'll be prompted to change the password)
- Add Prometheus Data Source First:
- Go to Configuration → Data Sources
- Click “Add data source”
- Select “Prometheus”
- Set URL to:
http://prometheus:9090
- Click “Save & Test”
- Import K6 Dashboard:
- Click the “+” icon in the left sidebar
- Select “Import”
- Use one of these dashboard IDs for Prometheus:
- 19665 – K6 Prometheus (recommended)
- 10660 – K6 Load Testing Results (Prometheus)
- 19634 – K6 Performance Test Dashboard
- Click “Load”
- Select your Prometheus data source
- Click “Import”
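If you prefer the command line, you can also confirm the data source was registered through Grafana's HTTP API, using the admin credentials from the Compose file (adjust if you changed the password at first login):

curl -s -u admin:admin http://localhost:3001/api/datasources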
Step 8: Run the load test
- Run the k6 test:
docker-compose run --rm k6 run --out experimental-prometheus-rw /scripts/script.js
As the test runs, k6 will send API requests to the sample API, and metrics will be collected and sent to Prometheus. You can monitor the test progress in the terminal.
Step 9: Monitor your test run in Grafana
Navigate to Grafana at http://localhost:3001 and select your dashboard from the left nav. You can then monitor your test in real time with the Grafana dashboard.
Cleanup
Stop and remove all containers and volumes:
docker-compose down -v
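If you also want to remove the locally built sample-api image along with the containers and volumes, Compose can do that in one go:

docker-compose down -v --rmi local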
Conclusion
The point of this post was to raise awareness of the open source options available to you and to encourage you to consider k6 for load testing. I skimmed over a lot of detail and explanation about k6, Prometheus, and Grafana, and will likely fill that in with future posts. Until then, this setup provides a complete observability stack for k6 load testing.
The Docker-based approach ensures consistency across environments and makes it easy to integrate into CI/CD pipelines. And FYI, you can find all the code from this blog post here.
Thanks for reading and let me know if you have any questions or suggestions for future posts!