Comprehensive performance benchmarking suite comparing Go web frameworks, with atomic, deterministic, and resumable test execution.
- Overview
- Framework Comparison
- Benchmark Scenarios
- Test Environment
- Results
- Quick Start
- Configuration
- Documentation
- Contributing
This repository contains a comprehensive benchmarking suite designed to evaluate the performance of Go web frameworks with a focus on atomic, deterministic, and resumable test execution. Our goal is to provide accurate, reproducible, and meaningful performance comparisons across various real-world scenarios.
| Framework | Version | Description |
|---|---|---|
| GoFlash | Latest | High-performance, minimalist Go web framework |
| Gin | Latest | Fast HTTP web framework with a Martini-like API |
| Fiber | v2.52.0 | Express-inspired web framework built on Fasthttp |
| Echo | v4.11.4 | High-performance, extensible, minimalist Go web framework |
| Chi | v5.0.11 | Lightweight, expressive, and scalable HTTP router |
- GoFlash: Optimized for speed with minimal overhead
- Gin: Battle-tested with excellent middleware ecosystem
- Fiber: Express.js-like API with high performance
- Echo: High performance with extensible middleware
- Chi: Lightweight and expressive routing
Each framework excels in different scenarios, making this benchmark crucial for informed decision-making in your next Go project.
Our benchmark suite covers 9 comprehensive scenarios that represent common web application patterns:
Click to expand scenario details
| # | Scenario | Description | Real-world Impact |
|---|---|---|---|
| 1 | Simple Ping/Pong | Basic endpoint response | Foundation performance |
| 2 | URL Path Parameter | Dynamic route parsing | RESTful API endpoints |
| 3 | Request Context | Context read/write operations | State management |
| 4 | JSON Binding | Request deserialization + validation | API data processing |
| 5 | Wildcard Routing | Trailing wildcard route matching | File serving, catch-all routes |
| 6 | Route Groups | Basic route organization | API versioning |
| 7 | Deep Route Groups | 10-level nested groups | Complex routing hierarchies |
| 8 | Single Middleware | Basic middleware processing | Authentication, logging |
| 9 | Middleware Chain | 10-middleware processing chain | Complex request pipelines |
- Machine: Apple MacBook Pro (M3 chip)
- Memory: 32 GB RAM
- Architecture: ARM64
- Load Generator: wrk HTTP benchmarking tool
- Threads: 4 concurrent threads
- Connections: 50 concurrent connections
- Protocol: HTTP/1.1 with keep-alive
- Functionally equivalent handlers across all frameworks
- Production/release build settings enabled
- Consistent routing patterns and middleware implementation
- Multiple test runs for statistical significance
- Isolated server processes to prevent interference
- Atomic and deterministic test execution
- Resume capability from failed runs
Note: Results are indicative and may vary with workload, configuration, and environment. Always benchmark your own use case before deciding.
Complete dataset available: detailed CSV files and additional metrics can be found in the `results/2025-08-26/` directory.
Our comprehensive benchmarks reveal significant performance differences across frameworks and scenarios. Below are the key findings from 54 total benchmark tests:
| Rank | Framework | Avg RPS | Min RPS | Max RPS | Tests | Performance |
|---|---|---|---|---|---|---|
| #1 | Fiber v3 | 283,816 | 240,999 | 303,030 | 9 | Excellent |
| #2 | Fiber | 280,118 | 250,018 | 290,845 | 9 | Very Good |
| #3 | Gin | 221,379 | 197,620 | 232,125 | 9 | Good |
| #4 | Chi | 220,362 | 200,062 | 235,505 | 9 | Baseline |
| #5 | Echo | 215,519 | 196,234 | 231,230 | 9 | Baseline |
| #6 | GoFlash | 212,779 | 162,382 | 225,325 | 9 | Baseline |
| Rank | Framework | Avg RPS | Performance vs Leader |
|---|---|---|---|
| #1 | Fiber v3 | 303,030 | 100% (Leader) |
| #2 | Fiber | 280,031 | 92.4% of leader |
| #3 | Chi | 235,505 | 77.7% of leader |
| #4 | Gin | 232,125 | 76.6% of leader |
| #5 | GoFlash | 225,325 | 74.4% of leader |
| #6 | Echo | 219,567 | 72.5% of leader |

| Rank | Framework | Avg RPS | Performance vs Leader |
|---|---|---|---|
| #1 | Fiber v3 | 293,635 | 100% (Leader) |
| #2 | Fiber | 280,867 | 95.7% of leader |
| #3 | Gin | 223,626 | 76.2% of leader |
| #4 | GoFlash | 222,000 | 75.6% of leader |
| #5 | Chi | 221,712 | 75.5% of leader |
| #6 | Echo | 214,485 | 73.0% of leader |

| Rank | Framework | Avg RPS | Performance vs Leader |
|---|---|---|---|
| #1 | Fiber v3 | 283,458 | 100% (Leader) |
| #2 | Fiber | 282,358 | 99.6% of leader |
| #3 | Gin | 218,490 | 77.1% of leader |
| #4 | Chi | 216,871 | 76.5% of leader |
| #5 | GoFlash | 214,172 | 75.6% of leader |
| #6 | Echo | 208,281 | 73.5% of leader |

| Rank | Framework | Avg RPS | Performance vs Leader |
|---|---|---|---|
| #1 | Fiber v3 | 289,274 | 100% (Leader) |
| #2 | Fiber | 283,202 | 97.9% of leader |
| #3 | Echo | 231,230 | 79.9% of leader |
| #4 | Chi | 228,242 | 78.9% of leader |
| #5 | Gin | 225,711 | 78.0% of leader |
| #6 | GoFlash | 219,170 | 75.8% of leader |

| Rank | Framework | Avg RPS | Performance vs Leader |
|---|---|---|---|
| #1 | Fiber v3 | 292,412 | 100% (Leader) |
| #2 | Fiber | 288,075 | 98.5% of leader |
| #3 | Gin | 226,190 | 77.4% of leader |
| #4 | GoFlash | 219,780 | 75.2% of leader |
| #5 | Echo | 214,402 | 73.3% of leader |
| #6 | Chi | 211,424 | 72.3% of leader |

| Rank | Framework | Avg RPS | Performance vs Leader |
|---|---|---|---|
| #1 | Fiber | 250,018 | 100% (Leader) |
| #2 | Fiber v3 | 240,999 | 96.4% of leader |
| #3 | Chi | 200,062 | 80.0% of leader |
| #4 | Gin | 197,620 | 79.0% of leader |
| #5 | Echo | 196,234 | 78.5% of leader |
| #6 | GoFlash | 162,382 | 64.9% of leader |

| Rank | Framework | Avg RPS | Performance vs Leader |
|---|---|---|---|
| #1 | Fiber | 286,427 | 100% (Leader) |
| #2 | Fiber v3 | 282,417 | 98.6% of leader |
| #3 | Gin | 228,309 | 79.7% of leader |
| #4 | Chi | 221,085 | 77.2% of leader |
| #5 | Echo | 220,603 | 77.0% of leader |
| #6 | GoFlash | 219,793 | 76.7% of leader |

| Rank | Framework | Avg RPS | Performance vs Leader |
|---|---|---|---|
| #1 | Fiber v3 | 284,418 | 100% (Leader) |
| #2 | Fiber | 279,242 | 98.2% of leader |
| #3 | Gin | 226,506 | 79.6% of leader |
| #4 | Chi | 226,059 | 79.5% of leader |
| #5 | GoFlash | 219,438 | 77.2% of leader |
| #6 | Echo | 212,276 | 74.6% of leader |

| Rank | Framework | Avg RPS | Performance vs Leader |
|---|---|---|---|
| #1 | Fiber | 290,845 | 100% (Leader) |
| #2 | Fiber v3 | 284,705 | 97.9% of leader |
| #3 | Echo | 222,594 | 76.5% of leader |
| #4 | Chi | 222,302 | 76.4% of leader |
| #5 | Gin | 213,838 | 73.5% of leader |
| #6 | GoFlash | 212,948 | 73.2% of leader |
Simple Ping/Pong Endpoint
Test: Basic HTTP GET response without any processing
Key Insights:
- Foundation performance comparison
- Measures framework overhead
- Critical for high-throughput applications
Results: CSV Data
URL Path Parameter Extraction
Test: Dynamic route matching and parameter extraction (/user/:id)
Key Insights:
- RESTful API performance
- Router efficiency comparison
- Path parsing overhead analysis
Results: CSV Data
Request Context Operations
Test: Writing to and reading from request context
Key Insights:
- Context management efficiency
- State preservation performance
- Middleware communication overhead
Results: CSV Data
JSON Binding & Validation
Test: JSON request deserialization with struct binding and validation
Key Insights:
- API data processing performance
- Serialization/deserialization efficiency
- Validation overhead impact
Results: CSV Data
Wildcard Route Parsing
Test: Trailing wildcard route matching (/files/*path)
Key Insights:
- File serving performance
- Catch-all route efficiency
- Dynamic path handling
Results: CSV Data
Route Groups
Test: Basic route group organization (/api/v1/users)
Key Insights:
- API organization efficiency
- Group routing overhead
- Nested structure performance
Results: CSV Data
Deep Route Groups (10 Levels)
Test: Complex nested route groups (/g1/g2/.../g10/endpoint)
Key Insights:
- Complex routing hierarchy performance
- Deep nesting overhead
- Scalability under complex structures
Results: CSV Data
Single Middleware
Test: Basic middleware processing (e.g., request logging)
Key Insights:
- Middleware overhead analysis
- Basic processing pipeline performance
- Authentication/logging impact
Results: CSV Data
Middleware Chain (10 Middlewares)
Test: Complex middleware chain with 10 sequential middlewares
Key Insights:
- Complex pipeline performance
- Cumulative middleware overhead
- Enterprise-grade processing chains
Results: CSV Data
| Framework | Port | Optimization |
|---|---|---|
| GoFlash | :17780 | Production mode |
| Gin | :17781 | Release mode |
| Fiber | :17782 | Production settings |
| Echo | :17783 | Production mode |
| Chi | :17784 | Release mode |
Get up and running with the benchmark suite in minutes! Follow these step-by-step instructions:
- Go 1.21+ installed and configured
- wrk HTTP benchmarking tool
- macOS/Linux environment (recommended)
Installing Prerequisites
```shell
# macOS
brew install wrk

# Debian/Ubuntu
sudo apt-get install wrk
```

```shell
# Build all framework servers
./benchmark build
```

This command will:

- Download dependencies for all frameworks
- Compile optimized production builds
- Place executables in the `build/` directory
```shell
# High-volume load testing (1M requests, 10 batches for statistical significance)
go run ./cmd run --requests 1000000 --connections 100 --batches 10

# Duration-based testing (1 minute per test scenario)
go run ./cmd run --duration 1m --connections 50 --batches 3

# Full benchmark suite (recommended for comprehensive analysis)
go run ./cmd run --requests 10000 --connections 50 --batches 3

# Quick test (faster execution for development)
go run ./cmd run --requests 1000 --connections 10 --batches 1

# Custom framework and scenario selection
go run ./cmd run --duration 30s --frameworks flash,gin,gofiber --scenarios simple,json,param

# Specific test configuration examples
go run ./cmd run --requests <requests> --connections <connections> --batches <batches>
go run ./cmd run --duration <duration> --connections <connections> --batches <batches>
```

Parameters:

- `--requests`: Total number of requests per scenario (use `0` for duration-based testing)
- `--duration`: Test duration per scenario (e.g., `30s`, `1m`, `5m`)
- `--connections`: Number of concurrent connections
- `--batches`: Number of test batches for statistical significance
- `--frameworks`: Comma-separated list of frameworks to test (e.g., `flash,gin,gofiber`)
- `--scenarios`: Comma-separated list of scenarios to run (e.g., `simple,json,param`)
After running benchmarks, you'll find detailed results in the results/ directory:
```
results/
├── 2025-08-26/            # Date-based results directory
│   ├── summary.csv        # Comprehensive comparison data
│   ├── parts/             # Individual framework results
│   ├── raw/               # Raw benchmark outputs
│   └── images/            # Generated charts
└── previous-runs/         # Historical results
```
Optimization Recommendations
- Close unnecessary applications to reduce system noise
- Run multiple batches for statistical significance
- Use consistent system load across test runs
- Monitor system resources during benchmarks
- Light testing: `--requests 1000 --connections 10`
- Standard testing: `--requests 10000 --connections 50`
- Heavy testing: `--requests 100000 --connections 100`
```shell
# Increase file descriptor limit (if needed)
ulimit -n 65536

# Check current limits
ulimit -a
```

| Framework | Port | Health Check | Base URL |
|---|---|---|---|
| GoFlash | 17780 | `GET /ping` | `http://localhost:17780` |
| Gin | 17781 | `GET /ping` | `http://localhost:17781` |
| Fiber | 17782 | `GET /ping` | `http://localhost:17782` |
| Echo | 17783 | `GET /ping` | `http://localhost:17783` |
| Chi | 17784 | `GET /ping` | `http://localhost:17784` |
Each server implements the following endpoints for benchmarking:
```
GET  /ping                  # Simple ping/pong
GET  /param/:id             # URL parameter extraction
GET  /context               # Request context operations
POST /json                  # JSON binding & validation
GET  /wildcard/*path        # Wildcard route parsing
GET  /api/v1/group/ping     # Basic route group
GET  /g1/g2/.../g10/ping    # Deep nested groups (10 levels)
GET  /mw/ping               # Single middleware
GET  /mw10/ping             # 10-middleware chain
```
Customize benchmark execution with these parameters:
| Parameter | Description | Default | Recommended Range |
|---|---|---|---|
| `--requests` | Total requests per test | `10000` | 1K - 100K |
| `--connections` | Concurrent connections | `50` | 10 - 200 |
| `--batches` | Number of test batches | `3` | 1 - 10 |
| `--tool` | Benchmark tool | `wrk` | `wrk` or `ab` |
The benchmark suite generates multiple output formats:
- CSV Data: Raw performance metrics for analysis
- Summary Reports: Aggregated results across scenarios
- Detailed Logs: Individual test execution details
- Organized Structure: Date-based result directories
This benchmark suite is designed with modularity, atomicity, and accuracy in mind:
```
go-web-benchmarks/
├── cmd/                 # Command-line interface
├── internal/            # Core framework logic
│   ├── config/          # Configuration management
│   ├── progress/        # Progress tracking
│   ├── runner/          # Benchmark execution
│   └── types/           # Data structures
├── frameworks/          # Framework implementations
│   ├── flash/           # GoFlash implementation
│   ├── gin/             # Gin framework implementation
│   ├── gofiber/         # Fiber framework implementation
│   ├── echo/            # Echo framework implementation
│   └── chi/             # Chi framework implementation
├── results/             # Performance data and charts
├── config.yaml          # YAML configuration
└── README.md            # This documentation
```
Our approach ensures fair and accurate comparisons:
- Equivalent Implementations: Each endpoint performs identical operations across frameworks
- Production Settings: All servers run in optimized production mode
- Isolated Processes: Frameworks run in separate processes to prevent interference
- Statistical Validity: Multiple test batches ensure reliable results
- Resource Monitoring: System resource usage tracked during tests
- Atomic Execution: Tests are atomic and can be resumed from failures
- Deterministic Results: Consistent execution environment and parameters
- RPS (Requests Per Second): Primary performance indicator
- Latency Distribution: Response time characteristics
- Memory Usage: Resource consumption patterns
- CPU Utilization: Processing efficiency
- Router Efficiency: How quickly routes are matched and resolved
- Middleware Overhead: Processing cost of request/response pipeline
- Memory Allocation: Garbage collection and memory management impact
- Serialization Speed: JSON encoding/decoding performance
Production-Level Load Testing Examples
```shell
# Ultimate stress test - 1 million requests per scenario, 10 statistical batches
go run ./cmd run --requests 1000000 --connections 100 --batches 10

# High-volume with all frameworks and scenarios (full comprehensive test)
go run ./cmd run --requests 1000000 --connections 200 --batches 10 --frameworks flash,gin,gofiber,echo,chi --scenarios simple,param,context,json,wildcard,groups,deepgroups,middleware,mw10

# Memory-intensive JSON processing test
go run ./cmd run --requests 500000 --connections 50 --batches 5 --scenarios json
```

```shell
# 1-minute duration tests with statistical significance
go run ./cmd run --duration 1m --connections 50 --batches 3

# Extended duration testing for stability analysis
go run ./cmd run --duration 5m --connections 100 --batches 5

# Quick 1-minute validation across selected scenarios
go run ./cmd run --duration 1m --connections 25 --batches 1 --scenarios simple,json,param
```

```shell
# Progressive connection scaling
go run ./cmd run --duration 30s --connections 10 --batches 3   # Light load
go run ./cmd run --duration 30s --connections 50 --batches 3   # Medium load
go run ./cmd run --duration 30s --connections 200 --batches 3  # Heavy load
go run ./cmd run --duration 30s --connections 500 --batches 3  # Extreme load

# Framework comparison under different loads
go run ./cmd run --requests 100000 --connections 50 --frameworks flash,gin,gofiber
go run ./cmd run --requests 100000 --connections 200 --frameworks flash,gin,gofiber
```

The benchmark suite supports resuming from failed runs:
```shell
# Resume from last failed run
./benchmark run --resume
```

Test specific frameworks only:
```shell
# Test only GoFlash and Gin
go run ./cmd run --frameworks flash,gin

# Compare selected frameworks with extra batches
go run ./cmd run --duration 1m --frameworks flash,gin,gofiber --batches 5
```

Test specific scenarios only:
```shell
# Test only simple and JSON scenarios
go run ./cmd run --scenarios simple,json

# Focus on API-heavy scenarios
go run ./cmd run --duration 1m --scenarios json,param,context --batches 3

# Test routing performance
go run ./cmd run --requests 50000 --scenarios simple,param,wildcard,groups,deepgroups
```

Override configuration parameters:
```shell
# Use ApacheBench instead of wrk
./benchmark run --tool ab

# Custom test duration
./benchmark run --duration 60s
```

We welcome contributions to improve the benchmark suite! Here's how you can help:
- Bug Reports: Use the GitHub issue tracker
- Feature Requests: Suggest new frameworks or scenarios
- Performance Issues: Report unexpected results
- Create Framework Directory: Add the implementation in `frameworks/`
- Update Configuration: Add the framework to `config.yaml`
- Implement Endpoints: Ensure all test scenarios are covered
- Test Thoroughly: Run benchmarks to verify results
- Define Scenario: Add it to the `scenarios` section of `config.yaml`
- Implement Handlers: Add endpoints to all frameworks
- Update Documentation: Document the new scenario
- Test Validation: Ensure consistent behavior across frameworks
```shell
# Run all tests
go test ./...

# Run specific package tests
go test ./internal/config
go test ./internal/runner
```

- Follow Go conventions and best practices
- Add comprehensive documentation
- Include unit tests for new functionality
- Ensure atomic and deterministic behavior
This project is licensed under the MIT License - see the LICENSE file for details.
Made with ❤️ for the Go community
Accurate, reproducible, and meaningful performance benchmarks