dibyo10/go-taskqueue-workers-api
Go Task Queue API

A minimal asynchronous job processing service built in Go using:

  • Worker pools
  • Channels
  • Background task execution
  • HTTP routing with net/http
  • In-memory state management

This project demonstrates practical concurrency patterns in Go, including channel-based task queues and result processing pipelines.


Architecture Overview

The system consists of:

  • HTTP API layer
  • Task queue (channel)
  • Worker pool (3 concurrent workers)
  • Result listener
  • In-memory task store

Flow

  1. Client submits a task via POST /tasks
  2. Task is pushed into a channel-based queue
  3. Workers consume tasks concurrently
  4. Each task is processed asynchronously
  5. Results are sent to a result channel
  6. Result listener updates the task store
  7. Client fetches task status via GET /tasks/{id}

This design separates three concerns:

  • Request handling
  • Background processing
  • Result persistence

Endpoints

Health Check

GET /health

Response:

200 OK
Level 2 running

Create Task

POST /tasks

Request Body:

{
  "Input": "hello world"
}

Response:

"Task1created"

Get Task Status

GET /tasks/{id}

Response:

{
  "id": 1,
  "input": "hello world",
  "status": "completed",
  "result": "HELLO WORLD"
}

Concurrency Model

This service uses:

  • Unbuffered channels for task dispatch
  • Worker pool pattern
  • Result fan-in channel
  • Goroutines for parallel execution

Workers block on the task queue until work arrives. The result listener serializes updates into the shared task store.


Core Concepts Demonstrated

  • Goroutines
  • Channels (producer-consumer pattern)
  • Worker pool implementation
  • Background processing in web servers
  • JSON encoding/decoding
  • RESTful routing using Go 1.22 pattern matching
  • Synchronization using sync.WaitGroup

Running the Project

go run main.go

Server starts on:

http://localhost:8080
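Once the server is up, the endpoints above can be exercised with curl (a hypothetical session; the exact status returned depends on how quickly a worker picks up the task):

```shell
# Health check
curl http://localhost:8080/health

# Submit a task (body matches the Create Task example above)
curl -X POST http://localhost:8080/tasks \
  -H "Content-Type: application/json" \
  -d '{"Input": "hello world"}'

# Poll the task's status by ID
curl http://localhost:8080/tasks/1
```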

Limitations (Intentional for Learning)

This is an in-memory system and does not include:

  • Persistent storage
  • Graceful shutdown handling
  • Task retries
  • Backpressure control
  • Rate limiting
  • Distributed scaling
  • Context cancellation
  • Mutex protection for shared map

It is designed as a concurrency learning project, not a production-ready system.


Possible Extensions

If extending this into a production-grade system, consider adding:

  • Mutex protection for shared state
  • Buffered channels
  • Graceful shutdown with context
  • Persistent database storage
  • Retry logic and failure states
  • Task timeout handling
  • Metrics and observability
  • Idempotent task submission
  • Structured logging
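As a starting point for the first extension, the shared map can be wrapped in a store guarded by sync.RWMutex. A minimal sketch, assuming a Task type like the one returned by GET /tasks/{id}:

```go
// A mutex-guarded task store: Set takes the write lock, Get takes a
// read lock, so concurrent handlers and workers can share it safely.
package main

import (
	"fmt"
	"sync"
)

type Task struct {
	ID     int
	Input  string
	Status string
	Result string
}

type TaskStore struct {
	mu    sync.RWMutex
	tasks map[int]Task
}

func NewTaskStore() *TaskStore {
	return &TaskStore{tasks: make(map[int]Task)}
}

func (s *TaskStore) Set(t Task) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.tasks[t.ID] = t
}

func (s *TaskStore) Get(id int) (Task, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	t, ok := s.tasks[id]
	return t, ok
}

func main() {
	s := NewTaskStore()
	s.Set(Task{ID: 1, Input: "hello world", Status: "completed", Result: "HELLO WORLD"})
	if t, ok := s.Get(1); ok {
		fmt.Println(t.Result) // HELLO WORLD
	}
}
```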

About

Concurrent in-memory job processing API in Go using worker pools, channels, and background task execution.
