Because sometimes your messages need a queue, and your queues need a REST API.
pgmq-rest provides a REST API for PGMQ (PostgreSQL Message Queue), making it easy to integrate message queues into your applications.
The fastest way to get started is using Docker Compose:
```sh
cd example
docker-compose up -d
```
To stop the stack, run `docker-compose down` from the same directory.
Here's a quick example of how to use the API:
```sh
# Send a single message
curl -X POST http://localhost:8080/api/v1/send \
  -H "Content-Type: application/json" \
  -d '{"queue_name": "my_queue", "msg": {"task": "process_data"}}'
# Response: [123]  # Returns the message ID

# Send multiple messages
curl -X POST http://localhost:8080/api/v1/send_batch \
  -H "Content-Type: application/json" \
  -d '{"queue_name": "my_queue", "msgs": [{"task": "process_data"}, {"task": "process_data"}]}'
# Response: [123, 124]  # Returns message IDs for each message

# Read messages
curl -X POST http://localhost:8080/api/v1/read \
  -H "Content-Type: application/json" \
  -d '{"queue_name": "my_queue", "vt": 30, "qty": 1}'
# Response: [["123", 1, "2024-04-15T12:00:00Z", "2024-04-15T12:00:30Z", {"task": "process_data"}, {}]]
# Format: [msg_id, read_ct, enqueued_at, vt, message, headers]

# Read messages with polling
curl -X POST http://localhost:8080/api/v1/read_with_poll \
  -H "Content-Type: application/json" \
  -d '{"queue_name": "my_queue", "vt": 30, "qty": 1}'
# Response: [["123", 1, "2024-04-15T12:00:00Z", "2024-04-15T12:00:30Z", {"task": "process_data"}, {}]]
# Format: [msg_id, read_ct, enqueued_at, vt, message, headers]
```
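Read responses are positional arrays, which are easy to misindex in client code. A small helper can unpack a row into named fields; the field names below follow the `Format:` comment in the examples above, but the `ReadRow` type and `parseReadRow` function are illustrative sketches, not part of pgmq-rest.

```typescript
// Shape of one row in a /read or /read_with_poll response:
// [msg_id, read_ct, enqueued_at, vt, message, headers]
type ReadRow = [string, number, string, string, Record<string, unknown>, Record<string, unknown>];

interface QueueMessage {
  msgId: string;
  readCt: number;
  enqueuedAt: Date;
  vt: Date;
  message: Record<string, unknown>;
  headers: Record<string, unknown>;
}

// Unpack the positional tuple into named fields, parsing timestamps.
function parseReadRow(row: ReadRow): QueueMessage {
  const [msgId, readCt, enqueuedAt, vt, message, headers] = row;
  return {
    msgId,
    readCt,
    enqueuedAt: new Date(enqueuedAt),
    vt: new Date(vt),
    message,
    headers,
  };
}
```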
The API provides the following main endpoints:
- `POST /api/v1/send` - Send a single message to a queue
- `POST /api/v1/send_batch` - Send multiple messages to a queue
- `POST /api/v1/read` - Read messages from a queue
- `POST /api/v1/read_with_poll` - Read messages with polling
- `GET /api/v1/metrics` - Get queue metrics

For detailed API documentation, visit http://localhost:8080/docs after starting the service.
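Since the endpoints are plain JSON over HTTP, a thin typed client is easy to write in any language. A minimal TypeScript sketch follows; the `PgmqClient` class and its injectable fetch function are illustrative (pgmq-rest does not ship a client library), and the base URL assumes the default port from this README.

```typescript
// Minimal client sketch for the endpoints above.
type FetchLike = (url: string, init?: RequestInit) => Promise<Response>;

class PgmqClient {
  constructor(
    private baseUrl: string = "http://localhost:8080",
    private fetchFn: FetchLike = fetch,
  ) {}

  private async post<T>(path: string, body: unknown): Promise<T> {
    const res = await this.fetchFn(`${this.baseUrl}/api/v1${path}`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    if (!res.ok) throw new Error(`pgmq-rest: HTTP ${res.status}`);
    return res.json() as Promise<T>;
  }

  // Returns an array containing the new message ID.
  send(queueName: string, msg: object): Promise<number[]> {
    return this.post("/send", { queue_name: queueName, msg });
  }

  // Prefer this over repeated send() calls when enqueuing many messages.
  sendBatch(queueName: string, msgs: object[]): Promise<number[]> {
    return this.post("/send_batch", { queue_name: queueName, msgs });
  }

  // vt: visibility timeout in seconds; qty: max messages to read.
  read(queueName: string, vt: number, qty: number): Promise<unknown[][]> {
    return this.post("/read", { queue_name: queueName, vt, qty });
  }
}
```

Injecting the fetch function keeps the client testable without a running service: a stub can capture the request and return a canned response.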
| Variable | Description | Default |
| --- | --- | --- |
| `DB_HOST` | PostgreSQL host | `postgres` |
| `DB_PORT` | PostgreSQL port | `5432` |
| `DB_NAME` | Database name | `postgres` |
| `DB_USER` | Database user | `postgres` |
| `DB_PASSWORD` | Database password | `postgres` |
| `DB_POOL_SIZE` | Connection pool size | `20` |

Performance Considerations
- Tune `DB_POOL_SIZE` based on your application's concurrency needs
- Use `send_batch` for better performance when sending multiple messages

Start a local PostgreSQL instance with pgmq:
```sh
# Cleanup existing container and volumes
docker stop pgmq 2>/dev/null || true
docker rm pgmq 2>/dev/null || true
docker volume rm pgmq_data 2>/dev/null || true

# Start PGMQ
docker run -d --name pgmq \
  -p 5432:5432 \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=postgres \
  -v pgmq_data:/var/lib/postgresql/data \
  -v $(pwd)/init-pgmq.sql:/docker-entrypoint-initdb.d/init-pgmq.sql \
  tembo.docker.scarf.sh/tembo/pg17-pgmq:latest
```
Run the development server:
```sh
DB_HOST=localhost DB_PORT=5432 DB_USER=postgres DB_PASSWORD=postgres \
DB_NAME=postgres DB_POOL_SIZE=20 bun dev
```
Run the tests:
```sh
DB_HOST=localhost DB_PORT=5432 DB_USER=postgres DB_PASSWORD=postgres \
DB_NAME=postgres DB_POOL_SIZE=20 bun test
```
If you prefer to run the services manually instead of using Docker Compose:
Start the PGMQ container:
```sh
docker run -d --name pgmq \
  -p 5432:5432 \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=postgres \
  -v $(pwd)/init-pgmq.sql:/docker-entrypoint-initdb.d/init-pgmq.sql \
  tembo.docker.scarf.sh/tembo/pg17-pgmq:latest
```
Run the pgmq-rest container:
```sh
docker run -d --name pgmq-rest \
  -p 8080:8080 \
  -e DB_HOST=pgmq \
  -e DB_PORT=5432 \
  -e DB_NAME=postgres \
  -e DB_USER=postgres \
  -e DB_PASSWORD=postgres \
  --link pgmq:postgres \
  eichenroth/pgmq-rest:latest
```
Cleanup:
```sh
docker stop pgmq-rest pgmq
docker rm pgmq-rest pgmq
docker volume rm pgmq_data
```
Common issues and solutions:

- Connection Issues
- Queue Operations Fail
- Performance Issues
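For any of the issues above, a useful first step is confirming the service is reachable at all before digging into queue behavior. A small TypeScript probe against the metrics endpoint is sketched below; the `checkPgmqRest` helper is illustrative, and the base URL assumes the default port from this README. A `false` result typically points at wrong `DB_HOST`/`DB_PORT` settings, a container that is not running, or a missing pgmq extension.

```typescript
// Connectivity probe: hits the metrics endpoint with a short timeout
// so a misconfigured host fails fast instead of hanging.
async function checkPgmqRest(
  baseUrl: string = "http://localhost:8080",
  fetchFn: typeof fetch = fetch,
): Promise<boolean> {
  try {
    const res = await fetchFn(`${baseUrl}/api/v1/metrics`, {
      signal: AbortSignal.timeout(2000),
    });
    return res.ok;
  } catch {
    return false; // network error, DNS failure, or timeout
  }
}
```

Accepting the fetch function as a parameter keeps the probe testable with stubs and lets it slot into existing health-check frameworks.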
This project is licensed under the MIT License.