Getting Started
QueueCore is a managed pub/sub message broker designed for modern distributed systems. It provides exactly-once delivery semantics, durable message storage, and automatic partition balancing out of the box. Whether you're building event-driven microservices, processing real-time data pipelines, or orchestrating background jobs, QueueCore handles the infrastructure so you can focus on your application logic.
Prerequisites
- A QueueCore account (access is granted on request)
- Node.js 18+, Python 3.9+, or Go 1.21+, depending on which SDK you use
- An API key generated from your project dashboard
Install the SDK
npm install @queuecore/sdk
Initialize the Client
import { QueueCore } from '@queuecore/sdk';
const pq = new QueueCore({
apiKey: process.env.QUEUECORE_API_KEY,
region: 'us-east-1'
});
The client automatically handles connection pooling, reconnection, and request retries. You can pass additional configuration options like timeout, maxRetries, and logger for more control over client behavior.
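A sketch of passing those extra options at initialization. The option names timeout, maxRetries, and logger come from the paragraph above; the value shapes shown here (milliseconds, a retry count, a console-compatible logger) are assumptions, so check the SDK reference for the exact types.

```javascript
import { QueueCore } from '@queuecore/sdk';

const pq = new QueueCore({
  apiKey: process.env.QUEUECORE_API_KEY,
  region: 'us-east-1',
  timeout: 10_000,   // request timeout, assumed to be in milliseconds
  maxRetries: 3,     // how many times failed requests are retried
  logger: console,   // any console-compatible logger for client diagnostics
});
```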
Authentication
All API requests to QueueCore must include a valid API key. Keys are scoped to a specific project and environment, and can be generated from the project settings page in your dashboard.
Authorization Header
Include your API key in the Authorization header of every request:
Authorization: Bearer pq_live_sk_abc123...
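If you call the REST API directly rather than through an SDK, you build this header yourself. A minimal sketch (the API base URL is not documented here, so only the header construction is shown):

```javascript
// Build the Authorization header for a direct HTTP call to QueueCore.
function authHeaders(apiKey) {
  return { Authorization: `Bearer ${apiKey}` };
}

console.log(authHeaders('pq_test_sk_abc123'));
// → { Authorization: 'Bearer pq_test_sk_abc123' }
```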
Key Formats
QueueCore uses prefixed API keys to distinguish between environments:
- pq_live_sk_{32 chars} — Production keys with access to live topics and data
- pq_test_sk_{32 chars} — Test keys that operate in an isolated sandbox environment
Environment Scoping
Each API key is bound to one of three environments: development, staging, or production. Keys cannot access resources outside of their assigned environment. This ensures complete isolation between your deployment stages.
Key Rotation
QueueCore supports zero-downtime key rotation. You can have up to two active keys per environment at any time. Generate a new key, deploy it to your services, then revoke the old key from the dashboard. Both keys remain valid during the transition period.
Topics
Topics are named channels that messages are published to and consumed from. Each topic has its own retention policy, delivery guarantee, and partition configuration. Topics are automatically replicated across availability zones for durability.
Create a Topic
const topic = await pq.topics.create({
name: 'order.events',
retention: '30d',
delivery: 'exactly-once',
partitions: 6
});
console.log(topic);
// { id: 'top_8xk2m9', name: 'order.events', partitions: 6, status: 'active' }
List Topics
const topics = await pq.topics.list();
// [{ name: 'order.events', ... }, { name: 'user.signups', ... }]
Configuration Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | string | required | Unique topic identifier. Use dot notation for namespacing. |
| retention | string | "7d" | How long messages are retained. Supports d (days), h (hours), m (minutes). |
| delivery | enum | "exactly-once" | Delivery guarantee: exactly-once or at-least-once. |
| partitions | number | 3 | Number of partitions for parallel consumption. Can be increased later. |
Publishing
Publish messages to a topic to make them available to consumers. Each message is durably stored across multiple replicas before the publish call returns, ensuring no data loss.
Publish a Message
const result = await pq.publish('order.events', {
type: 'order.created',
orderId: 'ord_92jf8x',
amount: 9900,
currency: 'usd'
});
console.log(result);
// {
// offset: 14829,
// partition: 2,
// timestamp: '2026-03-28T10:42:18.003Z',
// latency: 12
// }
Batch Publishing
For high-throughput scenarios, use batch publishing to send up to 1,000 messages in a single request. Batched messages are atomically written to the topic.
const results = await pq.publishBatch('order.events', [
{ type: 'order.created', orderId: 'ord_a1b2c3', amount: 4500 },
{ type: 'order.created', orderId: 'ord_d4e5f6', amount: 12000 },
{ type: 'order.updated', orderId: 'ord_g7h8i9', status: 'shipped' }
]);
console.log(results.count); // 3
console.log(results.latency); // 18
Consuming
Consumers subscribe to topics and receive messages in order. QueueCore uses a push-based model with automatic offset tracking, so your consumer receives messages as soon as they are published.
Create a Consumer
const consumer = pq.consume('order.events', {
group: 'order-processing',
startFrom: 'latest'
});
consumer.on('message', async (msg) => {
console.log(msg.data); // { type: 'order.created', ... }
console.log(msg.offset); // 14829
console.log(msg.topic); // 'order.events'
console.log(msg.partition); // 2
await processOrder(msg.data);
await msg.ack();
});
consumer.on('error', (err) => {
console.error('Consumer error:', err.message);
});
await consumer.start();
Messages must be explicitly acknowledged with msg.ack() after processing. If a message is not acknowledged within the configured timeout (default: 30 seconds), it will be redelivered to another consumer in the group.
Consumer Groups
Consumer groups enable parallel message processing across multiple consumer instances. When multiple consumers share the same group name, QueueCore automatically distributes topic partitions among them. Each partition is assigned to exactly one consumer in the group, ensuring every message is processed once.
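QueueCore performs this assignment server-side, but conceptually the distribution resembles a round-robin spread of partitions over group members. An illustrative sketch (not the actual rebalancing algorithm):

```javascript
// Spread partition IDs across the consumers in a group, round-robin.
// Each partition lands on exactly one consumer.
function assignPartitions(partitionCount, consumers) {
  const assignment = Object.fromEntries(consumers.map((c) => [c, []]));
  for (let p = 0; p < partitionCount; p++) {
    assignment[consumers[p % consumers.length]].push(p);
  }
  return assignment;
}

console.log(assignPartitions(6, ['csr_a1', 'csr_b2', 'csr_c3']));
// → { csr_a1: [ 0, 3 ], csr_b2: [ 1, 4 ], csr_c3: [ 2, 5 ] }
```

Adding a fourth consumer to this group would trigger a rebalance; with six partitions, two consumers would then own two partitions each and two would own one.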
List Consumer Groups
const groups = await pq.consumerGroups.list('order.events');
// [
// { name: 'order-processing', consumers: 3, lag: 42 },
// { name: 'analytics-pipeline', consumers: 1, lag: 1830 }
// ]
Describe a Group
const group = await pq.consumerGroups.describe('order.events', 'order-processing');
console.log(group);
// {
// name: 'order-processing',
// topic: 'order.events',
// consumers: 3,
// partitions: [
// { id: 0, consumer: 'csr_a1', offset: 4821, lag: 12 },
// { id: 1, consumer: 'csr_b2', offset: 4819, lag: 14 },
// { id: 2, consumer: 'csr_c3', offset: 4823, lag: 16 }
// ]
// }
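Note that the per-partition lags in the describe output sum to the group-level lag reported by list (12 + 14 + 16 = 42):

```javascript
// Total group lag is the sum of the per-partition lags from describe().
const partitions = [
  { id: 0, consumer: 'csr_a1', offset: 4821, lag: 12 },
  { id: 1, consumer: 'csr_b2', offset: 4819, lag: 14 },
  { id: 2, consumer: 'csr_c3', offset: 4823, lag: 16 },
];
const totalLag = partitions.reduce((sum, p) => sum + p.lag, 0);
console.log(totalLag); // → 42
```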
Reset Group Offsets
await pq.consumerGroups.resetOffsets('order.events', 'order-processing', {
to: 'earliest'
});
All consumers in the group must be stopped before resetting offsets. The to parameter accepts "earliest", "latest", or a specific offset number.
Dead-Letter Queues
Dead-letter queues (DLQs) capture messages that fail processing after a configurable number of retries. Instead of blocking the consumer or losing the message, failed messages are routed to a dedicated DLQ topic for inspection and replay.
Configure a DLQ
const consumer = pq.consume('order.events', {
group: 'order-processing',
deadLetter: {
maxRetries: 5,
retryDelay: '30s',
alertThreshold: 100
}
});
When alertThreshold is reached, QueueCore sends a webhook notification to your configured alert endpoint. The retryDelay uses exponential backoff, with the specified value as the base delay.
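The retry schedule implied by that backoff can be sketched as follows. The doubling factor and the absence of jitter are assumptions; the docs only state that the configured retryDelay is the base delay.

```javascript
// Approximate retry schedule: base delay doubled on each attempt.
function retryDelays(baseSeconds, maxRetries) {
  return Array.from({ length: maxRetries }, (_, attempt) => baseSeconds * 2 ** attempt);
}

console.log(retryDelays(30, 5)); // → [ 30, 60, 120, 240, 480 ]
```

With retryDelay: '30s' and maxRetries: 5, a message would be retried over roughly fifteen minutes before landing in the DLQ.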
Inspect the DLQ
const deadMessages = await pq.dlq.inspect('order.events', {
group: 'order-processing',
limit: 10
});
// Returns messages with original data, error reason, retry count, and timestamp
Replay Messages
// Replay a single message by ID
await pq.dlq.replay('order.events', 'order-processing', {
messageId: 'msg_x8k2n4'
});
// Replay all messages in the DLQ
await pq.dlq.replayAll('order.events', 'order-processing');
Offsets
Every message in a topic is assigned a sequential offset number. Offsets allow you to track consumption progress, replay historical messages, or skip ahead. QueueCore automatically manages offsets for consumer groups, but you can also control them manually.
Get Current Offset
const offset = await pq.offsets.get('order.events', {
group: 'order-processing',
partition: 0
});
console.log(offset);
// { current: 4821, latest: 4833, lag: 12 }
Seek to a Specific Offset
await pq.offsets.seek('order.events', {
group: 'order-processing',
partition: 0,
offset: 4800
});
Reset to Earliest or Latest
// Reset to the beginning of the topic
await pq.offsets.reset('order.events', {
group: 'order-processing',
to: 'earliest'
});
// Skip to the end, only process new messages
await pq.offsets.reset('order.events', {
group: 'order-processing',
to: 'latest'
});
SDKs
QueueCore provides official SDKs for three languages. All SDKs offer the same feature set and are maintained in lockstep with the API.
Node.js / TypeScript
npm install @queuecore/sdk
Requires Node.js 18 or later. Ships with full TypeScript type definitions. Supports ESM and CommonJS.
Python
pip install queuecore
Requires Python 3.9 or later. Async-first with asyncio support. Synchronous wrapper available.
Go
go get github.com/queuecore/queuecore-go
Requires Go 1.21 or later. Context-aware with full cancellation support.
Feature Support
All SDKs support the complete QueueCore feature set:
- Topic management (create, list, describe, delete)
- Publish (single and batch)
- Consume (push-based with automatic offset tracking)
- Dead-letter queue inspection and replay
- Offset management (get, seek, reset)
- Health checks and connection monitoring
Rate Limits
QueueCore enforces rate limits per API key to ensure fair usage and platform stability. Limits are applied on a per-second sliding window basis. When a limit is exceeded, the API returns a 429 status code with a Retry-After header.
| Tier | Publish | Consume | Admin API |
|---|---|---|---|
| Developer | 100 req/s | 50 req/s | 10 req/s |
| Pro | 1,000 req/s | 500 req/s | 50 req/s |
| Enterprise | Custom | Custom | Custom |
All SDKs include built-in retry logic with exponential backoff for rate-limited requests. Enterprise customers can request custom limits through their account manager.
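If you call the REST API directly instead of using an SDK, a retry loop that honors Retry-After might look like the sketch below. The request function is injected so the example stays self-contained; it stands in for whatever HTTP call you make.

```javascript
// Retry a request on 429, waiting for the Retry-After duration (seconds).
// Falls back to exponential backoff when the header is absent.
async function withRateLimitRetry(request, maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    const res = await request();
    if (res.status !== 429 || attempt >= maxRetries) return res;
    const retryAfter = Number(res.headers['retry-after'] ?? 2 ** attempt);
    await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
  }
}
```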
Errors
QueueCore uses standard HTTP status codes and returns structured error responses. All error responses include a machine-readable code and a human-readable message.
Common Error Codes
| Status | Code | Description |
|---|---|---|
| 401 | INVALID_API_KEY | The API key is missing, malformed, or has been revoked. |
| 404 | TOPIC_NOT_FOUND | The specified topic does not exist in this environment. |
| 429 | RATE_LIMIT_EXCEEDED | You have exceeded your plan's rate limit. Retry after the duration specified in the Retry-After header. |
| 503 | SERVICE_UNAVAILABLE | QueueCore is temporarily unavailable. Check the status page for updates. |
Error Response Format
{
"error": {
"code": "TOPIC_NOT_FOUND",
"message": "The topic 'user.events' does not exist in the current environment.",
"status": 404,
"requestId": "req_7x9k2m4n"
}
}
Always include the requestId when contacting support. It allows the team to trace the exact request through the system for faster resolution.
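A small helper for turning that structure into a log line, so the requestId is always captured. This is a sketch based on the response format shown above; the helper name is ours, not part of the SDK.

```javascript
// Format a QueueCore error body into a single loggable line.
function describeError(body) {
  const { code, message, status, requestId } = body.error;
  return `${status} ${code}: ${message} (request ${requestId})`;
}

console.log(describeError({
  error: {
    code: 'TOPIC_NOT_FOUND',
    message: "The topic 'user.events' does not exist in the current environment.",
    status: 404,
    requestId: 'req_7x9k2m4n',
  },
}));
// → 404 TOPIC_NOT_FOUND: The topic 'user.events' does not exist in the current environment. (request req_7x9k2m4n)
```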