Comprehensive documentation of the hybrid PostgreSQL + MongoDB logging architecture for external system integrations, API request tracking, and webhook delivery monitoring.
ProKure uses a hybrid storage approach: PostgreSQL for configuration and metadata (external systems, endpoints, mappings), and MongoDB for high-volume API request/response logs. This allows transactional integrity for configuration while providing scalable, flexible storage for logs.
- **PostgreSQL:** ACID compliance for configurations, foreign key relationships to business entities, critical for system setup.
- **MongoDB:** Schema flexibility for varying request/response payloads, horizontal scaling for high-volume logs, TTL indexes for automatic expiration.
Database: prokure_logs | Collection: integration_logs
Schema file: scripts/mongodb/integration-logs-schema.js
| Section | Fields | Description |
|---|---|---|
| Identifiers | request_id, correlation_id, parent_request_id | UUIDs for request tracking and correlation |
| System Reference | system_code, endpoint_code | External system and endpoint identification |
| Request | http_method, request_url, request_headers, request_body | Full request details (sensitive data redacted) |
| Response | response_status, response_headers, response_body | Complete response capture |
| Timing | started_at, completed_at, duration_ms | Performance metrics |
| Status | status, is_success | SUCCESS, FAILED, TIMEOUT, CANCELLED |
| Error | error.type, error.message, error.stack_trace | Detailed error information |
| Retry | retry_info.attempt_number, retry_info.next_retry_at | Retry chain tracking |
| Context | context.entity_type, context.entity_id, context.operation_type | Business entity reference |
| Actor | actor.user_id, actor.triggered_by | Who initiated the request |
```javascript
// Primary lookup
{ "request_id": 1 }                                     // Unique
{ "correlation_id": 1 }

// Time-based queries
{ "started_at": -1 }
{ "system_code": 1, "started_at": -1 }

// Status queries
{ "status": 1, "started_at": -1 }
{ "is_success": 1, "started_at": -1 }

// Business context
{ "context.entity_type": 1, "context.entity_id": 1 }

// TTL index (auto-delete after 90 days)
{ "created_at": 1 }, { expireAfterSeconds: 7776000 }
```
Fetch external system and endpoint configuration from PostgreSQL. Validate authentication tokens, check rate limits.
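The rate-limit check above can be sketched as a sliding-window counter. This is an illustrative implementation only — the class name, the per-minute policy shape, and the in-memory window are assumptions, not the actual ProKure code (which would read limits from the endpoint configuration in PostgreSQL):

```javascript
// Illustrative sliding-window rate limiter for an endpoint.
// In practice the limit would come from the endpoint config in PostgreSQL.
class RateLimiter {
  constructor(requestsPerMinute) {
    this.limit = requestsPerMinute;
    this.timestamps = []; // dispatch times within the current window
  }

  // Returns true and records the request if under the limit, else false.
  tryAcquire(now = Date.now()) {
    const windowStart = now - 60_000; // one-minute sliding window
    this.timestamps = this.timestamps.filter((t) => t > windowStart);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```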
Apply JSONata transformations to convert internal data format to external system format. Map internal codes to external codes.
```jsonata
// Example: Transform PO to SAP format
{
  "request_transform": {
    "PurchaseOrder": {
      "VendorCode": $.vendor.external_code,
      "Items": $.items.{
        "Material": item_code,
        "Qty": quantity
      }
    }
  }
}
```
Execute HTTP request and immediately log to MongoDB with full request details. Start timing measurement.
Capture response, update MongoDB document with response details, calculate duration, determine success/failure.
```javascript
// MongoDB update on response
db.integration_logs.updateOne(
  { "request_id": requestId },
  { $set: {
      "response_status": 201,
      "response_body": { "po_number": "4500012345" },
      "completed_at": new Date(),
      "duration_ms": 2500,
      "status": "SUCCESS",
      "is_success": true
  } }
);
```
On failure, log error details and determine if retry is needed based on endpoint retry policy.
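The retry decision can be sketched as a pure function over the endpoint's retry policy. The policy shape (max_attempts, backoff_base_ms), the exponential-backoff strategy, and the set of retryable statuses are assumptions for illustration — the real policy lives in the endpoint configuration:

```javascript
// Sketch: decide whether a failed request should be retried.
// Policy fields (max_attempts, backoff_base_ms) are hypothetical names.
function nextRetry(attemptNumber, responseStatus, policy) {
  // Assumed convention: retry on 5xx and 429, never on other 4xx.
  const retryable = responseStatus >= 500 || responseStatus === 429;
  if (!retryable || attemptNumber >= policy.max_attempts) return null;

  // Exponential backoff: base * 2^(attempt - 1)
  const delayMs = policy.backoff_base_ms * 2 ** (attemptNumber - 1);
  return {
    attempt_number: attemptNumber + 1,
    next_retry_at: new Date(Date.now() + delayMs),
  };
}
```

The returned object maps directly onto the retry_info fields in the log schema above.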
Write summary record to PostgreSQL for dashboard queries and reports. Links to MongoDB via request_id.
MongoDB aggregation queries power real-time dashboards and analytics. Views are pre-defined for common queries.
| View Name | Purpose | Refresh |
|---|---|---|
| v_failed_requests_24h | Recent failed requests with error details | Real-time |
| v_system_health | Per-system success rate and latency (last hour) | Real-time |
```javascript
db.integration_logs.aggregate([
  { $match: { started_at: { $gte: new Date(Date.now() - 24 * 60 * 60 * 1000) } } },
  { $group: {
      _id: "$system_code",
      total_requests: { $sum: 1 },
      successful: { $sum: { $cond: ["$is_success", 1, 0] } },
      avg_duration_ms: { $avg: "$duration_ms" }
  } },
  { $project: {
      system_code: "$_id",
      total_requests: 1,
      success_rate: { $multiply: [{ $divide: ["$successful", "$total_requests"] }, 100] },
      avg_duration_ms: { $round: ["$avg_duration_ms", 2] }
  } }
]);
```
Always redact sensitive data (passwords, API keys, tokens) before logging. Use [REDACTED] placeholder in headers and body.
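A minimal redaction sketch, applied recursively to headers and body before the log write. The sensitive-key list here is illustrative only — in practice it should come from configuration rather than being hard-coded:

```javascript
// Illustrative key list; the real list should be configurable.
const SENSITIVE_KEYS = ["password", "api_key", "apikey", "token", "authorization", "secret"];

// Recursively replace values of sensitive keys with the [REDACTED] placeholder.
function redact(value) {
  if (Array.isArray(value)) return value.map(redact);
  if (value !== null && typeof value === "object") {
    const out = {};
    for (const [key, v] of Object.entries(value)) {
      out[key] = SENSITIVE_KEYS.some((s) => key.toLowerCase().includes(s))
        ? "[REDACTED]"
        : redact(v);
    }
    return out;
  }
  return value;
}
```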
Default TTL is 90 days. Adjust based on compliance requirements. For audit-critical integrations, consider longer retention or archival to cold storage.
Large response payloads (>1MB) should be truncated. Store reference to full payload in object storage if needed for debugging.
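The truncation rule can be sketched as a pre-write helper. The 1 MB threshold matches the guidance above; the truncation marker, the 1 KB preview size, and the object-storage reference field are illustrative assumptions:

```javascript
const MAX_BYTES = 1024 * 1024; // 1 MB threshold from the guidance above

// Sketch: truncate oversized payloads before logging, keeping a short preview.
function truncatePayload(body) {
  const json = JSON.stringify(body);
  const size = Buffer.byteLength(json, "utf8");
  if (size <= MAX_BYTES) return { body, truncated: false };
  return {
    body: json.slice(0, 1024) + "...[TRUNCATED]", // keep a 1 KB preview (assumed size)
    truncated: true,
    original_size_bytes: size,
    // payload_ref: hypothetical field holding the object-storage key
    // of the archived full payload, if one is written for debugging.
  };
}
```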
Always propagate correlation_id across service boundaries. This enables end-to-end request tracing across microservices.