This documentation covers single-node deployment scenarios for Data Integration. When the Precisely Agent is installed on a virtual machine, it uses default resource allocations that are often not suited to production-scale workloads. This guide provides a baseline approach to estimating and configuring resource allocations so that you can achieve optimal performance; adapt the calculations to your environment, data volume, and pipeline complexity. Before you begin, ensure that a Precisely Agent is installed. For more information, refer to Install and download agent. The default VM allocation is:
| Resource | Default |
|---|---|
| vCPUs | 8 |
| RAM | 16 GB |
| Storage | 300 GB |
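As a quick sanity check before installing, you can compare the host against these defaults. The following Python sketch is illustrative only and is not part of the product; it assumes a Linux host (it reads /proc/meminfo) and that the root mount point holds the agent's storage.

```python
import os
import shutil

# Default single-node VM allocation from the table above.
REQUIRED_VCPUS = 8
REQUIRED_RAM_GB = 16
REQUIRED_STORAGE_GB = 300


def check_vm_resources(mount_point: str = "/") -> None:
    """Compare the host's resources against the documented defaults (Linux only)."""
    vcpus = os.cpu_count() or 0

    # Total memory, read from /proc/meminfo (MemTotal is reported in kB).
    with open("/proc/meminfo") as f:
        mem_kb = next(int(line.split()[1]) for line in f if line.startswith("MemTotal"))
    ram_gb = mem_kb / (1024 ** 2)

    storage_gb = shutil.disk_usage(mount_point).total / (1024 ** 3)

    print(f"vCPUs:   {vcpus} (default: {REQUIRED_VCPUS})")
    print(f"RAM:     {ram_gb:.1f} GB (default: {REQUIRED_RAM_GB} GB)")
    print(f"Storage: {storage_gb:.1f} GB (default: {REQUIRED_STORAGE_GB} GB)")


if __name__ == "__main__":
    check_vm_resources()
```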
The estimates that follow are based on the benchmarking assumptions below. Use them as a starting point and adjust them to match your workload characteristics; a rough data-volume sketch based on these figures follows the list.
- Benchmarking Conditions: Dataset with 100 tables per schema/pipeline, each table with 12 columns.
- Pipeline Complexity: Simple. Complexity refers to the level of processing, data volume, and operations involved in a pipeline, all of which affect its performance and resource requirements.
- Record Size: Up to 1024 bytes per record.
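To translate these assumptions into an approximate data volume, the minimal sketch below multiplies tables, rows, and record size. The row count per table is not specified by the benchmark, so `rows_per_table` is a hypothetical input you would replace with your own figure.

```python
# Rough data-volume estimate based on the benchmarking assumptions above.
TABLES_PER_PIPELINE = 100   # tables per schema/pipeline
MAX_RECORD_BYTES = 1024     # up to 1024 bytes per record


def estimate_pipeline_volume_gb(rows_per_table: int) -> float:
    """Upper-bound estimate of raw data volume (GB) for one pipeline."""
    total_bytes = TABLES_PER_PIPELINE * rows_per_table * MAX_RECORD_BYTES
    return total_bytes / (1024 ** 3)


# Example: 1 million rows per table -> roughly 95 GB of raw data per pipeline.
print(f"{estimate_pipeline_volume_gb(1_000_000):.1f} GB")
```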
## Default Resource Allocation for DI Pods
Four pods are part of DI workflows. The table below outlines their default resource allocations; the sketch after the table totals these values against the default VM configuration.
| Pod Name | vCPU (millicores) | JVM Memory (MiB) | Pod Memory (MiB) | Disk (GB) |
|---|---|---|---|---|
| connect-cdc | 1200 | 512 | 3072 | 8 |
| connect-hub | 1000 | 512 | 1024 | 8 |
| cloud-applier | 1000 | 1024 | 2048 | - |
| sqdata-management | 200 | N/A | 500 | - |
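To see how much of the default 8 vCPU / 16 GB VM a single set of DI pods consumes, the following sketch totals the pod-level defaults from the table. The values are copied from the table above; the dictionary layout and the summary calculation are illustrative assumptions, not a product API.

```python
# Default per-pod allocations from the table above.
# Values: (vCPU millicores, pod memory MiB, disk GB); 0 GB where the table shows "-".
DEFAULT_POD_ALLOCATIONS = {
    "connect-cdc":       (1200, 3072, 8),
    "connect-hub":       (1000, 1024, 8),
    "cloud-applier":     (1000, 2048, 0),
    "sqdata-management": (200, 500, 0),
}

VM_VCPUS = 8
VM_RAM_GB = 16


def summarize(pods: dict[str, tuple[int, int, int]]) -> None:
    """Total the pod allocations and show the share of the default VM they consume."""
    total_millicores = sum(cpu for cpu, _, _ in pods.values())
    total_mem_mib = sum(mem for _, mem, _ in pods.values())
    total_disk_gb = sum(disk for _, _, disk in pods.values())

    print(f"Total vCPU:   {total_millicores / 1000:.1f} cores "
          f"({total_millicores / (VM_VCPUS * 1000):.0%} of {VM_VCPUS} vCPUs)")
    print(f"Total memory: {total_mem_mib / 1024:.1f} GiB "
          f"({total_mem_mib / (VM_RAM_GB * 1024):.0%} of {VM_RAM_GB} GB RAM)")
    print(f"Total disk:   {total_disk_gb} GB")


summarize(DEFAULT_POD_ALLOCATIONS)
```

With the defaults shown, one set of pods requests about 3.4 vCPUs and roughly 6.5 GiB of memory, leaving headroom on the default VM for the operating system and additional pipelines; heavier or multiple pipelines may require scaling these allocations up.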