About mainframe replication components - Precisely Data Integrity Suite

The mainframe replication pipeline uses the following components:
  • Controller daemon is the first point of contact for any agent requesting communication with another agent, in both single- and multi-platform environments. Controller daemons are accessed through a TCP/IP interface on an assigned port on the platform where they run.
  • Data capture agent (IMS, VSAM, or Db2) works with a publisher to configure transient storage and forward transport of the captured data.
  • Engine processes data from a data capture agent running on any supported platform. Communication between the capture/publisher and the engine is managed so that only complete units of work are committed to targets, preserving data integrity between source and target.
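The unit-of-work guarantee above can be illustrated with a minimal sketch (this is not Precisely's implementation; all names are hypothetical): captured changes are buffered per unit of work and reach the target only on commit, so the target never sees a partial transaction.

```python
class UnitOfWorkApplier:
    """Buffers captured changes per unit of work; applies them to the
    target only when the source signals commit, discards them on rollback."""

    def __init__(self, target):
        self.target = target      # e.g. a list standing in for a target table
        self._pending = {}        # uow_id -> buffered changes

    def capture(self, uow_id, change):
        # Changes are staged, not applied, while the unit of work is open.
        self._pending.setdefault(uow_id, []).append(change)

    def commit(self, uow_id):
        # Only now do the buffered changes reach the target, as one unit.
        self.target.extend(self._pending.pop(uow_id, []))

    def rollback(self, uow_id):
        # An incomplete unit of work never touches the target.
        self._pending.pop(uow_id, None)


target = []
applier = UnitOfWorkApplier(target)
applier.capture("uow-1", {"op": "INSERT", "key": 1})
applier.capture("uow-2", {"op": "INSERT", "key": 2})
applier.rollback("uow-2")   # uow-2 is discarded, never applied
applier.commit("uow-1")     # only the complete unit of work is applied
```

After the commit, the target holds only the changes from the committed unit of work.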

The Apply Engine is the component that runs within the DI Suite agent; the apply engine script used in the SaaS UI configures mainframe replication. Whenever a configuration is committed, the Apply Engine configuration is passed to the DI Suite agent, saved to the DIS Agent container, and run by the sqdata-management service in the agent. The sqdata-management service is required to run a mainframe replication pipeline. To view sqdata-management in the Agent Details, select Configuration > Agents and click a specific agent.
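The commit-and-run flow above can be sketched as follows. This is a hedged illustration only, not the DIS Agent's actual code: the file name, directory layout, and function names are assumptions, and the stand-in "service" simply loads the saved configuration rather than starting a pipeline.

```python
import json
import pathlib


def save_committed_config(agent_dir: pathlib.Path, config: dict) -> pathlib.Path:
    """Persist a committed pipeline configuration to the agent's storage
    (hypothetical file name; stands in for saving to the agent container)."""
    path = agent_dir / "apply_engine_config.json"
    path.write_text(json.dumps(config))
    return path


def management_service_run(path: pathlib.Path) -> dict:
    """Stand-in for the management service: load the saved configuration
    and return it here instead of launching a replication pipeline."""
    return json.loads(path.read_text())
```

The point of the sketch is the ordering: the configuration is persisted on commit first, and a separate long-running service then picks it up and executes it.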

Finally, the capture/publisher runs on the mainframe side.