VSAM or IMS or Db2 for z/OS to Kafka

Data Integrity Suite


This connection captures real-time or near-real-time changes from mainframe data storage systems, such as VSAM (Virtual Storage Access Method), IMS (Information Management System), or Db2 for z/OS, and streams them into Kafka topics. The data is extracted from the source systems and transformed into a Kafka-compatible format, enabling seamless integration with modern data streaming platforms. The primary objective of this process is to continuously replicate changes from legacy mainframe systems into modern environments, supporting real-time data processing, analytics, and integration with other systems.
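As an illustration of the kind of record such a pipeline streams, a change event on a Kafka topic typically carries the operation type, the record key, and before/after images of the data. The sketch below is a hypothetical example only; all field names and the payload layout are assumptions, and the actual schema depends on your pipeline configuration.

```python
import json

# Hypothetical change event as it might appear on a Kafka topic;
# the real payload schema depends on the pipeline configuration.
raw_event = json.dumps({
    "operation": "UPDATE",          # e.g. INSERT / UPDATE / DELETE
    "source": {"system": "VSAM", "dataset": "PROD.CUSTOMER.KSDS"},
    "key": {"CUSTOMER_ID": "000123"},
    "before": {"BALANCE": "100.00"},
    "after": {"BALANCE": "250.00"},
})

def summarize_change(message: str) -> str:
    """Return a one-line summary of a change event."""
    event = json.loads(message)
    return (f"{event['operation']} on {event['source']['dataset']} "
            f"key={event['key']}")

print(summarize_change(raw_event))
```

A consumer that understands this structure can route inserts, updates, and deletes to the appropriate downstream handling.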

To create a mainframe replication pipeline when the source is VSAM, IMS, or Db2 for z/OS and the target connection is Kafka, follow the steps below.

  1. On the main navigation menu, select Integration > Mainframe Replication.
  2. Click + Create Pipeline to open the Create Mainframe Pipeline window.
  3. Choose VSAM, IMS, or Db2 for z/OS to Kafka, and specify the name and description for the mainframe replication pipeline.
  4. Select an existing runtime engine from the dropdown menu, or click + Add Engine to open the Add Runtime Engine window and add a new engine.
  5. By default, Include sample files and Automatically start pipeline when runtime engine starts are toggled ON.
  6. Click Create to set up the mainframe pipeline. The newly created pipeline will then appear in the table.
Note: For Db2 for z/OS to Kafka pipelines, you have the option to select the type of replication:
  1. Advanced customization: Select this option if you require field transformations, record format changes, filtering, or normalization/denormalization of records during replication.
  2. High volume replication: Choose this option for replicating large volumes of data with frequent changes. It is suitable for environments with a high throughput of transactional data.

Once you have followed these steps, your mainframe pipeline will be created, displayed in the table, and ready for real-time data streaming from mainframe systems to Kafka.
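Downstream consumers of the Kafka topic typically dispatch on the operation type to keep a target store in sync with the mainframe source. The following is a minimal, library-free sketch under assumed conditions: the event field names (`operation`, `key`, `after`) are hypothetical, and a real consumer would read messages from Kafka rather than from local strings.

```python
import json

def apply_change(state: dict, message: str) -> dict:
    """Apply one change event (hypothetical schema) to an in-memory
    table keyed by the record's primary-key columns."""
    event = json.loads(message)
    key = tuple(sorted(event["key"].items()))
    if event["operation"] == "DELETE":
        state.pop(key, None)
    else:
        # INSERT or UPDATE: keep the after-image of the record
        state[key] = event["after"]
    return state

table = {}
apply_change(table, json.dumps(
    {"operation": "INSERT", "key": {"ID": "1"}, "after": {"NAME": "A"}}))
apply_change(table, json.dumps(
    {"operation": "UPDATE", "key": {"ID": "1"}, "after": {"NAME": "B"}}))
print(table)  # one row, holding the latest after-image
```

In practice this logic would sit inside a Kafka consumer loop, and ordering guarantees would come from partitioning the topic by record key.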