This connection captures real-time or near-real-time changes from mainframe data stores such as VSAM (Virtual Storage Access Method), IMS (Information Management System), and Db2 for z/OS, and streams them into Kafka topics. Changes are extracted from the source system and transformed into a Kafka-compatible format, enabling seamless integration with modern data streaming platforms. The primary objective of this process is to continuously replicate changes from legacy mainframe systems into modern environments, supporting real-time data processing, analytics, and integration with other systems.
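To make the transformation step concrete, here is a minimal sketch of decoding a fixed-layout VSAM record (EBCDIC, code page `cp037`) into a JSON change event that could be published to a Kafka topic. The field layout, field names, and event envelope are illustrative assumptions, not the product's actual wire format.

```python
import json

def vsam_record_to_event(raw: bytes) -> str:
    """Decode an assumed 30-byte record layout: 6-byte key,
    20-byte name, 4-byte zoned amount (all hypothetical)."""
    text = raw.decode("cp037")  # EBCDIC code page common on z/OS
    event = {
        "op": "UPSERT",  # change type captured by the pipeline
        "key": text[0:6].strip(),
        "after": {
            "name": text[6:26].strip(),
            "amount": int(text[26:30]),
        },
    }
    return json.dumps(event)

# Example: an EBCDIC-encoded record as it might arrive from the mainframe
raw = ("001234" + "JOHN DOE".ljust(20) + "0150").encode("cp037")
print(vsam_record_to_event(raw))
# → {"op": "UPSERT", "key": "001234", "after": {"name": "JOHN DOE", "amount": 150}}
```

A real pipeline performs this extraction and encoding on the replication engine; the sketch only shows the shape of the EBCDIC-to-JSON conversion involved.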
To create a mainframe replication pipeline with a VSAM, IMS, or Db2 for z/OS source and a Kafka target connection, follow the steps below.
- On the main navigation menu, select Integration > Mainframe Replication.
- Click + Create Pipeline to open the Create Mainframe Pipeline window.
- Choose the source type (VSAM, IMS, or Db2 for z/OS) to Kafka, and specify a name and description for the mainframe replication pipeline.
- Select an existing runtime engine from the dropdown menu, or click + Add Engine to open the Add Runtime Engine window and add a new engine.
- By default, the Include sample files and Automatically start pipeline when runtime engine starts options are toggled ON.
- Select a replication mode:
  - Advanced customization: Select this option if you require field transformations, record format changes, filtering, or normalization/denormalization of records during replication.
  - High volume replication: Select this option to replicate large volumes of data with frequent changes. It is suited to environments with high transactional throughput.
- Click Create to set up the mainframe pipeline. The newly created pipeline appears in the table.
Once you have completed these steps, your mainframe pipeline is created, displayed in the table, and ready to stream real-time data from mainframe systems to Kafka.
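Downstream consumers then read these change events from the Kafka topic and apply them to a target system. The sketch below shows the replay logic in isolation: applying a stream of UPSERT/DELETE events to a key-value view. The event shape (`op`/`key`/`after`) is an illustrative assumption carried over from the hypothetical envelope above; a real consumer would receive these records from Kafka rather than a Python list.

```python
def apply_changes(events, state=None):
    """Replay UPSERT/DELETE change events into a key-value view
    (a stand-in for the downstream target being kept in sync)."""
    state = {} if state is None else state
    for ev in events:
        if ev["op"] == "DELETE":
            state.pop(ev["key"], None)   # remove the record if present
        else:
            state[ev["key"]] = ev["after"]  # insert or update alike
    return state

# A short, hypothetical change stream for one record
changes = [
    {"op": "UPSERT", "key": "001234", "after": {"amount": 150}},
    {"op": "UPSERT", "key": "001234", "after": {"amount": 175}},
    {"op": "DELETE", "key": "001234", "after": None},
]
print(apply_changes(changes))
# → {}  (the record was created, updated, then deleted)
```

Because events are applied in order, the final state always reflects the latest change captured on the mainframe side.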