Relevant log files for replication
Data Integrity Suite
Product Guide (en-US)
© 2000–2025 Precisely
Get started
Navigate the dashboard
Manage workspaces
Explore key services
Data Catalog
Data Governance
Data Integration
Data Quality
Data Observability
Data Enrichment
Spatial Analytics
Geo Addressing
Resources
FAQs about Data Integrity Suite
Account
Account overview
Profile
View and update profile
Change default language
Change appearance
Sign in and out
Subscription
About entitlements and subscriptions
About subscription datasets
Enrichment datasets overview
Geo Addressing datasets overview
View subscriptions
About data shares
Usage
View usage details
API keys
Manage API keys
Downloads
Configuration
Configuration overview
Data
Datasources
About datasource
About connections
Type of connectivity
Base connectivity
Advanced connectivity bundle
Datasources summary
Supported datasources
Add datasource and connection
Configuring firewall allowlist for data access
Datasources configuration
Relational Database Management Systems (RDBMS)
Amazon Redshift
Connection configuration
Authentication methods
Asset types
Azure SQL server
Connection configuration
Authentication methods
Asset types and mapping
Key vault configuration
Permissions
Azure Synapse Analytics
Connection configuration
Authentication methods
Key vault configuration
Asset types and mapping
Databricks
Connection configuration
Authentication methods
Key vault configuration
Asset types
Permissions
DB2 LUW
Connection configuration
Authentication methods
Key vault configuration
Asset types and mapping
DB2 z/OS
Connection configuration
Authentication methods
Key vault configuration
Asset types and mapping
Permissions
Google BigQuery
Connection configuration
Asset types
Microsoft SQL server
Connection configuration
Authentication methods
Key vault configuration
Asset types and mapping
Permissions
MySQL
Connection configuration
Authentication methods
Key vault configuration
Asset types and mapping
Permissions
Oracle
Connection configuration
Authentication methods
Key vault configuration
Asset types and mapping
Permissions
PostgreSQL
Connection configuration
Authentication methods
Key vault configuration
Asset types and mapping
Permissions
Snowflake
Connection configuration
Authentication methods
Key vault configuration
Asset types and mapping
Permissions
File Systems (Object Stores) and File (Types)
Azure Data Lake Storage (ADLS) Gen2
Connection configuration
Asset types
Extract, Transform, and Load (ETL)
Azure Data Factory
Connection configuration
Asset types
MS SSIS
Connection configuration
Asset types
dbt Cloud
Connection configuration
Authentication methods
Key vault configuration
Asset types
Permissions
Business Intelligence (BI)
Azure Power BI
Connection configuration
Authentication methods
Key vault configuration
Asset types
Google Looker
Connection configuration
Authentication methods
Key vault configuration
Asset types
MS Power BI report server
Connection configuration
Authentication methods
Key vault configuration
Asset types
Microsoft SSRS
Connection configuration
Asset types
Tableau
Connection configuration
Authentication methods
Key vault configuration
Asset types
Business Applications
MS Dynamics 365
Connection configuration
Asset types
JDBC
Prerequisites
Connection configuration
Key vault configuration
View, edit and remove datasource
Manage connections
Find and test connection
Edit, duplicate and remove connection
View connection details & data assets
View insights and schedule
Troubleshoot datasource errors
Agents
About agent
Domain names and their usage in agent services
Install and download agent
Manage agent
Update agent services
Migrate agent
Available CLI commands
Troubleshoot agent errors
Agent best practices
Pipeline engines
About pipeline engines
Supported pipeline engines
Databricks
Google Dataproc
Precisely Agent
Snowflake
Configure pipeline engines
Set up pipeline engine on Databricks
Replication connections
About replication connections
Types of replication connections
BigQuery
Snowflake
Oracle
SQL Server
IBM Db2 for LUW
IBM Db2 for IBMi
IBM Db2 for z/OS
SAP
Kafka
PostgreSQL
Create and manage replication connection
Replication engines
About replication engines
Add continuous/mainframe runtime engine
Add continuous runtime engine
Edit continuous runtime engine
Remove continuous runtime engine
View continuous runtime engine details
Configure mainframe runtime engine
Install runtime engine for Db2 for Linux, Unix or Windows
Install additional components
Install components for IBM i
Prepare IBM i environment
Prepare DB2/400 environment
Install log reader
Install components for IBM z
Install and configure IBMz components
Progress package installation on Db2ZOS database server
Progress package installation on Db2LUW database server
Set up zFS and network configuration
Prepare Db2 for capture
Set up and configure Data Integration
Finalize data capture and security configuration
Data samples
About data sample storage
Set sample data location
Show failed records sample
Key Vault
Add Key Vault connection
Install vault agent secret
View, edit, remove key vault
Catalog
Tag types
Create tag types
Governance
Business assets
Configure business asset types
Change history for business asset types
Technical assets
Explore hierarchy views for technical asset types
Use the containment hierarchy view
Add a child technical asset type
Use the inheritance hierarchy view
Edit a technical asset type
Add fields to connector-harvested technical asset types
Modify fields for connector-harvested technical asset types
Change history for technical asset types
Models
Create a model asset type
Policies
Create a policy asset type
Fields
Supported field types
Asset path
Boolean
Counter
Date
Decimal
Integer
JSON
Link
List
Reference list
Relationship
Relation lookup
Tag
Text
Define fields on asset types
Relationships
Configure relationship types
Relationship types
Define fields on relationship types
Predicates
Supported predicate functional types
Grammar
Hierarchy
Lineage
See also
Semantic
Simple
Configure custom predicate
Reference lists
Configure reference list
Add child reference list
Define fields on reference list
Import
Understand import spreadsheet template
Import business and technical asset types
Import asset relationships
Import models, policies and reference lists
Update existing assets using import
Import users
Security
Security overview
Overview of roles and permissions in the security tab
Users
Invite and manage users
User groups
View default user groups
Configure custom user groups
Roles
View default roles
Configure custom roles
Policies
View default security policies
Configure custom security policy
SSO
Configure Just-in-Time SSO login functionality
Workspace settings
Configure session timeout
Security best practices
AI
AI Manager
Features
LLM connections
Supported AI functionalities
Catalog
Catalog overview
Metadata management
Catalog spatial data
Search cataloged assets
Sort cataloged assets
View asset quality scores
Filter lineage diagrams
Search lineage diagrams
Trace lineage diagrams
Relationships and lineage for replication projects
Catalog best practices
Datasources
View and filter datasources
Explore datasource lineage
Add relationship to datasource lineage
Datasets
About sample data
Upload or generate sample data
View and filter datasets
View profiling details at the dataset level
Generate description for datasets
Explore dataset lineage
View table lineage for replication pipeline
Add relationship to dataset lineage
Configure quality rules from the datasets page
Run quality rules for specific datasets
Configure governance rules and scores for datasets
Fields
View and filter fields
View profiling details at the field level
Generate description for data fields
Explore field lineage
View column-level lineage for replication pipeline
Add relationship to field lineage
Configure quality rules from the fields page
Run quality rules for specific fields
Configure governance rules and scores for fields
Technical assets
Create technical assets
View technical assets details
Establish relationship between technical assets
View profiling for technical assets
Profiling categories
Time-series chart
Selective Profiling
View change history for technical assets
View relationships in a replication pipeline
Tags
Create a tag
Default tags for connectors
Jobs
View catalog jobs
Workflow
Workflow trigger types
Workflow conditions
Workflow transitions
Workflow activities
Assignments
Respond to an assignment
Reassign workflow tasks
Requests
Make a request
Workflows
Configure workflows
Configure a workflow based on a score
Email activity
Form activity
Field change activity
Relationship change activity
Make API Call activity
Delete activity
Schedules
Configure a scheduled workflow
Request types
Configure request types
Configure a request type workflow
Build a certification workflow
Settings
Edit workflow settings
Send workflow assignment emails
Configure assignment ID schemes
Governance
Governance overview
Business assets
Create business assets
Establish relationship between business assets
View change history for business assets
Generate description for business terms
Models
Create models
Add child to a model
Policies
Create policies
Rules
Create a governance rule
Manage existing governance rules
Configure governance rule scores
Integration
Integration overview
Data Integration source-to-target compatibility
Agent resource allocation
Use case-to-pod compatibility
Estimating resource allocation based on workload
Estimate the workload
Per pipeline/schema resource requirements
Calculate total resources
Provision the VM
Update stateful sets to modify pod resources
Modifying pod configuration
Update agent
Update operator
Update Kafka
About replication pipelines
About continuous replication components
About mainframe replication components
Continuous replication
About continuous replication
Manage replication project
Add, edit and view replication project
Start and stop replication project
View replication project alerts
Validate replication project configuration
Apply configuration changes
Manage diagnostic bundles
Debug and export replication project
Create continuous replication pipeline
General Setup
Connections
Add a source connection
Add a target connection
Source data configuration
Configure mapping
RRN mapping
Journal mapping
Kafka
Create target table
Manage mapping actions
Change dataset mapping
Change field mapping for a subset of selected tables
Set column mapping
Change table mapping for selected rows
Add filter
Soft delete for table targets
View continuous replication summary
Manage continuous replication pipeline
View replication pipeline details
Edit and delete replication pipeline
Start and stop replication pipelines
Configure metabase
Create and manage metabase
Supported metabase
Oracle metabase
Db2 for IBMi metabase
Db2 for Linux, Unix, or Windows metabase
SQL Server metabase
Mainframe replication
About mainframe replication
Install and configure mainframe replication components
Create mainframe replication
VSAM or IMS or Db2 for z/OS to Kafka
Kafka, VSAM, IMS to Db2
Manage mainframe replication
Edit and delete mainframe replication pipeline
Start and stop mainframe replication pipeline
Apply configuration changes to mainframe replication pipeline
View mainframe replication pipeline log
Manage mainframe editor
Edit and rename replication pipeline
Duplicate, move and delete replication pipeline
Set as main script and manage substitution variables
Save and validate mainframe pipeline
Apply configuration changes
Add a configuration script
Troubleshoot integration errors
Agent configuration paths for replication
Metabase errors
Relevant log files for replication
Pipeline cannot stop when Kafka fails
Kafka target ACK/timeout error
Quality
Quality overview
Quality features supported by each datasource
Update Spark connector
Configure Databricks cluster
Required permissions in Databricks for performing essential operations
Best practices for configuring Databricks pools
Snowflake UDF for LLM Execution
Rules
Data quality rules
Understand default rules
Configure custom rules
Auto generate rule description using AI
Target assets
Fields
Fields by condition
Datasets
Apply pass conditions to custom rules
Auto generate rule expression for pass conditions using AI
AI generated test data to validate rule expressions
Configure row filters and define conditions
Mapping in custom rules
Configure rule schedule
View rule scores
View rule definition
View rule run status
View failed records
Additional information
Required Snowflake roles and permissions for data quality rules and profiling
Snowflake warehouse sizing recommendations
Profiling guidelines for agent based connections
Pipelines
About pipelines
Configure quality pipeline
About transformation steps
Configure transformation steps
Supported transformation steps
Structure
Split
Union
Join
Output
General
Copy field
Filter field
Filter row
Evaluate rule
Execute formula
Generate key
LLM transform
Custom coding
Rename field
Split field
String
Cleanse data
Replace values
Convert case
Trim string
Get substring
Pad string
Replace between
Replace by position
Cleanup whitespace
Addressing
Geocode address
Identify country
Verify address
Parsing
Parse email
Parse name
Parse phone number
Standardization
Standardize date
Standardize field
Enrich
Enrich
Table lookup
Matching and consolidation
Match and group
Matching options
Match Key options
Entity based approach
Match Key options for Entity Based approach
Automated approach
Custom approach
Match Key options for Custom approach
Rule based approach
Match algorithms
Consolidate matches
Spatial
Search at point
Create run configuration
Manage Databricks quota limitations
Run quality pipeline
Validate quality pipeline
Check validation errors
Access profiling and suggestions
Schedules
About schedules
Create and manage schedules
Jobs
About quality jobs
Manage quality jobs
Troubleshoot quality errors
Auto cancellation of Snowflake queries
Error while opening existing pipeline
Pipeline using Snowflake engine fails
Out of memory error in spark jobs on Precisely agent
Quality best practices
Observability
Observability overview
Observability features supported by respective datasources
Observability support for datasources
Agent based support to generate observation alerts
Configuration to generate freshness alerts for agent based assets
Configuration for Microsoft SQL connection
Configuration for Oracle connection
Alerts
About alerts
View and manage observation alerts
View alerts associated with an Observer
Finetune alerts
Interpret alerts
Generation of data drift alerts outside the defined threshold limit
Generation of data drift alerts for unique value count
Generation of data drift alerts based on distribution of value count
Generation of freshness alert
Observers
About observers
Determine observer creation
Create and manage observers
Setup observer rules
Freshness rule
Volume rule
Data drift rule
Data drift statistics
Schema drift rule
Scheduling and limitations
Filter datasets
Profilers
About profilers
About profiling
Create and manage data profile
View data profiling results
View data profiling trends
View data profile run history
Configure profiler scheduler
View scheduler status and run history
Profilers best practices
Troubleshoot observability errors
Profile or observer run failures
Missing individual profile statistics
Spatial analytics
Spatial analytics overview
Launch location reports
Partner integrations
Integrate with Databricks
Customer user journey
Data Integrity Suite how-to videos
Welcome to Data Integrity Suite
Navigate the user interface and explore key features
Manage workspaces
Account and subscriptions
How to invite users and manage user groups
Manage API key and secret
Data Integrity Suite APIs - access and authentication
Create and manage datasources
Configure JDBC datasource via agent
Connect PostgreSQL database with data access connector
Create a Databricks connection
Create a replication connection
Overview of Catalog
Overview of rules and scoring
Introduction to Data Observability
Overview of Data Governance
Perform prerequisites to set up your replication environment
Create a continuous replication connection - sources and targets
Create a continuous replication pipeline
Create and configure a mainframe replication pipeline
Generate sample and create Data Quality pipeline
Transform and run a Data Quality pipeline
View Data Quality and Catalog job details
Create an Observer to monitor data
Understand Observer rules
View Alerts to observe changes in data
Create a Data Profile to analyze data
View Data Profiling results
Notices
Retrieve diagnostic bundles, then gather these logs and artifacts at the project/model level.

Listener logs (kernel/<HostName>*.log): One set per listener/host. Rolls over at 10 MB by default, keeping up to 10 files. Contains listener status and a summary of alerts across all models running on the listener. Needed to resolve host-wide and cross-project issues, resolve design-time issues, understand the general environment, and troubleshoot kernel startup.

Kernel logs (kernel/<ModelName>*.log): One set per model/project per listener/host. Rolls over at 10 MB by default, keeping up to 10 files. Detailed logs for the kernel (one model running on one host). Needed for full resolution of replication problems: runtime database connection issues, deployment and model update issues, data correctness issues, and understanding data volumes and performance.

Change Selector/Log Reader logs: One set per Change Selector/Log Reader. Detailed logs for the capture mechanism. Needed for full resolution of transaction log capture issues and other system issues on the database source host, for example IBM i journaling or permissions for accessing the transaction logs. For non-Db2 sources, these entries are part of the kernel log or are produced outside the product (for example, by the SQL Server agent).

Contact Support if necessary.
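As a minimal shell sketch of gathering these logs for a Support case: the kernel/ directory location, host name, and model name below are illustrative placeholders, not the product's actual install paths — substitute your listener's real installation directory.

```shell
# Illustrative: bundle listener and kernel logs for Support.
# KERNEL_DIR is a placeholder; point it at the real kernel/ log folder.
KERNEL_DIR="${KERNEL_DIR:-$(mktemp -d)/kernel}"

# Stand-in layout so this sketch runs anywhere; a real install already
# contains <HostName>*.log and <ModelName>*.log files here.
mkdir -p "$KERNEL_DIR"
touch "$KERNEL_DIR/myhost.log" "$KERNEL_DIR/mymodel.log"

# Listener and kernel logs roll over at 10 MB by default, keeping up to
# 10 files each, so archive the whole folder to capture every rollover copy.
BUNDLE="replication-logs-$(date +%Y%m%d).tar.gz"
tar -czf "$BUNDLE" -C "$(dirname "$KERNEL_DIR")" kernel

# List what was captured before attaching the bundle to the Support case.
tar -tzf "$BUNDLE"
```

Archiving the directory rather than individual files ensures the rollover copies (up to 10 per log set) travel with the current log.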