Connection configuration

Data Integrity Suite


This section describes the fields required to configure an Amazon Redshift connection in the Data Integrity Suite.

Table 1. Field descriptions

Name: Specifies the name of the data connection. This is a mandatory field. Example: RedshiftDataConnection
Description: Describes the purpose of this data connection. Example: Connection to the Redshift cluster for data analytics.
Agent: Specifies the agent, selected from the dropdown list, that manages the Redshift connection. Example: Agent1
Use Connection As: Specifies the role of the connection:
  • Source: Choose this option if the connection is used to extract data from the data source.
  • Target: Choose this option if the connection is used to transfer data to the data source.
  • Source and Target: Choose this option if the connection is used both to extract data from and to transfer data to the data source.
Host: Specifies the hostname or IP address of the Redshift cluster you want to connect to. This is a mandatory field. Example: redshift-cluster-1.abc123xyz.us-west-2.redshift.amazonaws.com
Port: Specifies the port number on which the Redshift cluster is listening. The default port is 5439. This is a mandatory field. Example: 5439
Database: Specifies the name of the Redshift database you want to connect to. This is a mandatory field. Example: analytics_db
Schemas: Specifies the schema details used for connecting to the Redshift database. Example: public
JDBC URL Params: Specifies the JDBC connection string and any additional parameters needed to connect to Redshift using a JDBC driver. Example: jdbc:redshift://redshift-cluster-1.abc123xyz.us-west-2.redshift.amazonaws.com:5439/analytics_db
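As an illustrative sketch (not part of the product itself), the Host, Port, and Database fields above combine into a Redshift JDBC URL of the form jdbc:redshift://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;; the helper function below is a hypothetical name used only for this example:

```python
def build_redshift_jdbc_url(host: str, database: str, port: int = 5439) -> str:
    """Assemble a Redshift JDBC URL from the Host, Port, and Database fields.

    Redshift JDBC URLs follow the form:
    jdbc:redshift://<host>:<port>/<database>
    """
    return f"jdbc:redshift://{host}:{port}/{database}"

# Using the example values from Table 1:
url = build_redshift_jdbc_url(
    host="redshift-cluster-1.abc123xyz.us-west-2.redshift.amazonaws.com",
    database="analytics_db",
)
print(url)
# jdbc:redshift://redshift-cluster-1.abc123xyz.us-west-2.redshift.amazonaws.com:5439/analytics_db
```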

If using the Redshift connection as the Target, provide the following additional fields:

Uploading Method: Specifies whether to use Standard upload or S3 Staging, which uses Amazon S3 as an intermediary storage area before the final data transfer. If using S3 Staging, configure these additional fields:
  • S3 Key ID: Specifies a unique identifier for accessing S3 storage services, typically used with an S3 Access Key to authenticate requests. This field is mandatory. Example: AKIAIOSFODNN7EXAMPLE
  • Encryption: Specifies the type of encryption used to secure data. Choose 'No encryption', where data is stored in its original form, or 'AES-CBC envelope encryption', which wraps the data with an encryption key using the AES-CBC method. Example: AES-CBC envelope encryption
  • Purge Staging Files and Tables: Specifies whether temporary files and tables in the S3 bucket are automatically deleted after the data has been successfully transferred to its final destination. This setting can be toggled on or off. Example: On
  • S3 Bucket Name: Specifies the name of the S3 bucket used for staging data. This identifies the storage location within the cloud storage service and is a mandatory field. Example: my-s3-bucket
  • S3 Bucket Path: Specifies the directory path within the S3 bucket where data is temporarily stored before final processing or transfer. Example: staging/data
  • S3 Bucket Region: Specifies the geographical region where the S3 bucket is hosted. This affects data latency and compliance with regional data laws and is a mandatory field. Example: us-west-2
  • S3 Access Key: Specifies the secret key paired with the S3 Key ID; together they authenticate and grant access to Amazon S3 storage. This field is mandatory. Example: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  • S3 Filename Pattern: Specifies the naming convention or template for files stored in the S3 bucket. The pattern may include variables such as date, time, or specific identifiers to organize and retrieve files efficiently. Example: data_{date}_{time}.csv
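As a hedged illustration of how a filename pattern such as data_{date}_{time}.csv might expand at run time (the placeholder names and their date/time formats below are assumptions for this sketch, not the product's documented behavior):

```python
from datetime import datetime


def expand_filename_pattern(pattern: str, when: datetime) -> str:
    """Expand {date} and {time} placeholders in an S3 filename pattern.

    Note: the placeholder names and formats here are illustrative
    assumptions, not the product's documented behavior.
    """
    return pattern.format(
        date=when.strftime("%Y%m%d"),
        time=when.strftime("%H%M%S"),
    )


name = expand_filename_pattern("data_{date}_{time}.csv", datetime(2025, 1, 15, 9, 30, 0))
print(name)  # data_20250115_093000.csv
```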