Search at Point

Data Integrity Suite product guide (en-US)
© 2000–2025 Precisely
Type: Geospatial step

The Search at Point step performs a search against a spatial dataset, allowing you to enrich your data with additional attributes. This step is currently supported only for the Snowflake connection.

Search at Point takes either a point geometry or longitude (X) and latitude (Y) coordinates from your input data and identifies the geographic area (or "polygon") in which the point resides. In the step, you can choose from the available spatial datasets in the Precisely Data Suite.

Unlike the Enrich step, which requires the input data to have a Precisely ID for lookup, the Search at Point step can perform a search using only a point or longitude (X) and latitude (Y) coordinates. Therefore, Search at Point is useful in cases where the data is not associated with an address and does not have a Precisely ID but still needs to be searched against a spatial dataset.
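Conceptually, the lookup described above is a point-in-polygon test. The sketch below illustrates that idea with a plain ray-casting implementation; the function name and sample polygon are hypothetical, and the actual step runs its search against Precisely spatial datasets inside Snowflake rather than in local code.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: return True if point (x, y) lies inside the
    polygon given as a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    j = n - 1
    for i in range(n):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Count crossings of a horizontal ray extending right from the point:
        # an odd number of crossings means the point is inside.
        if (yi > y) != (yj > y):
            x_cross = (xj - xi) * (y - yi) / (yj - yi) + xi
            if x < x_cross:
                inside = not inside
        j = i
    return inside

# A hypothetical polygon (for example, a risk zone) and one input point.
zone = [(-74.0, 40.7), (-73.9, 40.7), (-73.9, 40.8), (-74.0, 40.8)]
print(point_in_polygon(-73.95, 40.75, zone))  # prints True
```

When the point falls inside a polygon, the step can return that polygon's attributes as enrichment fields for the input record.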

Search at Point requires a data subscription to process the search. When you add the Search at Point step to a pipeline, a Needs Data Subscription message appears in the step configuration panel.

You can use a dataset and preview the selected fields while designing a pipeline. However, to run the pipeline, you must have a data subscription. For more information, see About entitlements and subscriptions.

The pipeline job fails to run if the Search at Point step includes fields from an unlicensed dataset. Contact your Precisely support representative to purchase and configure shares for data entitlements.

Important: To subscribe to data or to create data shares in the Data Integrity Suite workspace, contact your Precisely support representative. Depending on the subscribed platform, customers can send an email to the Databricks Partnership (databricks.partnership@precisely.com) or to the Snowflake Partnership (snowflake.partnership@precisely.com) to provision subscribed data. For more information about how to view data shares in the Databricks environment, see Read data shared using Databricks-to-Databricks Delta Sharing (Databricks documentation). For more information about how to view data shares in the Snowflake environment, see Data Consumers (Snowflake documentation).
  1. Set up data share: Ensure you have set up a data share that contains the datasets you intend to use for data enrichment. This share should include all the relevant datasets required for the Enrich step.
  2. Create database or a catalog from a data share: For the first-time user, it's crucial to create a database (for Snowflake) or a catalog (for Databricks) from the data share within your workspace. This creates a database/catalog that includes the share within your workspace, making it accessible for future enrichment steps.
  3. Name your data share: While creating a database or catalog, you'll have the option to provide a name. This name helps you identify the specific dataset collection associated with the Enrich step.
  4. Specify the name in the pipeline engine: To run a pipeline in Snowflake or Databricks, specify the database or catalog name in the Enrich datasets database or Enrich datasets catalog field, respectively. These fields are part of pipeline engine creation and are marked as optional, but they are mandatory to run a Data Quality pipeline that includes the Enrich or Spatial Enrich steps.
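The one-time setup in step 2 can be sketched as follows. The share, provider, and database/catalog names below are hypothetical placeholders; the generated statements follow the documented Snowflake (CREATE DATABASE ... FROM SHARE) and Databricks (CREATE CATALOG ... USING SHARE) syntax and would be run in your own workspace.

```python
def create_from_share(platform, name, provider, share):
    """Return the SQL statement that materializes a data share as a
    database (Snowflake) or a catalog (Databricks)."""
    if platform == "snowflake":
        # Snowflake: create a database from a provider's share.
        return f"CREATE DATABASE {name} FROM SHARE {provider}.{share};"
    if platform == "databricks":
        # Databricks Delta Sharing: create a catalog from a share.
        return f"CREATE CATALOG {name} USING SHARE {provider}.{share};"
    raise ValueError(f"unsupported platform: {platform}")

# Hypothetical names; substitute your provider account and share.
print(create_from_share("snowflake", "PRECISELY_DATA",
                        "PROVIDER_ACCT", "SPATIAL_SHARE"))
```

The name you choose here ("PRECISELY_DATA" in this sketch) is the value you would then enter in the Enrich datasets database or Enrich datasets catalog field of the pipeline engine.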

Check Subscription Details Click this link to view data subscription entitlements and usage for your workspace. This option is visible to workspace administrators. Contact your Precisely support representative to purchase and configure shares for data entitlements.

Click Dismiss to close the subscription message, or click Learn More to view the subscription details.

Search at Point properties

In the Search at Point step configuration panel, you can provide either Geometry or Coordinates as input points. The step reads your sample input data, and if it finds a geospatial data type, the Geometry option is selected by default. If both data types are available in your sample input data, you can choose either Geometry or Coordinates for your operation. Based on your selection, the drop-down displays the filtered columns from the sample data.

Geometry - Click the drop-down to select the geometry column from your sample input data.

Coordinates - Click the respective drop-downs to select the Longitude (X) and Latitude (Y) columns from your sample input data.

Next, you need to select the dataset on which you want to run the Search at Point operation.

Click the folder icon to open the Choose Spatial Dataset & Fields dialog box.

The dialog box has three panels:

  • Dataset panel—This panel lists available datasets. Click a dataset to select from fields in that dataset.
  • Dataset (dataset-identifier) fields panel—This panel shows fields in a dataset. Select fields on this panel to add the corresponding data columns to the output schema for the step. Click the information icon to view details about a field: a pop-up window shows the dataset field name, its friendly name, and the data type, along with a brief description of the data the field returns.
  • Selected Fields panel—This panel shows the output field names for the selected fields, grouped by dataset.

    Click a dataset name to expand or collapse the list of fields that are currently selected for the dataset. To remove all selections for a dataset, click the ellipsis next to the dataset name, then click Remove. To remove a single dataset field, click the remove button next to the field name.

After you complete the field selections, click the Choose Dataset button to add the selected fields and return to the Search at Point step configuration panel. The panel shows the number of selected fields; these become the output fields for the step.

From the Options drop-down, you can limit the number of records returned in the output.

You can enter custom names for the Count and Index fields or keep the defaults. By default, "Count" and "Index" appear as the column names in the output.
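As a hypothetical sketch of how these two output fields typically relate, assume each input point fans out into one output row per matching record, with Index numbering the matches and Count giving the total per point. The function and field semantics below are illustrative assumptions, not the step's documented implementation.

```python
def attach_count_and_index(matches, count_name="Count", index_name="Index"):
    """matches: list of attribute dicts returned for one input point.
    Returns one output row per match, tagged with Index and Count
    columns (names are configurable, mirroring the step options)."""
    total = len(matches)
    return [
        {**row, index_name: i + 1, count_name: total}
        for i, row in enumerate(matches)
    ]

# Two hypothetical matching records for a single input point.
rows = attach_count_and_index([{"zone": "A"}, {"zone": "B"}])
print(rows)
# prints [{'zone': 'A', 'Index': 1, 'Count': 2}, {'zone': 'B', 'Index': 2, 'Count': 2}]
```

Passing different count_name and index_name values corresponds to entering custom names for the Count and Index fields in the step configuration.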

Preview Click this button to preview the result of the transformation.

Save Click this button to close settings and save changes to the transformation settings.

Cancel Click this button to close settings for this transformation without saving any changes.