Intelligent Object Connectors for Custom Data Sources

Admins can use Intelligent Object Connectors to integrate custom data sources without creating table definition files. This feature lets admins load custom files automatically through an Intelligent Object connector, then access the loaded data like any other view via the connector's database report schemas.

To use Intelligent Object Connectors for Custom Data Sources:

  1. Log into the NAC.

  2. Select Inbound Connectors from the Connectors section of the side menu.

  3. Select the New Connector button.

  4. Add connector details including display name and description.

  5. Select Custom (Intelligent Objects) as the connector type under Intelligent Objects.

  6. Create an Intelligent Object Configuration for each file you want to load by selecting Add.

  7. Enter the following information:

    • Name - Enter the name to use for the table in Nitro. Nitro automatically appends __c to the table name.
    • Delimiter - Select the type of delimiter used in the data file(s):
      • Pipe
      • Comma
      • Semi-colon
      • Tab
    • Source Filename Pattern - Enter the filename pattern that identifies which files to load to the table (a sample file is sketched after these steps)
    • Force all columns as String Data type - By default, Nitro determines the data type of each column. Select this option to force all columns of the source data to load as type String. If data types are not read correctly, this option allows the data to load consistently.
    • Data Lake Load Pattern - Select how newly loaded files update the table:
      • Full Truncate and Insert Data - Deletes the existing data in the table and replaces it with the newly loaded data files each time new files are loaded
      • Append Data - Appends all newly loaded files to the existing data in the table, provided the table metadata is unchanged. If the table metadata changes, a new table must be created.
  8. Select Save.
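
For illustration, the sketch below generates a pipe-delimited source file. The filename, columns, and the assumed Source Filename Pattern of sales_data* are hypothetical, not part of the product; with a configuration Name of sales_data, Nitro would create the table as sales_data__c.

```python
import csv

# Hypothetical example: a pipe-delimited source file whose name matches
# an assumed Source Filename Pattern of "sales_data*". Columns and
# values are illustrative only.
rows = [
    ["account_id", "product", "units_sold", "sale_date"],
    ["A-1001", "Cholecap", "12", "2024-01-15"],
    ["A-1002", "Restolar", "7", "2024-01-16"],
]

with open("sales_data_20240116.csv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="|")  # Pipe delimiter, per the connector config
    writer.writerows(rows)
```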

To start the metadata discovery and load process using FTP:

  1. Create a connector user in the NAC specific to this connector.

  2. Load data to S3 using SFTP (a sketch of this step follows the list).

  3. When the upload is complete, run the Data Lake Intelligent Load job.
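
As a minimal sketch of step 2, assuming an SFTP endpoint is provisioned for the connector, the snippet below uploads a local file with the paramiko library. The host, credentials, and remote path are placeholders; use the connector user created in step 1.

```python
import paramiko

# Placeholder connection details; substitute the SFTP endpoint and the
# connector user created in the NAC.
HOST = "sftp.example.com"
PORT = 22
USERNAME = "connector_user"
PASSWORD = "********"

transport = paramiko.Transport((HOST, PORT))
transport.connect(username=USERNAME, password=PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    # Upload the source file so it lands in S3 via the SFTP gateway.
    sftp.put("sales_data_20240116.csv", "/inbound/sales_data_20240116.csv")
finally:
    sftp.close()
    transport.close()
```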

To start the metadata discovery and load process using S3 Pull:

  1. Obtain the required AWS policy changes.

  2. Load data to S3 using S3 Pull (a sketch of this step follows the list).

  3. When the upload is complete, run the Data Lake Intelligent Load job.
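
As a minimal sketch of step 2, assuming write access to the S3 bucket configured for S3 Pull, the snippet below stages a local file with boto3. The bucket name and key are placeholders.

```python
import boto3

# Placeholder bucket and key; substitute the S3 location configured for
# this connector's S3 Pull.
s3 = boto3.client("s3")
s3.upload_file(
    Filename="sales_data_20240116.csv",
    Bucket="example-nitro-inbound",
    Key="custom/sales_data_20240116.csv",
)
```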

Once the job completes successfully, users can log into the NAC to view the created column definitions. To access the data, users must view the Report Current or Report History schema using a BI tool such as Nitro Explorer.
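
As a minimal sketch, assuming the report schemas are reachable through a PostgreSQL-compatible driver with credentials provisioned for BI access, the snippet below queries a loaded table from the Report Current schema. The connection details and the schema/table names (report_current, sales_data__c) are assumptions for illustration; note the __c suffix Nitro appends automatically.

```python
import psycopg2

# Placeholder connection details; substitute the endpoint and
# credentials provisioned for report access.
conn = psycopg2.connect(
    host="nitro.example.com",
    port=5439,
    dbname="nitro",
    user="report_user",
    password="********",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT * FROM report_current.sales_data__c LIMIT 10;")
    for row in cur.fetchall():
        print(row)
conn.close()
```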