Loading Data into MyInsights

Creating a Job to Populate Data

Creating a metadata package requires jobs that populate the data from the source tables into the tables in the MyInsights layer.

  1. Create tasks in the tasks directory of the package to create the tables/views and populate the data from the source tables. The structure of the physical tables/views should match the metadata of the tables/views.
  2. Create a task sequence in the taskSequences directory of the package to invoke the tasks in the previous step.
  3. Create a job in the jobs directory of the package to invoke the task sequence in the previous step.
  4. Zip the tasks, taskSequences, and jobs sub-directories (a sample layout is sketched below). This zipped package is what is deployed to the connector.
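
For orientation, a package built by these steps might look like the following sketch. The individual file names are illustrative assumptions, not required names; only the tasks, taskSequences, and jobs sub-directories come from the steps above.

my_package/
  tasks/
    tk_create_fact_trx_trend.sql
    tk_load_fact_trx_trend.sql
  taskSequences/
    ts_fact_trx_trend.yml
  jobs/
    job_fact_trx_trend.yml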

Creating a Job to Load Data to the Cache Server

The Nitro data sync relies on the Nitro cache server to handle access to the data. Creating a metadata package also involves jobs that load the data from the tables in the MyInsights layer to the cache server.

When loading the MyInsights cache using the CRM Enhanced Presto Load job, if the data structure of the Nitro HTML table changes, the table in the Nitro Presto Cache layer also changes. Admins do not need to drop and redeploy the table in the Presto Cache.

  1. Create a task sequence in the taskSequences directory of the package to invoke the tasks that load the data to the cache server. Change the context parameters, but do not change the tasks section.
  • objectName - name of the object defined in the HTML (MyInsights) .yml file
  • tableName - name of the table defined in the HTML (MyInsights) .yml file
  • partitionKey - name of the partition key defined in the HTML (MyInsights) .yml file

Example

ts_fact_trx_trend_rs_hive.yml
context:
  objectName: fact_trx_trend_html__c
  tableName: fact_trx_trend__c
  partitionKey: crm_user_id__v
tasks:
  - tk_crm_sysdate.sqlv
  - tk_crm_purgedate.sqlv
  - tk_crm_cleanup_s3_redshift.script
  - name: tk_crm_cache_ddl_presto.sql
    platform: CACHEDB
  - name: tk_crm_cache_ddl_redshift.sql
    platform: CACHEDB
  - tk_crm_rs_to_s3_unld.sql
  - name: tk_crm_s3_to_hive_ins.sql
    platform: CACHEDB
  - name: tk_crm_presto_rename.sql
    platform: CACHEDB
  - tk_crm_cleanup_s3_hive.script
  2. Create a job in the jobs directory of the package to invoke the task sequence from the previous step.
  3. Zip the tasks, taskSequences, and jobs sub-directories. This zipped package is deployed to the connector.

Uploading the Metadata Package

Deploying the metadata package involves uploading the zipped file to the connector. This is done using the CLI client.

Run or schedule the defined jobs to populate the data and load the data to the Nitro cache server.

View Nitro data in a MyInsights visualization by creating a MyInsights report. MyInsights leverages the Veeva JavaScript library to query data specifically from Nitro.

There is a dedicated method to access the Nitro data:

  • queryVDSRecord(queryConfig) - Executes a query against Nitro data (see the sketch below)
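
As a minimal sketch of calling this method from a MyInsights report, assuming queryVDSRecord returns a Promise and accepts a queryConfig with object and fields keys (both assumptions; the object and field names are taken from the task sequence example above):

// Minimal sketch; the queryConfig keys below are assumptions, not a
// definitive contract -- check the MyInsights library reference.
var queryConfig = {
  object: "fact_trx_trend__c",   // Nitro table from the example above
  fields: ["crm_user_id__v"]     // columns to return (assumed key name)
};

queryVDSRecord(queryConfig)
  .then(function (records) {
    console.log(records);        // render the returned Nitro records
  })
  .catch(function (error) {
    console.error(error);        // the query failed
  });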

To use the labels of Nitro tables and columns, use the following methods:

  • getObjectLabels(Object) - Returns the labels for the list of tables as they exist in Nitro
  • getFieldLabels(queryConfig) - Returns the labels for the list of columns as they exist in Nitro
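
A similar sketch for the label methods, again assuming Promise-based returns and assuming the argument shapes shown (a list of table names for getObjectLabels, a queryConfig for getFieldLabels):

// Sketch only; the argument shapes passed below are assumptions.
getObjectLabels(["fact_trx_trend__c"])
  .then(function (objectLabels) {
    console.log(objectLabels);   // display labels for each table
  });

getFieldLabels({
  object: "fact_trx_trend__c",   // assumed queryConfig shape
  fields: ["crm_user_id__v"]
}).then(function (fieldLabels) {
  console.log(fieldLabels);      // display labels for each column
});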