Bring Snowflake data to Adaptive
There are 3 ways to bring data from Snowflake into Adaptive:
- Use Adaptive's native integration module >>> that's what I will cover in this post <<<
- Use third-party integration platforms such as Tray.ai, Workato, Boomi, etc.
- Host and write your own script (in Python, PowerShell, C#, you name it!)
This article focuses on point 1: Adaptive's integration module.
Within Adaptive's native integration module, there are also 3 possible methods! I will explain the pros and cons of each and give more details on the easiest one. From within Adaptive, you can connect to Snowflake by setting up one of these data sources:
- JDBC
- ETL (Scripted Data Source)
- CCDS (Custom Cloud Data Source) >>> preferred method, explained below <<<
My preferred method is the CCDS using AWS S3, since Snowflake has a super easy native integration with S3, and Adaptive has a dedicated library to connect to S3.
Preferred Adaptive to Snowflake integration: CCDS + AWS S3
Adaptive will fetch your Snowflake data from a file on AWS S3 (or an SFTP); S3 is preferred due to the ease of integration.
I will provide links below to the documentation on the Snowflake <> AWS S3 integration, so keep reading.
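To give you an idea of how simple the Snowflake side is, here is a minimal sketch of the one-time setup: a storage integration plus an external stage pointing at the S3 bucket Adaptive will read from. The integration name, stage name, bucket, and IAM role ARN below are placeholders of my own, not values from this post; the Snowflake documentation linked below covers the matching AWS IAM policy.

```sql
-- One-time Snowflake-side setup (all names, the bucket, and the IAM role ARN are placeholders).
-- The storage integration delegates authentication to an AWS IAM role,
-- so no AWS access keys ever need to be stored in Snowflake.
CREATE STORAGE INTEGRATION s3_adaptive_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake-adaptive-unload'
  STORAGE_ALLOWED_LOCATIONS = ('s3://my-adaptive-bucket/exports/');

-- External stage the exports will be written to; Adaptive's CCDS script
-- will later read the CSV file from this same S3 prefix.
CREATE STAGE adaptive_export_stage
  URL = 's3://my-adaptive-bucket/exports/'
  STORAGE_INTEGRATION = s3_adaptive_int
  FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"');
```

Once the stage is in place, anything Snowflake unloads to that S3 prefix is just a CSV file sitting in a bucket, which is exactly what Adaptive's S3 library expects to pick up.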
❗As of today, you can't make HTTPS web requests from Adaptive's CCDS directly to Snowflake: Adaptive's HTTPS library is too limited to work with Snowflake, and you can't add external JavaScript libraries. But maybe that will come down the road?
Workflow
Your Snowflake/Data team writes a small script that triggers the queries, generates a CSV output file, and saves it to AWS S3 (on a scheduled basis, for example; see the sketch below). Then, on Adaptive's end, a simple JavaScript fetches the file and pushes the data into your sheets!
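To make that concrete, here is a hedged sketch of what that "small script" could look like in plain Snowflake SQL, assuming the adaptive_export_stage from the setup sketch above. The query, table, file name, warehouse, and cron schedule are placeholders to adapt to your own model.

```sql
-- Unload the query result as a single CSV file onto the S3 stage.
-- The query, column list, and file name are placeholders for your own model.
COPY INTO @adaptive_export_stage/opex_actuals.csv
FROM (
    SELECT account_code, department_code, period, amount
    FROM analytics.finance.opex_actuals
)
FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' COMPRESSION = NONE)
HEADER = TRUE      -- keep column names so the Adaptive script can map them
SINGLE = TRUE      -- one file instead of partitioned chunks
OVERWRITE = TRUE;  -- replace the previous run's file

-- Optional: schedule the unload natively with a Snowflake task
-- (warehouse name and cron expression are placeholders).
CREATE OR REPLACE TASK export_opex_to_s3
  WAREHOUSE = reporting_wh
  SCHEDULE = 'USING CRON 0 6 * * * UTC'
AS
  COPY INTO @adaptive_export_stage/opex_actuals.csv
  FROM (SELECT account_code, department_code, period, amount
        FROM analytics.finance.opex_actuals)
  FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' COMPRESSION = NONE)
  HEADER = TRUE SINGLE = TRUE OVERWRITE = TRUE;

-- Tasks are created suspended; resume it to activate the schedule.
ALTER TASK export_opex_to_s3 RESUME;
```

You could just as well run the COPY INTO from an orchestrator your data team already uses; the point is simply that a fresh CSV lands in S3 on a schedule for Adaptive to fetch.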
Snowflake's documentation
Other integrations within Adaptive, pros and cons
| Data source | Requires Agent? | Requires Pentaho Kettle? | Requires JavaScript? | Comments |
|---|---|---|---|---|
| JDBC | Yes 🔴 | No 🟢 | No 🟢 | Requires installing the data agent (more info below) and its drivers to manage the connection. It's a bit of an old way of doing things, and sometimes the Agent service may stop running! Definitely not my favorite! |
| ETL | Yes 🔴 | Yes 🔴 | No 🟢 | Requires installing the data agent + Pentaho (more info below). Snowflake deposits the data onto the server where the agent runs, and Pentaho transforms it for Adaptive. Definitely not recommended... plus I've heard rumors that this option may be retired! |
| CCDS | No 🟢 | No 🟢 | Yes 🔴 | Setup explained above. Minimal JavaScript in Adaptive to fetch the CSV file. Recommended and preferred method: simple and quick to set up. |
Data Agent and Pentaho
The Data Agent is a component of Adaptive's integration framework that runs on a server, extracting data from JDBC-compliant databases or custom data sources; it can also export data from Adaptive Planning. It requires a Windows server behind the customer's firewall and operates as a locally hosted service that manages access to on-premises apps and the connection to Adaptive Planning in the cloud.
The Agent Service, a Windows Service launching a Java application, manages the data agents by polling the Adaptive Planning Gateway, which communicates with the Agent Service through the firewall. Multiple data agents can be hosted by a single Agent Service instance.
The installation includes a Data Agent Service Manager for setup and configuration, and optionally Pentaho Kettle for ETL processes. If Kettle is used, the Agent Service manages Kettle ETL job runners in Java virtual machines, and Adaptive Planning provides plugins for integrating Kettle with the cloud. The Pentaho components are only needed if you run Kettle scripts; plain JDBC-compliant data sources don't require them.
Data agents are managed through the agent UI in the cloud, accessible via a web browser. The UI allows customers to install, provision, suspend, resume, and upgrade data agents, as well as monitor their status and version.