For data to have value, it can’t be static – data must be available to users throughout an organization, for a variety of applications. But as data is used, it presents its own set of challenges. As data moves from a warehouse to other applications and data models, multiple versions of truth can be created. Similarly, any changes to data in the data warehouse can have downstream impacts, breaking existing data models and making data unusable.
These obstacles scale with an organization. Complex data ecosystems have multiple layers of views and many data versions, and teams of developers must work in concert: the larger the dining room, the more cooks in the kitchen. With high complexity and multiple dependencies, data transformation can quickly become a liability.
How can enterprise organizations maintain a vibrant data ecosystem while supporting data transformation at scale? To support data movement, agile organizations need a rapid yet powerful way to adapt data to new needs and changes.
As tools built to promote data governance and agility, Talend and dbt offer features that make them well suited for deployment at enterprise scale. On the front end of data management, Talend extracts and loads data from any data source with built-in data quality checks; on the back end, dbt efficiently transforms data in the cloud with reusable data models and automated testing that surfaces model dependencies before deployment.
Talend and dbt are particularly well suited for data management in the Snowflake Data Cloud. With 1 in 5 Snowflake customers using Talend for ingestion and integration, Talend is heavily relied on for efficiently moving clean, healthy data from any source system into the Snowflake Data Cloud.
With dbt, organizations can better understand, document and transform data in the Data Cloud. As a cloud-native application, dbt Cloud allows engineers to develop, test and deploy new models from within the Snowflake Data Cloud.
Additionally, dbt supports faster data model development with simple SQL statements, a language known and loved by the data community. Data models are deployed with confidence: dbt's automated tests run prior to data model execution, alerting developers to any changes that could break an existing model. Automatically generated documentation and dependency graphs make data lineage visible and easy to troubleshoot.
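To illustrate how simple a dbt model can be, here is a minimal sketch of a staging model. The model, source, and column names (`stg_customers`, the `raw` source, and its fields) are hypothetical, not taken from any specific project:

```sql
-- models/staging/stg_customers.sql (hypothetical model and source names)
-- A dbt model is just a SELECT statement; dbt materializes it
-- as a view or table in the warehouse and tracks its dependencies.
select
    id          as customer_id,
    first_name,
    last_name,
    created_at
from {{ source('raw', 'customers') }}
```

Because every model is declared this way, dbt can build the dependency graph automatically from the `source()` and `ref()` references, which is what powers the lineage documentation described above.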
With Talend and dbt, organizations can rely on data quality from source to analytics and advanced applications. Talend's Data Fabric combines data integration, preparation and stewardship tools that begin at the source system and extend through the persistent staging layer.
dbt amplifies Talend Data Fabric with schema tests on source data, extending data quality into the Snowflake Data Cloud. Data value testing helps prevent fan-outs and missed joins in staging models, while automated testing provides continuous integration on new pull requests. Developers can view SQL code inline to investigate data models.
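A fan-out check of the kind described can be written as a dbt singular test, which is an ordinary SQL file in the `tests/` directory: the test fails if the query returns any rows. The file and model names here (`assert_no_order_fanout`, `stg_orders`, `order_id`) are illustrative assumptions:

```sql
-- tests/assert_no_order_fanout.sql (hypothetical test and model names)
-- A dbt singular test: dbt runs this query and fails the test
-- if any rows come back. Here, a row is returned whenever a join
-- in stg_orders has fanned out and duplicated an order.
select
    order_id,
    count(*) as row_count
from {{ ref('stg_orders') }}
group by order_id
having count(*) > 1
```

Wired into a CI job, a test like this runs on every pull request, so a join that silently multiplies rows is caught before the model ever reaches production.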
Talend and dbt excel in enterprise environments. Talend's Cloud API Services give an enterprise data hub or data warehouse environment real-time or near real-time capabilities, while dbt's git-based version control allows teams to work collaboratively in shared repositories, where changes are tracked and visible.
Passerelle has optimized the connection between Talend and Snowflake with a Governed Dynamic Ingestion Framework that provides managed CDC, preliminary data cleansing and creation of data history, alongside an Audit and Control Framework that supports targeted troubleshooting for data inaccuracies.
Looking ahead, Passerelle engineers will use dbt cloud to develop data models for specific verticals, including financial services, healthcare and manufacturing.