Integrate with Databricks in one click
The integration between the Databricks and Kensu platforms gives users enhanced data observability and metadata automation capabilities that complement Databricks Unity Catalog. It empowers data teams deploying Databricks jobs and notebooks to automatically harvest metadata, lineage, and data metrics while the deployed Spark jobs execute, helping ensure data quality, performance, and compliance.
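As a rough illustration of what agent-based collection typically looks like on a Databricks cluster, the sketch below builds the Spark configuration entries that attach an observability listener to the cluster. The listener class name and the `spark.observability.*` keys are placeholders, not Kensu's documented settings; the exact values come from the Kensu documentation for your deployment.

```python
# Illustrative placeholders only: these are NOT Kensu's documented keys.
# On Databricks, settings like these usually go in the cluster's
# "Spark config" box or in the Jobs API payload, not in notebook code.
collector_spark_conf = {
    # A query-execution listener is a common way for an agent to observe
    # every read and write a Spark job performs (placeholder class name).
    "spark.sql.queryExecutionListeners": "com.example.observability.CollectorListener",
    # Where the agent should send harvested metadata, lineage, and metrics.
    "spark.observability.ingestion.url": "https://kensu.example.com/api",
    # Reference a Databricks secret scope instead of hard-coding credentials.
    "spark.observability.ingestion.token": "{{secrets/observability/token}}",
}

# Print the entries in the "key value" form expected by the cluster UI.
print("\n".join(f"{key} {value}" for key, value in collector_spark_conf.items()))
```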
Freeze Spark jobs automatically and avoid issue propagation
Kensu Circuit Breaker provides proactive protection against propagating inaccurate or incomplete data in your Databricks environment. It stops jobs as soon as an incident is detected, so you can resolve the issue before end users are impacted by faulty or inconsistent data.
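To make the fail-fast idea concrete, here is a minimal sketch of the pattern a circuit breaker implements: check whether an upstream data source has an open incident and abort before writing downstream. The endpoint, function, and parameters are hypothetical and are not Kensu's API; with the actual Circuit Breaker, the platform performs this check and stops the job for you.

```python
# Illustrative fail-fast pattern only; the API endpoint and payload shape
# below are hypothetical, not Kensu's documented interface.
import requests


def upstream_has_open_incident(datasource: str, api_url: str, token: str) -> bool:
    """Ask a (hypothetical) observability API whether the given data source
    currently has an unresolved incident, e.g. a failed data-quality rule."""
    resp = requests.get(
        f"{api_url}/incidents",
        params={"datasource": datasource, "status": "open"},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return len(resp.json().get("incidents", [])) > 0


if upstream_has_open_incident(
    "s3://lake/silver/orders", "https://kensu.example.com/api", "<token>"
):
    # Stop the job before faulty data reaches downstream consumers.
    raise RuntimeError("Circuit breaker: open incident on upstream source; aborting write.")
```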
Understand what is happening in your Databricks environment
Kensu streamlines the collection of metadata and lineage information during Spark job execution, eliminating the manual effort and potential human errors associated with capturing these critical insights.
The solution seamlessly interacts with all data source formats, whether internal or external, for read and write operations. It automatically computes metrics and enriches visibility into any format, such as CSV, Kafka, Parquet, Delta tables, and more.
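The point is that jobs need no Kensu-specific code: once the agent is attached to the cluster, an ordinary PySpark job like the sketch below would have its reads, writes, lineage, and column metrics observed automatically. The paths, column names, and schema here are examples, not part of any real pipeline.

```python
# A plain PySpark job; with an observability agent attached to the cluster,
# reads and writes like these are harvested automatically, with no
# Kensu-specific code in the notebook. Paths and columns are examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_metrics").getOrCreate()

# Read: the agent would record the schema, row count, and column-level
# metrics of this CSV input.
orders = spark.read.option("header", True).csv("/mnt/raw/orders/2024-06-01.csv")

daily = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(
        F.sum(F.col("amount").cast("double")).alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Write: the Delta output, its metrics, and the lineage back to the CSV
# input would be captured during this action.
daily.write.format("delta").mode("overwrite").save("/mnt/curated/orders_daily")
```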
Dig in, analyze the issue, and cut troubleshooting time in half
Kensu observations illuminate the backstory behind the problem for every affected data source and provide Databricks users with all the contextual insights (e.g., comprehensive metadata, lineage, and data metrics) they need to expedite the root cause analysis process.
“Databricks is partnering with Kensu's Enterprise Data Observability platform to streamline data teams' efficiency by swiftly detecting root causes of data incidents. Healthy data is crucial for our users, and Kensu is aligned on this mission to bring trustworthy data for reliable decision-making.”
Roger Murff
VP Partners, Databricks