What Integrates with Unity Catalog?
Find out what Unity Catalog integrations exist in 2025. Learn what software and services currently integrate with Unity Catalog, and sort them by reviews, cost, features, and more. Below is a list of products that Unity Catalog currently integrates with:
1. Google Cloud Platform (Google): Free ($300 in free credits), 55,888 Ratings
Google Cloud is an online service that lets you build everything from simple websites to complex applications for businesses of any size. New customers receive $300 in credits for testing, deploying, and running workloads, and more than 25 products can be used free of charge. Google's core data analytics and machine learning services are secure, fully featured, and available to enterprises of any size, helping you use big data to build better products and find answers faster. You can grow from prototypes to production and even to planet scale without worrying about reliability, capacity, or performance. The platform spans virtual machines with proven price/performance advantages, a fully managed app development platform, high-performance, scalable, resilient object storage and databases, software-defined networking over Google's private fibre network, and fully managed data warehousing, data exploration, Hadoop/Spark, and messaging.
2. Google Cloud BigQuery (Google): Free ($300 in free credits), 1,731 Ratings
BigQuery is a serverless, multicloud data warehouse that makes working with all types of data effortless, allowing you to focus on extracting valuable business insights quickly. As a central component of Google’s data cloud, it streamlines data integration, enables cost-effective and secure scaling of analytics, and offers built-in business intelligence for sharing detailed data insights. With a simple SQL interface, it also supports training and deploying machine learning models, helping to foster data-driven decision-making across your organization. Its robust performance ensures that businesses can handle increasing data volumes with minimal effort, scaling to meet the needs of growing enterprises. Gemini within BigQuery brings AI-powered tools that enhance collaboration and productivity, such as code recommendations, visual data preparation, and intelligent suggestions aimed at improving efficiency and lowering costs. The platform offers an all-in-one environment with SQL, a notebook, and a natural language-based canvas interface, catering to data professionals of all skill levels. This cohesive workspace simplifies the entire analytics journey, enabling teams to work faster and more efficiently.
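As a rough illustration of the SQL interface described above (not taken from the listing itself), the sketch below runs a query with the google-cloud-bigquery Python client. It assumes the package is installed, Application Default Credentials are configured, and uses a public sample dataset.

```python
# Minimal sketch: run a SQL query against BigQuery with the official client.
# Assumes Application Default Credentials and a project with BigQuery enabled.
from google.cloud import bigquery

client = bigquery.Client()  # picks up the default project and credentials

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""

for row in client.query(query).result():  # result() waits for the job to finish
    print(row["name"], row["total"])
```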
3. Snowflake (Snowflake)
Snowflake is a cloud-native data platform that combines data warehousing, data lakes, and data sharing into a single solution. By offering elastic scalability and automatic scaling, Snowflake enables businesses to handle vast amounts of data while maintaining high performance at low cost. The platform's architecture allows users to separate storage and compute, offering flexibility in managing workloads. Snowflake supports real-time data sharing and integrates seamlessly with other analytics tools, enabling teams to collaborate and gain insights from their data more efficiently. Its secure, multi-cloud architecture makes it a strong choice for enterprises looking to leverage data at scale.
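As a minimal sketch of connecting to Snowflake from Python (not part of the listing above), the example below uses the snowflake-connector-python package; every connection detail shown is a placeholder.

```python
# Minimal sketch: connect to Snowflake and run a query via the official
# Python connector. Account, user, warehouse, and database are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # e.g. "xy12345.us-east-1" (placeholder)
    user="my_user",
    password="my_password",
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_VERSION()")
    print(cur.fetchone()[0])
finally:
    conn.close()
```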
4. Salesforce (Salesforce)
Salesforce empowers sales teams to enhance their effectiveness and reach their objectives more efficiently. As the leading CRM platform globally, Salesforce goes beyond basic functionalities. It equips teams with essential tools designed to expand their client portfolios, discover new prospects, and finalize transactions swiftly from any location. The platform boasts an extensive array of features, such as managing contacts, tracking opportunities, overseeing leads, integrating emails, generating reports and dashboards, forecasting sales, and enabling file synchronization and sharing, among many others. This means you can spend less time sifting through spreadsheets and more time focusing on your business operations. With no need for additional hardware or software, our user-friendly setup assistant allows you to optimize sales processes and address customer inquiries in mere minutes. By linking data across sales, service, and marketing, you can provide buyers with smooth and personalized experiences. Gain a holistic understanding of each customer, including their accounts, activity history, and relationships. Moreover, integrating social data can offer even richer insights into your customers' behaviors and preferences.
5. MySQL (Oracle)
MySQL stands out as the most widely used open source database globally. Thanks to its established track record in performance, dependability, and user-friendliness, it has emerged as the preferred database for web applications, powering notable platforms such as Facebook, Twitter, and YouTube, alongside the top five websites. Furthermore, MySQL is also highly favored as an embedded database solution, being distributed by numerous independent software vendors and original equipment manufacturers. Its versatility and robust features contribute to its widespread adoption across various industries.
6. Azure Synapse Analytics (Microsoft): 1 Rating
Azure Synapse represents the advanced evolution of Azure SQL Data Warehouse. It is a comprehensive analytics service that integrates enterprise data warehousing with Big Data analytics capabilities. Users can query data flexibly, choosing between serverless or provisioned resources, and can do so at scale. By merging these two domains, Azure Synapse offers a cohesive experience for ingesting, preparing, managing, and delivering data, catering to the immediate requirements of business intelligence and machine learning applications. This integration enhances the efficiency and effectiveness of data-driven decision-making processes.
7. LangChain (LangChain)
LangChain provides a comprehensive framework that empowers developers to build and scale intelligent applications using large language models (LLMs). By integrating data and APIs, LangChain enables context-aware applications that can perform reasoning tasks. The suite includes LangGraph, a tool for orchestrating complex workflows, and LangSmith, a platform for monitoring and optimizing LLM-driven agents. LangChain supports the full lifecycle of LLM applications, offering tools to handle everything from initial design and deployment to post-launch performance management. Its flexibility makes it an ideal solution for businesses looking to enhance their applications with AI-powered reasoning and automation.
8. Apache Kafka (The Apache Software Foundation): 1 Rating
Apache Kafka® is a robust, open-source platform designed for distributed streaming. It can scale production environments to accommodate up to a thousand brokers, handling trillions of messages daily and managing petabytes of data with hundreds of thousands of partitions. The system allows for elastic growth and reduction of both storage and processing capabilities. Furthermore, it enables efficient cluster expansion across availability zones or facilitates the interconnection of distinct clusters across various geographic locations. Users can process event streams through features such as joins, aggregations, filters, transformations, and more, all while utilizing event-time and exactly-once processing guarantees. Kafka's built-in Connect interface seamlessly integrates with a wide range of event sources and sinks, including Postgres, JMS, Elasticsearch, AWS S3, among others. Additionally, developers can read, write, and manipulate event streams using a diverse selection of programming languages, enhancing the platform's versatility and accessibility. This extensive support for various integrations and programming environments makes Kafka a powerful tool for modern data architectures.
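As a small, hedged sketch of producing and consuming events from Python (the broker address and topic name are placeholders, not part of the listing), the example below uses the kafka-python client against a locally running cluster.

```python
# Minimal sketch: publish and read back a message with kafka-python.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", key=b"user-42", value=b'{"action": "click"}')
producer.flush()  # make sure the message is actually sent

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating when no new messages arrive
)
for message in consumer:
    print(message.key, message.value)
```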
9. Amazon Redshift (Amazon): $0.25 per hour
Amazon Redshift is the preferred choice among customers for cloud data warehousing, outpacing all competitors in popularity. It supports analytical tasks for a diverse range of organizations, from Fortune 500 companies to emerging startups, facilitating their evolution into large-scale enterprises, as evidenced by Lyft's growth. No other data warehouse simplifies the process of extracting insights from extensive datasets as effectively as Redshift. Users can perform queries on vast amounts of structured and semi-structured data across their operational databases, data lakes, and the data warehouse using standard SQL queries. Moreover, Redshift allows for the seamless saving of query results back to S3 data lakes in open formats like Apache Parquet, enabling further analysis through various analytics services, including Amazon EMR, Amazon Athena, and Amazon SageMaker. Recognized as the fastest cloud data warehouse globally, Redshift continues to enhance its performance year after year. For workloads that demand high performance, the new RA3 instances provide up to three times the performance compared to any other cloud data warehouse available today, ensuring businesses can operate at peak efficiency. This combination of speed and user-friendly features makes Redshift a compelling choice for organizations of all sizes.
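As a rough sketch of querying Redshift with standard SQL from Python (assuming the redshift_connector driver; the cluster endpoint, credentials, and table name below are placeholders), one might write:

```python
# Minimal sketch: run a SQL query against a Redshift cluster.
# All connection details and the "sales" table are placeholders.
import redshift_connector

conn = redshift_connector.connect(
    host="examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
    database="dev",
    port=5439,
    user="awsuser",
    password="my_password",
)
cursor = conn.cursor()
cursor.execute("SELECT region, SUM(amount) FROM sales GROUP BY region LIMIT 10")
for row in cursor.fetchall():
    print(row)
conn.close()
```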
10. Microsoft Fabric (Microsoft): $156.334/month (2 CU)
Connecting every data source with analytics services on a single AI-powered platform transforms how people access, manage, and act on data and insights: all your data and all your teams, in one place. Create an open, lake-centric hub that helps data engineers connect and curate data from various sources, eliminating sprawl and creating custom views for everyone. Accelerate analysis by developing AI models without moving data, reducing the time data scientists need to deliver value. Familiar tools such as Microsoft Excel and Microsoft Teams help your team innovate faster. Connect people and data responsibly with an open, scalable solution that gives data stewards more control through built-in security, compliance, and governance.
11. Trino (Trino): Free
Trino is a high-performance, distributed SQL query engine built for big data analytics, enabling users to explore their vast data environments. Constructed for optimal efficiency, Trino excels in low-latency analytics and is extensively utilized by some of the largest enterprises globally to perform queries on exabyte-scale data lakes and enormous data warehouses. It accommodates a variety of scenarios, including interactive ad-hoc analytics, extensive batch queries spanning several hours, and high-throughput applications that require rapid sub-second query responses. Trino adheres to ANSI SQL standards, making it compatible with popular business intelligence tools like R, Tableau, Power BI, and Superset. Moreover, it allows direct querying of data from various sources such as Hadoop, S3, Cassandra, and MySQL, eliminating the need for cumbersome, time-consuming, and error-prone data copying processes. This capability empowers users to access and analyze data from multiple systems seamlessly within a single query. Such versatility makes Trino a powerful asset in today's data-driven landscape.
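As a hedged sketch of the single-query, multi-source idea described above (the cluster host, catalogs, and table names are placeholders), the example below uses the trino Python client:

```python
# Minimal sketch: join tables from two different catalogs in one Trino query.
# Host, user, catalogs, and table names are placeholders for an existing cluster.
from trino.dbapi import connect

conn = connect(
    host="trino.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()
cur.execute("""
    SELECT o.order_id, c.customer_name
    FROM hive.default.orders AS o
    JOIN mysql.crm.customers AS c ON o.customer_id = c.customer_id
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```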
12. Azure SQL Database (Microsoft): $0.5218 per vCore-hour
Azure SQL Database, a member of the Azure SQL suite, is a sophisticated and adaptable relational database service designed specifically for cloud environments. It is continuously updated, ensuring you benefit from the latest advancements, including AI-driven features that enhance both performance and reliability. With serverless computing and Hyperscale storage options, resources can effortlessly adjust according to your needs, allowing you to concentrate on creating innovative applications without the stress of managing storage or resources. This fully managed SQL database simplifies the challenges of ensuring high availability, performing tuning, handling backups, and executing other essential database management tasks. You can expedite your application development on the unique cloud platform that offers evergreen SQL, utilizing up-to-date SQL Server features while remaining free from concerns about updates, upgrades, or the end of support. Customize your modern app development experience with both provisioned and serverless compute choices, ensuring flexibility and efficiency tailored to your specific needs. This way, you can unleash your creativity while relying on a robust foundation.
13. LanceDB (LanceDB): $16.03 per month
LanceDB is an accessible, open-source database specifically designed for AI development. It offers features such as hyperscalable vector search and sophisticated retrieval capabilities for Retrieval-Augmented Generation (RAG), along with support for streaming training data and the interactive analysis of extensive AI datasets, making it an ideal foundation for AI applications. The installation process takes only seconds, and it integrates effortlessly into your current data and AI toolchain. As an embedded database—similar to SQLite or DuckDB—LanceDB supports native object storage integration, allowing it to be deployed in various environments and efficiently scale to zero when inactive. Whether for quick prototyping or large-scale production, LanceDB provides exceptional speed for search, analytics, and training involving multimodal AI data. Notably, prominent AI companies have indexed vast numbers of vectors and extensive volumes of text, images, and videos at a significantly lower cost compared to other vector databases. Beyond mere embedding, it allows for filtering, selection, and streaming of training data directly from object storage, thereby ensuring optimal GPU utilization for enhanced performance. This versatility makes LanceDB a powerful tool in the evolving landscape of artificial intelligence.
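As a minimal sketch of the embedded vector-search workflow (the table name, schema, and toy vectors below are purely illustrative), one might use the lancedb Python package like this:

```python
# Minimal sketch: create a local LanceDB table and run a nearest-neighbour search.
import lancedb

db = lancedb.connect("./my_lancedb")  # a local directory; object storage URIs also work

table = db.create_table(
    "docs",
    data=[
        {"vector": [0.1, 0.2, 0.3, 0.4], "text": "Unity Catalog overview"},
        {"vector": [0.9, 0.8, 0.7, 0.6], "text": "Quarterly sales report"},
    ],
)

# Search for the vectors closest to a query embedding.
results = table.search([0.1, 0.2, 0.3, 0.35]).limit(1).to_list()
print(results[0]["text"])
```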
14. PuppyGraph (PuppyGraph): Free
PuppyGraph allows you to effortlessly query one or multiple data sources through a cohesive graph model. Traditional graph databases can be costly, require extensive setup time, and necessitate a specialized team to maintain. They often take hours to execute multi-hop queries and encounter difficulties when managing datasets larger than 100GB. Having a separate graph database can complicate your overall architecture due to fragile ETL processes, ultimately leading to increased total cost of ownership (TCO). With PuppyGraph, you can connect to any data source, regardless of its location, enabling cross-cloud and cross-region graph analytics without the need for intricate ETLs or data duplication. By directly linking to your data warehouses and lakes, PuppyGraph allows you to query your data as a graph without the burden of constructing and maintaining lengthy ETL pipelines typical of conventional graph database configurations. There's no longer a need to deal with delays in data access or unreliable ETL operations. Additionally, PuppyGraph resolves scalability challenges associated with graphs by decoupling computation from storage, allowing for more efficient data handling. This innovative approach not only enhances performance but also simplifies your data management strategy.
15. StarRocks (StarRocks): Free
Regardless of whether your project involves a single table or numerous tables, StarRocks guarantees an impressive performance improvement of at least 300% when compared to other widely used solutions. With its comprehensive array of connectors, you can seamlessly ingest streaming data and capture information in real time, ensuring that you always have access to the latest insights. The query engine is tailored to suit your specific use cases, allowing for adaptable analytics without the need to relocate data or modify SQL queries. This provides an effortless way to scale your analytics capabilities as required. StarRocks not only facilitates a swift transition from data to actionable insights, but also stands out with its unmatched performance, offering a holistic OLAP solution that addresses the most prevalent data analytics requirements. Its advanced memory-and-disk-based caching framework is purpose-built to reduce I/O overhead associated with retrieving data from external storage, significantly enhancing query performance while maintaining efficiency. This unique combination of features ensures that users can maximize their data's potential without unnecessary delays.
16. AWS Glue (Amazon)
AWS Glue is a fully managed data integration solution that simplifies the process of discovering, preparing, and merging data for purposes such as analytics, machine learning, and application development. By offering all the necessary tools for data integration, AWS Glue enables users to begin analyzing their data and leveraging it for insights within minutes rather than taking months. The concept of data integration encompasses various activities like identifying and extracting data from multiple sources, enhancing, cleaning, normalizing, and consolidating that data, as well as organizing and loading it into databases, data warehouses, and data lakes. Different users, each utilizing various tools, often manage these tasks. Operating within a serverless environment, AWS Glue eliminates the need for infrastructure management, automatically provisioning, configuring, and scaling the resources essential for executing data integration jobs. This efficiency allows organizations to focus more on data-driven decision-making without the overhead of manual resource management.
17. Starburst Enterprise (Starburst Data)
Starburst empowers organizations to enhance their decision-making capabilities by providing rapid access to all their data without the hassle of transferring or duplicating it. As companies accumulate vast amounts of data, their analysis teams often find themselves waiting for access to perform their evaluations. By facilitating direct access to data at its source, Starburst ensures that teams can quickly and accurately analyze larger datasets without the need for data movement. Starburst Enterprise offers a robust, enterprise-grade version of the open-source Trino (formerly known as Presto® SQL), which is fully supported and tested for production use. This solution not only boosts performance and security but also simplifies the deployment, connection, and management of a Trino environment. By enabling connections to any data source—be it on-premises, in the cloud, or within a hybrid cloud setup—Starburst allows teams to utilize their preferred analytics tools while seamlessly accessing data stored in various locations. This innovative approach significantly reduces the time taken for insights, helping businesses stay competitive in a data-driven world.
18. Tecton (Tecton)
Deploy machine learning applications in just minutes instead of taking months. Streamline the conversion of raw data, create training datasets, and deliver features for scalable online inference effortlessly. By replacing custom data pipelines with reliable automated pipelines, you can save significant time and effort. Boost your team's productivity by enabling the sharing of features across the organization while standardizing all your machine learning data workflows within a single platform. With the ability to serve features at massive scale, you can trust that your systems will remain operational consistently. Tecton adheres to rigorous security and compliance standards. Importantly, Tecton is not a database or a processing engine; instead, it integrates seamlessly with your current storage and processing systems, enhancing their orchestration capabilities. This integration allows for greater flexibility and efficiency in managing your machine learning processes.
19. Databricks Data Intelligence Platform (Databricks)
The Databricks Data Intelligence Platform empowers every member of your organization to leverage data and artificial intelligence effectively. Constructed on a lakehouse architecture, it establishes a cohesive and transparent foundation for all aspects of data management and governance, enhanced by a Data Intelligence Engine that recognizes the distinct characteristics of your data. Companies that excel across various sectors will be those that harness the power of data and AI. Covering everything from ETL processes to data warehousing and generative AI, Databricks facilitates the streamlining and acceleration of your data and AI objectives. By merging generative AI with the integrative advantages of a lakehouse, Databricks fuels a Data Intelligence Engine that comprehends the specific semantics of your data. This functionality enables the platform to optimize performance automatically and manage infrastructure in a manner tailored to your organization's needs. Additionally, the Data Intelligence Engine is designed to grasp the unique language of your enterprise, making the search and exploration of new data as straightforward as posing a question to a colleague, thus fostering collaboration and efficiency. Ultimately, this innovative approach transforms the way organizations interact with their data, driving better decision-making and insights.
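Since Unity Catalog is the governance layer of this platform, a hedged sketch of querying a Unity Catalog table by its three-level catalog.schema.table name may help; it assumes the databricks-sql-connector package and a SQL warehouse, and every hostname, path, token, and table name below is a placeholder.

```python
# Minimal sketch: query a Unity Catalog table over a Databricks SQL warehouse.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/abc123",                        # placeholder
    access_token="dapiXXXXXXXX",                                   # placeholder
) as connection:
    with connection.cursor() as cursor:
        # Unity Catalog tables are addressed as catalog.schema.table.
        cursor.execute("SELECT * FROM main.default.trips LIMIT 10")
        for row in cursor.fetchall():
            print(row)
```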
20. PostgreSQL (PostgreSQL Global Development Group)
PostgreSQL stands out as a highly capable, open-source object-relational database system that has been actively developed for more than three decades, earning a solid reputation for its reliability, extensive features, and impressive performance. Comprehensive resources for installation and usage are readily available in the official documentation, which serves as an invaluable guide for both new and experienced users. Additionally, the open-source community fosters numerous forums and platforms where individuals can learn about PostgreSQL, understand its functionalities, and explore job opportunities related to it. Engaging with this community can enhance your knowledge and connection to the PostgreSQL ecosystem. Recently, the PostgreSQL Global Development Group announced updates for all supported versions, including 15.1, 14.6, 13.9, 12.13, 11.18, and 10.23, which address 25 reported bugs from the past few months. Notably, this marks the final release for PostgreSQL 10, meaning that it will no longer receive any security patches or bug fixes going forward. Therefore, if you are currently utilizing PostgreSQL 10 in your production environment, it is highly recommended that you plan to upgrade to a more recent version to ensure continued support and security. Upgrading will not only help maintain the integrity of your data but also allow you to take advantage of the latest features and improvements introduced in newer releases.
21. Apache Spark (Apache Software Foundation)
Apache Spark™ serves as a comprehensive analytics platform designed for large-scale data processing. It delivers exceptional performance for both batch and streaming data by employing an advanced Directed Acyclic Graph (DAG) scheduler, a sophisticated query optimizer, and a robust execution engine. With over 80 high-level operators available, Spark simplifies the development of parallel applications. Additionally, it supports interactive use through various shells including Scala, Python, R, and SQL. Spark supports a rich ecosystem of libraries such as SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming, allowing for seamless integration within a single application. It is compatible with various environments, including Hadoop, Apache Mesos, Kubernetes, and standalone setups, as well as cloud deployments. Furthermore, Spark can connect to a multitude of data sources, enabling access to data stored in systems like HDFS, Alluxio, Apache Cassandra, Apache HBase, and Apache Hive, among many others. This versatility makes Spark an invaluable tool for organizations looking to harness the power of large-scale data analytics.
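As a small sketch of the DataFrame API mentioned above (the CSV file and column names are placeholders), a local PySpark job might look like this:

```python
# Minimal sketch: aggregate a CSV with PySpark DataFrames on a local Spark session.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example").master("local[*]").getOrCreate()

df = spark.read.csv("sales.csv", header=True, inferSchema=True)  # placeholder file
(
    df.groupBy("region")
      .agg(F.sum("amount").alias("total_amount"))
      .orderBy(F.desc("total_amount"))
      .show()
)

spark.stop()
```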
22. MLflow (MLflow)
MLflow is an open-source suite designed to oversee the machine learning lifecycle, encompassing aspects such as experimentation, reproducibility, deployment, and a centralized model registry. The platform features four main components that facilitate various tasks: tracking and querying experiments encompassing code, data, configurations, and outcomes; packaging data science code to ensure reproducibility across multiple platforms; deploying machine learning models across various serving environments; and storing, annotating, discovering, and managing models in a unified repository. Among these, the MLflow Tracking component provides both an API and a user interface for logging essential aspects like parameters, code versions, metrics, and output files generated during the execution of machine learning tasks, enabling later visualization of results. It allows for logging and querying experiments through several interfaces, including Python, REST, R API, and Java API. Furthermore, an MLflow Project is a structured format for organizing data science code, ensuring it can be reused and reproduced easily, with a focus on established conventions. Additionally, the Projects component comes equipped with an API and command-line tools specifically designed for executing these projects effectively. Overall, MLflow streamlines the management of machine learning workflows, making it easier for teams to collaborate and iterate on their models.
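As a minimal sketch of the Tracking API described above (experiment and run names are arbitrary), a run that logs parameters, metrics, and a small artifact could look like this; by default MLflow records runs to a local ./mlruns directory.

```python
# Minimal sketch: log parameters, metrics, and an artifact with MLflow Tracking.
import mlflow

mlflow.set_experiment("demo-experiment")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 10)
    # Log a metric over several steps so it can be plotted later in the UI.
    for epoch, acc in enumerate([0.71, 0.78, 0.83]):
        mlflow.log_metric("accuracy", acc, step=epoch)
    # Attach a small JSON artifact to the run.
    mlflow.log_dict({"notes": "toy run"}, "run_notes.json")
```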
23. DuckDB (DuckDB)
Handling and storing tabular data, such as that found in CSV or Parquet formats, is essential for data management. Transferring large result sets to clients is a common requirement, especially in extensive client/server frameworks designed for centralized enterprise data warehousing. Additionally, writing to a single database from various simultaneous processes poses its own set of challenges. DuckDB serves as a relational database management system (RDBMS), which is a specialized system for overseeing data organized into relations. In this context, a relation refers to a table, characterized by a named collection of rows. Each row within a table maintains a consistent structure of named columns, with each column designated to hold a specific data type. Furthermore, tables are organized within schemas, and a complete database comprises a collection of these schemas, providing structured access to the stored data. This organization not only enhances data integrity but also facilitates efficient querying and reporting across diverse datasets.
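As a small sketch of working with tabular files in DuckDB (file names are placeholders), the example below queries a Parquet file in place and materializes the result into a local database file.

```python
# Minimal sketch: query a Parquet file directly and persist a table with duckdb.
import duckdb

con = duckdb.connect("analytics.duckdb")  # or duckdb.connect() for in-memory use

# DuckDB can query Parquet/CSV files in place, with no load step required.
top = con.execute(
    "SELECT category, COUNT(*) AS n FROM 'events.parquet' "
    "GROUP BY category ORDER BY n DESC"
).fetchall()
print(top)

# Materialize the aggregation as a table inside the database file.
con.execute(
    "CREATE TABLE category_counts AS "
    "SELECT category, COUNT(*) AS n FROM 'events.parquet' GROUP BY category"
)
con.close()
```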
24. LlamaIndex (LlamaIndex)
LlamaIndex serves as a versatile "data framework" designed to assist in the development of applications powered by large language models (LLMs). It enables the integration of semi-structured data from various APIs, including Slack, Salesforce, and Notion. This straightforward yet adaptable framework facilitates the connection of custom data sources to LLMs, enhancing the capabilities of your applications with essential data tools. By linking your existing data formats—such as APIs, PDFs, documents, and SQL databases—you can effectively utilize them within your LLM applications. Furthermore, you can store and index your data for various applications, ensuring seamless integration with downstream vector storage and database services. LlamaIndex also offers a query interface that allows users to input any prompt related to their data, yielding responses that are enriched with knowledge. It allows for the connection of unstructured data sources, including documents, raw text files, PDFs, videos, and images, while also making it simple to incorporate structured data from sources like Excel or SQL. Additionally, LlamaIndex provides methods for organizing your data through indices and graphs, making it more accessible for use with LLMs, thereby enhancing the overall user experience and expanding the potential applications.
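As a hedged sketch of the index-and-query flow described above (assuming a recent llama-index release with the llama_index.core layout, an OPENAI_API_KEY in the environment for the default LLM, and a ./data directory of documents), the core loop might look like this:

```python
# Minimal sketch: build a vector index over local documents and query it.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # placeholder directory
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("What do these documents say about Unity Catalog?")
print(response)
```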
25. Unstructured (Unstructured)
Approximately 80% of corporate data is stored in challenging formats such as HTML, PDF, CSV, PNG, and PPTX, among others. Unstructured simplifies the extraction and transformation of intricate data to be compatible with all leading vector databases and LLM frameworks. This platform enables data scientists to preprocess data efficiently at scale, allowing them to allocate more time to modeling and analysis rather than data collection and cleaning. With our enterprise-grade connectors, we can gather data from various sources and convert it into AI-friendly JSON files, making it easier for organizations to integrate AI into their operations. Rely on Unstructured to provide meticulously curated data that is clean of any artifacts and, crucially, ready for use with LLMs. In doing so, we empower businesses to harness the full potential of their data for innovative applications.
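As a small, hedged sketch of partitioning a document with the open-source unstructured library (the file path is a placeholder), one might do:

```python
# Minimal sketch: split a PDF into typed elements with the unstructured library.
from unstructured.partition.auto import partition

elements = partition(filename="quarterly_report.pdf")  # placeholder file
for element in elements[:5]:
    # Each element carries a type (Title, NarrativeText, Table, ...) and text.
    print(type(element).__name__, "-", str(element)[:80])
```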
26. Daft (Daft)
Daft is an advanced framework designed for ETL, analytics, and machine learning/artificial intelligence at scale, providing an intuitive Python dataframe API that surpasses Spark in both performance and user-friendliness. It integrates seamlessly with your ML/AI infrastructure through efficient zero-copy connections to essential Python libraries like Pytorch and Ray, and it enables the allocation of GPUs for model execution. Operating on a lightweight multithreaded backend, Daft starts by running locally, but when the capabilities of your machine are exceeded, it effortlessly transitions to an out-of-core setup on a distributed cluster. Additionally, Daft supports User-Defined Functions (UDFs) in columns, enabling the execution of intricate expressions and operations on Python objects with the necessary flexibility for advanced ML/AI tasks. Its ability to scale and adapt makes it a versatile choice for data processing and analysis in various environments.
27. DataNimbus (DataNimbus)
DataNimbus, an AI-powered platform, streamlines payments and accelerates AI implementation through innovative solutions. DataNimbus improves scalability and governance by seamlessly integrating Databricks components such as Spark, Unity Catalog and ML Ops. Its offerings include a designer, a marketplace of reusable connectors and blocks for machine learning, and agile APIs. All are designed to simplify workflows while driving data-driven innovation.