What Integrates with Kedro?
Find out which Kedro integrations exist in 2025. Learn what software and services currently integrate with Kedro, and compare them by reviews, cost, and features. Below is a list of products that Kedro currently integrates with:
1. Vertex AI
Google
Free ($300 in free credits) · 727 ratings
Fully managed ML tools allow you to build, deploy, and scale machine-learning (ML) models quickly, for any use case. Vertex AI Workbench is natively integrated with BigQuery, Dataproc, and Spark. You can create and execute machine-learning models in BigQuery using standard SQL queries and spreadsheets, or you can export datasets directly from BigQuery into Vertex AI Workbench and run your models there. Vertex Data Labeling can be used to create highly accurate labels for data collection. Vertex AI Agent Builder empowers developers to design and deploy advanced generative AI applications for enterprise use. It supports both no-code and code-driven development, enabling users to create AI agents through natural language prompts or by integrating with frameworks like LangChain and LlamaIndex.
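To make the BigQuery integration concrete, here is a minimal sketch that trains a BigQuery ML model with standard SQL from Python; the project, dataset, table, and column names are hypothetical placeholders.
```python
# Minimal sketch: training a BigQuery ML model with standard SQL from Python.
# Assumes google-cloud-bigquery is installed and credentials are configured;
# "my-project", the "demo" dataset, and the "events" table are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

sql = """
CREATE OR REPLACE MODEL `my-project.demo.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT churned, tenure_months, monthly_spend
FROM `my-project.demo.events`
"""

client.query(sql).result()  # blocks until the training query finishes
print("Model trained; it can now be queried with ML.PREDICT in SQL.")
```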
2. Docker
Docker, Inc.
Docker streamlines tedious configuration processes and is utilized across the entire development lifecycle, facilitating swift, simple, and portable application creation on both desktop and cloud platforms. Its all-encompassing platform features user interfaces, command-line tools, application programming interfaces, and security measures designed to function cohesively throughout the application delivery process. Jumpstart your programming efforts by utilizing Docker images to craft your own distinct applications on both Windows and Mac systems. With Docker Compose, you can build multi-container applications effortlessly. Furthermore, it seamlessly integrates with tools you already use in your development workflow, such as VS Code, CircleCI, and GitHub. You can package your applications as portable container images, ensuring they operate uniformly across various environments, from on-premises Kubernetes to AWS ECS, Azure ACI, Google GKE, and beyond. Additionally, Docker provides access to trusted content, including official Docker images and those from verified publishers, ensuring quality and reliability in your application development journey. This versatility and integration make Docker an invaluable asset for developers aiming to enhance their productivity and efficiency.
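As a small illustration of driving Docker from a development workflow, here is a minimal sketch using the Docker SDK for Python; it assumes a local Docker daemon is running and uses the public python:3.11 image, with a placeholder command.
```python
# Minimal sketch using the Docker SDK for Python (pip install docker).
# Assumes a local Docker daemon; the command is an illustrative placeholder.
import docker

client = docker.from_env()

# Run a throwaway container and capture its output, mirroring
# `docker run --rm python:3.11 python -c "print('hello from a container')"`.
output = client.containers.run(
    "python:3.11",
    ["python", "-c", "print('hello from a container')"],
    remove=True,
)
print(output.decode())
```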
3. Jupyter Notebook
Project Jupyter
3 ratings
The Jupyter Notebook is a web-based open-source tool that enables users to create and distribute documents featuring live code, visualizations, equations, and written explanations. Its applications are diverse, encompassing tasks such as data cleaning and transformation, statistical modeling, numerical simulation, data visualization, and machine learning, showcasing its versatility across fields. It also serves as an excellent platform for collaboration and sharing insights within the data science community.
4. Python
Python Software Foundation
At the heart of extensible programming lies the definition of functions. Python supports mandatory and optional parameters, keyword arguments, and even arbitrary argument lists. Regardless of whether you're just starting out in programming or have years of experience, Python is accessible and straightforward to learn. The language is particularly welcoming for beginners, while still offering depth for those familiar with other programming environments. The official tutorial provides an excellent foundation for embarking on your Python programming journey. The vibrant community organizes numerous conferences and meetups for collaborative coding and sharing ideas, Python's extensive documentation serves as a valuable resource, and the mailing lists keep users connected. The Python Package Index (PyPI) features a vast array of third-party modules that enrich the Python experience. Between the standard library and community-contributed modules, Python opens the door to limitless programming possibilities, making it a versatile choice for developers of all levels.
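As a quick illustration of the parameter styles just mentioned, here is a minimal sketch showing mandatory and optional parameters, keyword arguments, and arbitrary argument lists; the function and its values are purely illustrative.
```python
# Minimal sketch of Python's parameter styles: "name" is mandatory,
# "greeting" is optional, *tags collects extra positional arguments,
# and **metadata collects extra keyword arguments.
def describe(name, greeting="Hello", *tags, **metadata):
    line = f"{greeting}, {name}!"
    if tags:
        line += " tags=" + ", ".join(tags)
    if metadata:
        line += f" metadata={metadata}"
    return line

print(describe("Kedro"))
print(describe("Kedro", "Hi", "pipelines", "ml", maintainer="core-team"))
```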
5. Plotly Dash
Plotly
2 ratings
Dash and Dash Enterprise let you build and deploy analytic web applications using Python, R, or Julia, with no JavaScript or DevOps required. The world's most successful companies use Dash to offer AI, ML, and Python analytics at a fraction of the cost of full-stack development. You can deliver apps and dashboards that run advanced analytics such as NLP, forecasting, and computer vision, working in Python, R, or Julia. Reduce costs by migrating legacy per-seat licensed software to Dash Enterprise's unlimited end-user pricing model, and deploy and update Dash apps faster without an IT or DevOps staff. You can create pixel-perfect web apps and dashboards without writing any CSS, scale easily with Kubernetes, and get high-availability support for mission-critical Python apps.
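To show what a minimal Dash app looks like in pure Python, here is a short sketch; it assumes Dash 2.x is installed, and the figure data and labels are arbitrary illustrative values.
```python
# Minimal sketch of a Dash app in pure Python (pip install dash).
# Assumes Dash 2.x; the accuracy numbers are made-up example data.
from dash import Dash, dcc, html
import plotly.express as px

app = Dash(__name__)

fig = px.bar(x=["train", "test"], y=[0.92, 0.88],
             labels={"x": "split", "y": "accuracy"})

app.layout = html.Div([
    html.H1("Model accuracy"),
    dcc.Graph(figure=fig),
])

if __name__ == "__main__":
    app.run(debug=True)  # serves the dashboard on http://127.0.0.1:8050
```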
6. pandas
pandas
Pandas is an open-source data analysis and manipulation tool that is not only fast and powerful but also highly flexible and user-friendly, all within the Python programming ecosystem. It provides various tools for importing and exporting data across different formats, including CSV, text files, Microsoft Excel, SQL databases, and the efficient HDF5 format. With its intelligent data alignment capabilities and integrated management of missing values, users benefit from automatic label-based alignment during computations, which simplifies the process of organizing disordered data. The library features a robust group-by engine that allows for sophisticated aggregating and transforming operations, enabling users to easily perform split-apply-combine actions on their datasets. Additionally, pandas offers extensive time series functionality, including the ability to generate date ranges, convert frequencies, and apply moving window statistics, as well as manage date shifting and lagging. Users can even create custom time offsets tailored to specific domains and join time series data without the risk of losing any information. This comprehensive set of features makes pandas an essential tool for anyone working with data in Python.
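To ground the group-by engine and time-series features described above, here is a minimal sketch; the data is a small, made-up example.
```python
# Minimal sketch of pandas group-by and time-series features.
import pandas as pd

# Split-apply-combine: aggregate sales per store.
df = pd.DataFrame({
    "store": ["A", "A", "B", "B"],
    "sales": [10, 12, 7, 9],
})
print(df.groupby("store")["sales"].agg(["sum", "mean"]))

# Time series: generate a date range, then apply a moving-window mean.
ts = pd.Series(range(6), index=pd.date_range("2025-01-01", periods=6, freq="D"))
print(ts.rolling(window=3).mean())

# Frequency conversion: daily values resampled to 2-day sums.
print(ts.resample("2D").sum())
```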
7. Amazon SageMaker
Amazon
Amazon SageMaker is a comprehensive machine learning platform that integrates powerful tools for model building, training, and deployment in one cohesive environment. It combines data processing, AI model development, and collaboration features, allowing teams to streamline the development of custom AI applications. With SageMaker, users can easily access data stored across Amazon S3 data lakes and Amazon Redshift data warehouses, facilitating faster insights and AI model development. It also supports generative AI use cases, enabling users to develop and scale applications with cutting-edge AI technologies. The platform's governance and security features ensure that data and models are handled with precision and compliance throughout the entire ML lifecycle. Furthermore, SageMaker provides a unified development studio for real-time collaboration, speeding up data discovery and model deployment.
8. Matplotlib
Matplotlib
Free
Matplotlib serves as a versatile library for generating static, animated, and interactive visual representations in Python. It simplifies the creation of straightforward plots while also enabling the execution of more complex visualizations. Numerous third-party extensions enhance Matplotlib's capabilities, featuring various advanced plotting interfaces such as Seaborn, HoloViews, and ggplot, along with tools for projections and mapping like Cartopy. This extensive ecosystem allows users to tailor their visualizations to meet specific needs and preferences.
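As a small taste of the straightforward plots mentioned above, here is a minimal static-plot sketch; the data points are arbitrary.
```python
# Minimal sketch of a static Matplotlib plot with made-up training data.
import matplotlib.pyplot as plt

epochs = [1, 2, 3, 4, 5]
loss = [0.9, 0.6, 0.45, 0.38, 0.35]

fig, ax = plt.subplots()
ax.plot(epochs, loss, marker="o")
ax.set_xlabel("epoch")
ax.set_ylabel("loss")
ax.set_title("Training loss")
plt.show()
```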
9. Azure Machine Learning
Microsoft
Streamline the entire machine learning lifecycle from start to finish. Equip developers and data scientists with an extensive array of efficient tools for swiftly building, training, and deploying machine learning models. Enhance the speed of market readiness and promote collaboration among teams through leading-edge MLOps, akin to DevOps but tailored for machine learning. Drive innovation within a secure, reliable platform that prioritizes responsible AI practices. Cater to users of all expertise levels with options for both code-centric and drag-and-drop interfaces, along with automated machine learning features. Implement comprehensive MLOps functionalities that seamlessly align with existing DevOps workflows, facilitating the management of the entire machine learning lifecycle. Emphasize responsible AI by providing insights into model interpretability and fairness, securing data through differential privacy and confidential computing, and maintaining control over the machine learning lifecycle with audit trails and datasheets. Additionally, ensure exceptional compatibility with top open-source frameworks and programming languages such as MLflow, Kubeflow, ONNX, PyTorch, TensorFlow, Python, and R, thus broadening accessibility and usability for diverse projects. By fostering an environment that promotes collaboration and innovation, teams can achieve remarkable advancements in their machine learning endeavors.
10. Apache Spark
Apache Software Foundation
Apache Spark™ serves as a comprehensive analytics platform designed for large-scale data processing. It delivers exceptional performance for both batch and streaming data by employing an advanced Directed Acyclic Graph (DAG) scheduler, a sophisticated query optimizer, and a robust execution engine. With over 80 high-level operators available, Spark simplifies the development of parallel applications. Additionally, it supports interactive use through various shells including Scala, Python, R, and SQL. Spark supports a rich ecosystem of libraries such as SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming, allowing for seamless integration within a single application. It is compatible with various environments, including Hadoop, Apache Mesos, Kubernetes, and standalone setups, as well as cloud deployments. Furthermore, Spark can connect to a multitude of data sources, enabling access to data stored in systems like HDFS, Alluxio, Apache Cassandra, Apache HBase, and Apache Hive, among many others. This versatility makes Spark an invaluable tool for organizations looking to harness the power of large-scale data analytics.
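To illustrate the high-level operators mentioned above, here is a minimal PySpark sketch that runs locally without a cluster; the data is a small, made-up example.
```python
# Minimal sketch of Spark's DataFrame API via PySpark (pip install pyspark);
# runs on a local master, so no cluster is needed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[*]").appName("demo").getOrCreate()

df = spark.createDataFrame(
    [("A", 10), ("A", 12), ("B", 7)],
    ["store", "sales"],
)

# A high-level operator pipeline: filter, group, aggregate.
df.filter(F.col("sales") > 5).groupBy("store").agg(
    F.sum("sales").alias("total_sales")
).show()

spark.stop()
```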
11. MLflow
MLflow
MLflow is an open-source suite designed to oversee the machine learning lifecycle, encompassing aspects such as experimentation, reproducibility, deployment, and a centralized model registry. The platform features four main components that facilitate various tasks: tracking and querying experiments encompassing code, data, configurations, and outcomes; packaging data science code to ensure reproducibility across multiple platforms; deploying machine learning models across various serving environments; and storing, annotating, discovering, and managing models in a unified repository. Among these, the MLflow Tracking component provides both an API and a user interface for logging essential aspects like parameters, code versions, metrics, and output files generated during the execution of machine learning tasks, enabling later visualization of results. It allows for logging and querying experiments through several interfaces, including Python, REST, R API, and Java API. Furthermore, an MLflow Project is a structured format for organizing data science code, ensuring it can be reused and reproduced easily, with a focus on established conventions. Additionally, the Projects component comes equipped with an API and command-line tools specifically designed for executing these projects effectively. Overall, MLflow streamlines the management of machine learning workflows, making it easier for teams to collaborate and iterate on their models.
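To make the Tracking component concrete, here is a minimal sketch of logging a parameter, a metric, and an output file from Python; the experiment name and values are arbitrary placeholders.
```python
# Minimal sketch of the MLflow Tracking API (pip install mlflow);
# the experiment name, parameter, and metric values are placeholders.
import mlflow

mlflow.set_experiment("demo-experiment")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)   # a configuration parameter
    mlflow.log_metric("accuracy", 0.93)       # a result of the run
    with open("notes.txt", "w") as f:
        f.write("trained on the 2025-01 snapshot")
    mlflow.log_artifact("notes.txt")          # an output file

# Results can then be browsed in the UI, e.g. by running `mlflow ui`.
```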
12. Azure Databricks
Microsoft
Harness the power of your data and create innovative artificial intelligence (AI) solutions using Azure Databricks, where you can establish your Apache Spark™ environment in just minutes, enable autoscaling, and engage in collaborative projects within a dynamic workspace. This platform accommodates multiple programming languages such as Python, Scala, R, Java, and SQL, along with popular data science frameworks and libraries like TensorFlow, PyTorch, and scikit-learn. With Azure Databricks, you can access the most current versions of Apache Spark and effortlessly connect with various open-source libraries. You can quickly launch clusters and develop applications in a fully managed Apache Spark setting, benefiting from Azure's expansive scale and availability. The clusters are automatically established, optimized, and adjusted to guarantee reliability and performance, eliminating the need for constant oversight. Additionally, leveraging autoscaling and auto-termination features can significantly reduce your total cost of ownership (TCO), making it an efficient choice for data analysis and AI development. This powerful combination of tools and resources empowers teams to innovate and accelerate their projects like never before.
13. Kubeflow
Kubeflow
The Kubeflow initiative aims to simplify the process of deploying machine learning workflows on Kubernetes, ensuring they are both portable and scalable. Rather than duplicating existing services, our focus is on offering an easy-to-use platform for implementing top-tier open-source ML systems across various infrastructures. Kubeflow is designed to operate seamlessly wherever Kubernetes is running. It features a specialized TensorFlow training job operator that facilitates the training of machine learning models, particularly excelling in managing distributed TensorFlow training tasks. Users can fine-tune the training controller to utilize either CPUs or GPUs, adapting it to different cluster configurations. In addition, Kubeflow provides functionalities to create and oversee interactive Jupyter notebooks, allowing for tailored deployments and resource allocation specific to data science tasks. You can test and refine your workflows locally before transitioning them to a cloud environment whenever you are prepared. This flexibility empowers data scientists to iterate efficiently, ensuring that their models are robust and ready for production.
14. Dask
Dask
Dask is a freely available open-source library that is developed in collaboration with various community initiatives such as NumPy, pandas, and scikit-learn. It leverages the existing Python APIs and data structures, allowing users to seamlessly transition between NumPy, pandas, and scikit-learn and their Dask-enhanced versions. The schedulers in Dask are capable of scaling across extensive clusters with thousands of nodes, and its algorithms have been validated on some of the most powerful supercomputers globally. However, getting started doesn't require access to a large cluster; Dask includes schedulers tailored for personal computing environments. Many individuals currently utilize Dask to enhance computations on their laptops, taking advantage of multiple processing cores and utilizing disk space for additional storage. Furthermore, Dask provides lower-level APIs that enable the creation of customized systems for internal applications. This functionality is particularly beneficial for open-source innovators looking to parallelize their own software packages, as well as business executives aiming to scale their unique business strategies efficiently. In essence, Dask serves as a versatile tool that bridges the gap between simple local computations and complex distributed processing.
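To show how Dask mirrors the pandas API on a laptop-scale scheduler, here is a minimal sketch; the DataFrame is a small, made-up example.
```python
# Minimal sketch of Dask reusing the pandas API (pip install dask[dataframe]).
import dask.dataframe as dd
import pandas as pd

pdf = pd.DataFrame({
    "store": ["A", "A", "B", "B"] * 1000,
    "sales": range(4000),
})

# Partition the pandas DataFrame so work can run across local cores.
ddf = dd.from_pandas(pdf, npartitions=4)

# Same split-apply-combine API as pandas; .compute() triggers execution.
print(ddf.groupby("store")["sales"].mean().compute())
```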
15. Apache Airflow
The Apache Software Foundation
Airflow is a community-driven platform designed for the programmatic creation, scheduling, and monitoring of workflows. With its modular architecture, Airflow employs a message queue to manage an unlimited number of workers, making it highly scalable. The system is capable of handling complex operations through its ability to define pipelines using Python, facilitating dynamic pipeline generation. This flexibility enables developers to write code that can create pipelines on the fly. Users can easily create custom operators and expand existing libraries, tailoring the abstraction level to meet their specific needs. The pipelines in Airflow are both concise and clear, with built-in parametrization supported by the robust Jinja templating engine. Eliminate the need for complex command-line operations or obscure XML configurations! Instead, leverage standard Python functionalities to construct workflows, incorporating date-time formats for scheduling and utilizing loops for the dynamic generation of tasks. This approach ensures that you retain complete freedom and adaptability when designing your workflows, allowing you to efficiently respond to changing requirements. Additionally, Airflow's user-friendly interface empowers teams to collaboratively refine and optimize their workflow processes.
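To illustrate defining a pipeline in plain Python with loop-driven task generation, here is a minimal sketch; it assumes Airflow 2.4 or later, and the DAG ID, task names, and sources are illustrative placeholders.
```python
# Minimal sketch of an Airflow DAG defined in plain Python (Airflow 2.4+),
# using an ordinary loop for dynamic task generation; names are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(source, **context):
    print(f"extracting from {source}")

with DAG(
    dag_id="demo_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Generate one independent task per source with a standard Python loop.
    for source in ["orders", "customers", "payments"]:
        PythonOperator(
            task_id=f"extract_{source}",
            python_callable=extract,
            op_kwargs={"source": source},
        )
```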