What Integrates with NVIDIA NGC?
Find out what NVIDIA NGC integrations exist in 2025. Learn what software and services currently integrate with NVIDIA NGC, and sort them by reviews, cost, features, and more. Below is a list of products that NVIDIA NGC currently integrates with:
1
AWS
Amazon
AWS is the leading provider of cloud computing, delivering over 200 fully featured services to organizations worldwide. Its offerings cover everything from infrastructure—such as compute, storage, and networking—to advanced technologies like artificial intelligence, machine learning, and agentic AI. Businesses use AWS to modernize legacy systems, run high-performance workloads, and build scalable, secure applications. Core services like Amazon EC2, Amazon S3, and Amazon DynamoDB provide foundational capabilities, while advanced solutions like SageMaker and AWS Transform enable AI-driven transformation. The platform is supported by a global infrastructure that includes 38 regions, 120 availability zones, and 400+ edge locations, ensuring low latency and high reliability. AWS integrates with leading enterprise tools, developer SDKs, and partner ecosystems, giving teams the flexibility to adopt cloud at their own pace. Its training and certification programs help individuals and companies grow cloud expertise with industry-recognized credentials. With its unmatched breadth, depth, and proven track record, AWS empowers organizations to innovate and compete in the digital-first economy.
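To make the SDK mention above concrete, here is a minimal sketch using the AWS SDK for Python (boto3) to touch two of the core services named in the description, S3 and EC2. It assumes boto3 is installed and credentials are already configured; the region is only an example.

```python
# Minimal sketch using the AWS SDK for Python (boto3).
# Assumes `pip install boto3` and credentials configured via the AWS CLI or environment variables.
import boto3

# List the S3 buckets in the account.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Describe running EC2 instances in one example region.
ec2 = boto3.client("ec2", region_name="us-east-1")
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["InstanceType"])
```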
2
TensorFlow
TensorFlow
Free
2 Ratings
TensorFlow is a comprehensive open-source machine learning platform that covers the entire process from development to deployment. This platform boasts a rich and adaptable ecosystem featuring various tools, libraries, and community resources, empowering researchers to advance the field of machine learning while allowing developers to create and implement ML-powered applications with ease. With intuitive high-level APIs like Keras and support for eager execution, users can effortlessly build and refine ML models, facilitating quick iterations and simplifying debugging. The flexibility of TensorFlow allows for seamless training and deployment of models across various environments, whether in the cloud, on-premises, within browsers, or directly on devices, regardless of the programming language utilized. Its straightforward and versatile architecture supports the transformation of innovative ideas into practical code, enabling the development of cutting-edge models that can be published swiftly. Overall, TensorFlow provides a powerful framework that encourages experimentation and accelerates the machine learning process.
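As a quick illustration of the high-level Keras API and eager execution mentioned above, the following sketch builds and trains a small classifier on random data. It assumes TensorFlow is installed; the model shape and data are purely illustrative.

```python
# Minimal sketch of TensorFlow's high-level Keras API.
# Assumes `pip install tensorflow`; the data here is random and purely illustrative.
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 2, size=(256, 1))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, verbose=1)

# Eager execution is on by default, so tensors can be inspected directly.
print(tf.reduce_mean(tf.constant([1.0, 2.0, 3.0])))
```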
3
Domino Enterprise MLOps Platform
Domino Data Lab
1 Rating
The Domino Enterprise MLOps Platform helps data science teams improve the speed, quality, and impact of data science at scale. Domino is open and flexible, empowering professional data scientists to use their preferred tools and infrastructure. Data science models get into production fast and are kept operating at peak performance with integrated workflows. Domino also delivers the security, governance, and compliance that enterprises expect. The Self-Service Infrastructure Portal helps data science teams become more productive with easy access to their preferred tools, scalable compute, and diverse data sets. By automating time-consuming and tedious DevOps tasks, it frees data scientists to focus on the work at hand. The Integrated Model Factory includes a workbench, model and app deployment, and integrated monitoring to rapidly experiment, deploy the best models in production, ensure optimal performance, and collaborate across the end-to-end data science lifecycle. The System of Record provides a powerful reproducibility engine, search and knowledge management, and integrated project management, so teams can easily find, reuse, reproduce, and build on any data science work to amplify innovation.
4
PyTorch
PyTorch lets you switch effortlessly between eager and graph modes using TorchScript, while TorchServe accelerates the path to production. The torch.distributed backend enables scalable distributed training and performance optimization for both research and production environments. A comprehensive suite of tools and libraries enriches the PyTorch ecosystem, supporting development across fields like computer vision and natural language processing. Additionally, PyTorch is compatible with major cloud platforms, simplifying development and enabling seamless scaling. To install, you choose your preferences in the selector and run the generated installation command. The stable version is the most recently tested and endorsed release of PyTorch and is adequate for most users; for those seeking the cutting edge, a preview is offered, featuring the latest nightly builds of version 1.10, although these may not be fully tested or supported. Verify that you meet all prerequisites, such as having NumPy installed, based on your selected package manager. Anaconda is recommended as the package manager of choice, since it installs all necessary dependencies and ensures a smooth installation experience.
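The sketch below illustrates the eager-to-TorchScript workflow described above: define a module, run it eagerly, then script and serialize it for serving. It assumes PyTorch is installed; the network and file name are illustrative.

```python
# Minimal sketch of moving a PyTorch model from eager mode to TorchScript.
# Assumes `torch` is installed (e.g. via the selector on pytorch.org or conda).
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(20, 64)
        self.fc2 = nn.Linear(64, 1)

    def forward(self, x):
        return torch.sigmoid(self.fc2(torch.relu(self.fc1(x))))

model = TinyNet()

# Eager mode: call the module directly on a tensor.
x = torch.randn(8, 20)
print(model(x).shape)

# Graph mode: compile to TorchScript and serialize for serving (e.g. with TorchServe).
scripted = torch.jit.script(model)
scripted.save("tiny_net.pt")
print(scripted.graph)  # inspect the captured graph
```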
5
NVIDIA GPU-Optimized AMI
Amazon
$3.06 per hour
The NVIDIA GPU-Optimized AMI serves as a virtual machine image designed to enhance your GPU-accelerated workloads in Machine Learning, Deep Learning, Data Science, and High-Performance Computing (HPC). By utilizing this AMI, you can quickly launch a GPU-accelerated EC2 virtual machine instance, complete with a pre-installed Ubuntu operating system, GPU driver, Docker, and the NVIDIA container toolkit, all within a matter of minutes. This AMI simplifies access to NVIDIA's NGC Catalog, which acts as a central hub for GPU-optimized software, enabling users to easily pull and run performance-tuned, thoroughly tested, and NVIDIA-certified Docker containers. The NGC catalog offers complimentary access to a variety of containerized applications for AI, Data Science, and HPC, along with pre-trained models, AI SDKs, and additional resources, allowing data scientists, developers, and researchers to concentrate on creating and deploying innovative solutions. Additionally, this GPU-optimized AMI is available at no charge, with an option for users to purchase enterprise support through NVIDIA AI Enterprise. For further details on obtaining support for this AMI, please refer to the section labeled 'Support Information' below. Moreover, leveraging this AMI can significantly streamline the development process for projects requiring intensive computational resources.
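As a rough sketch of pulling and running an NGC container from an instance launched with this AMI, the snippet below uses the Docker SDK for Python. The image tag is only an example (check the NGC Catalog for current tags), and it assumes the AMI's preinstalled Docker daemon and NVIDIA container toolkit plus `pip install docker`.

```python
# Illustrative sketch: pulling and running an NGC container on an instance launched
# from the NVIDIA GPU-Optimized AMI, using the Docker SDK for Python (`pip install docker`).
# The image tag below is only an example; check the NGC Catalog for current tags.
import docker

client = docker.from_env()

image = "nvcr.io/nvidia/pytorch:24.05-py3"  # example tag, pick one from the NGC Catalog
client.images.pull(image)

# Run `nvidia-smi` inside the container, exposing all GPUs via the NVIDIA container toolkit.
output = client.containers.run(
    image,
    command="nvidia-smi",
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(output.decode())
```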
6
Nutanix Enterprise AI
Nutanix
Nutanix Enterprise AI makes it simple to deploy, operate, and develop enterprise AI applications through secure AI endpoints that utilize large language models and generative AI APIs. By streamlining the process of integrating GenAI, Nutanix enables organizations to unlock extraordinary productivity boosts, enhance revenue streams, and realize the full potential of generative AI. With user-friendly workflows, you can effectively monitor and manage AI endpoints, allowing you to tap into your organization's AI capabilities. The platform's point-and-click interface facilitates the effortless deployment of AI models and secure APIs, giving you the flexibility to select from Hugging Face, NVIDIA NIM, or your customized private models. You have the option to run enterprise AI securely, whether on-premises or in public cloud environments, all while utilizing your existing AI tools. The system also allows for straightforward management of access to your language models through role-based access controls and secure API tokens designed for developers and GenAI application owners. Additionally, with just a single click, you can generate URL-ready JSON code, making API testing quick and efficient. This comprehensive approach ensures that enterprises can fully leverage their AI investments and adapt to evolving technological landscapes seamlessly.
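Purely as an illustration of calling a secured model endpoint with an API token, the sketch below sends a chat request to a placeholder URL. The endpoint path, model name, and token are hypothetical, and the OpenAI-compatible request shape is an assumption based on how NVIDIA NIM endpoints are commonly exposed; consult the Nutanix documentation for the actual API.

```python
# Hypothetical sketch of calling a secured model endpoint with an API token.
# The URL, model name, and token are placeholders, and the OpenAI-compatible request
# shape is an assumption based on how NVIDIA NIM endpoints are typically exposed.
import requests

ENDPOINT = "https://ai.example.internal/api/v1/chat/completions"  # placeholder URL
API_TOKEN = "replace-with-your-endpoint-token"                    # placeholder token

payload = {
    "model": "llama-3-8b-instruct",  # placeholder model name
    "messages": [{"role": "user", "content": "Summarize our Q3 support tickets."}],
    "max_tokens": 256,
}

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```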