Average Ratings: 0 Ratings


Description (DataCrunch)

Featuring up to 8 NVIDIA® H100 80GB GPUs, each equipped with 16,896 CUDA cores and 528 Tensor Cores, this represents NVIDIA®'s latest flagship technology and sets a high standard for AI performance. The system uses the SXM5 NVLink module, providing a memory bandwidth of 2.6 Gbps and peer-to-peer bandwidth of up to 900 GB/s. The fourth-generation AMD Genoa processors support up to 384 threads with a boost clock reaching 3.7 GHz.

For A100-based systems, the SXM4 NVLink module is employed, with a memory bandwidth exceeding 2 TB/s and a P2P bandwidth of up to 600 GB/s. The second-generation AMD EPYC Rome processors handle up to 192 threads with a boost clock of 3.3 GHz. The designation 8A100.176V indicates 8 A100 GPUs, 176 CPU core threads, and virtualized capabilities. Notably, although the A100 has fewer Tensor Cores than the V100, its architecture delivers faster tensor operations. EPYC Rome is also available in configurations supporting up to 96 threads with a boost clock of 3.35 GHz, further extending the system's performance options. This combination of advanced hardware ensures optimal efficiency for demanding computational tasks.
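As a quick sanity check on the per-node totals these specs imply, the quoted per-GPU figures can be aggregated with a few lines of arithmetic (a minimal sketch; only the CUDA and Tensor Core counts quoted above are used, and the helper name is illustrative):

```python
# Per-GPU figures quoted above for the NVIDIA H100 80GB (SXM5).
H100 = {"cuda_cores": 16896, "tensor_cores": 528}

def node_totals(gpu_specs: dict, gpu_count: int = 8) -> dict:
    """Aggregate per-GPU core counts across a full 8-GPU node."""
    return {
        "gpus": gpu_count,
        "cuda_cores": gpu_specs["cuda_cores"] * gpu_count,
        "tensor_cores": gpu_specs["tensor_cores"] * gpu_count,
    }

totals = node_totals(H100)
print(totals)  # {'gpus': 8, 'cuda_cores': 135168, 'tensor_cores': 4224}
```

An 8-GPU node therefore exposes 135,168 CUDA cores and 4,224 Tensor Cores in total.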

Description (Modal)

We built our containerization platform entirely in Rust to achieve the fastest possible cold-start times. It scales seamlessly from hundreds of GPUs down to zero within seconds, so you only pay for the resources you actually use. You can deploy functions to the cloud in seconds, with support for custom container images and specific hardware requirements, and no YAML to write.

Startups and academic researchers are eligible for up to $25,000 in free compute credits on Modal, which can be applied to GPU compute and access to in-demand GPU types. Modal continuously monitors CPU utilization as a number of fractional physical cores, with each physical core corresponding to two vCPUs; memory usage is tracked in real time as well. For both CPU and memory, you are billed only for the resources actually consumed, with no extra charges. This approach both streamlines deployment and optimizes costs for users.
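The per-core billing model described above reduces to simple arithmetic (a minimal sketch, assuming the $0.192 per core per hour rate from the pricing section and the stated two vCPUs per physical core; the function name is illustrative, not Modal's API):

```python
RATE_PER_CORE_HOUR = 0.192  # USD per physical core per hour, from the pricing section

def cpu_cost(vcpus_used: float, seconds: float) -> float:
    """Bill on fractional physical cores: 2 vCPUs = 1 physical core."""
    physical_cores = vcpus_used / 2
    hours = seconds / 3600
    return physical_cores * hours * RATE_PER_CORE_HOUR

# e.g. a function averaging 4 vCPUs (2 physical cores) for 90 seconds:
print(round(cpu_cost(4, 90), 4))  # 0.0096
```

Because utilization is measured fractionally and continuously, a briefly bursty function costs only what it actually consumed, not what it reserved.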

API Access

Has API



Integrations

Python
WaveSpeedAI


Pricing Details (DataCrunch)

$3.01 per hour
Free Trial
Free Version

Pricing Details (Modal)

$0.192 per core per hour
Free Trial
Free Version

Deployment

Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook


Customer Support

Business Hours
Live Rep (24/7)
Online Support


Types of Training

Training Docs
Webinars
Live Training (Online)
In Person


Vendor Details (DataCrunch)

Company Name

DataCrunch

Country

Finland

Website

datacrunch.io

Vendor Details (Modal)

Company Name

Modal Labs

Country

United States

Website

modal.com

Product Features

Infrastructure-as-a-Service (IaaS)

Analytics / Reporting
Configuration Management
Data Migration
Data Security
Load Balancing
Log Access
Network Monitoring
Performance Monitoring
SLA Monitoring

Serverless

API Proxy
Application Integration
Data Stores
Developer Tooling
Orchestration
Reporting / Analytics
Serverless Computing
Storage
