Average Ratings
DBRX: 0 Ratings
Gemma: 0 Ratings
Description (DBRX)
We are excited to introduce DBRX, an open, general-purpose LLM developed by Databricks. Across a range of standard benchmarks, DBRX sets a new state of the art for established open LLMs, and it gives the open-source community and enterprises building their own LLMs capabilities that were previously limited to closed model APIs; our evaluations show it surpasses GPT-3.5 and is competitive with Gemini 1.0 Pro. It is an especially capable code model, beating specialized models such as CodeLLaMA-70B on programming tasks, while remaining a strong general-purpose LLM. This quality comes with marked gains in training and inference efficiency. Thanks to its fine-grained mixture-of-experts (MoE) architecture, DBRX advances the efficiency frontier for open models: inference can be up to 2x faster than LLaMA2-70B, and DBRX is roughly 40% of the size of Grok-1 in both total and active parameter counts, so the speed comes without a loss of capability.
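Because the DBRX weights are released openly, one straightforward way to try the model is through Hugging Face Transformers. The sketch below is an illustration rather than an official Databricks recipe: it assumes the instruct variant is published under the model id databricks/dbrx-instruct, that you have accepted the model license on Hugging Face, and that enough accelerator memory is available.

```python
# Minimal sketch: querying DBRX Instruct via Hugging Face Transformers.
# Assumptions: the gated checkpoint "databricks/dbrx-instruct" is accessible
# with your Hugging Face account, and `accelerate` is installed for device_map.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "databricks/dbrx-instruct"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick bfloat16/float16 automatically where supported
    device_map="auto",    # shard the MoE weights across available devices
)

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

On older Transformers releases, loading may additionally require passing trust_remote_code=True to from_pretrained.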
Description (Gemma)
Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. They are designed with responsible AI in mind, using carefully curated datasets and rigorous tuning to support safe and trustworthy applications. Gemma models come in several sizes (2B, 7B, 9B, and 27B) and often outperform some considerably larger open models. Through Keras 3.0, they work with JAX, TensorFlow, and PyTorch, letting you pick the framework that best suits your task. Gemma 2 is optimized for fast inference on a wide range of hardware, and the family includes variants tailored to distinct use cases. These are lightweight, decoder-only, text-to-text language models trained on a broad mix of text, program code, and mathematical content, which makes them useful across many applications.
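Since the description highlights Keras 3 with selectable backends, here is a minimal sketch of loading a small Gemma preset through KerasNLP. The preset name gemma_2b_en and the backend choice are assumptions; check the official model cards on Kaggle or Hugging Face for the exact identifiers and access terms.

```python
# Minimal sketch: running a Gemma preset with Keras 3 and a pluggable backend.
# Assumption: keras-nlp is installed and the 2B preset is named "gemma_2b_en".
import os
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow" / "torch"

import keras_nlp

gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
print(gemma_lm.generate("Explain mixture-of-experts in one sentence.", max_length=64))
```

The same code runs unchanged on any of the three backends; only the KERAS_BACKEND environment variable changes.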
API Access
DBRX: Has API
Gemma: Has API
Integrations (DBRX and Gemma)
Double
EPIC
GPT-3.5
GPT-4
Gemma
Google AI Studio
Google Cloud Platform
Hugging Face
Kaggle
LM-Kit.NET
Pricing Details (DBRX and Gemma)
No price information available.
Free Trial
Free Version
Deployment (DBRX and Gemma)
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Customer Support (DBRX and Gemma)
Business Hours
Live Rep (24/7)
Online Support
Types of Training (DBRX and Gemma)
Training Docs
Webinars
Live Training (Online)
In Person
Vendor Details (DBRX)
Company Name
Databricks
Country
United States
Website
www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm
Vendor Details (Gemma)
Company Name
Google
Country
United States
Website
ai.google.dev/gemma