Description (Chinchilla)

Chinchilla is an advanced language model trained with a compute budget comparable to Gopher's, but with 70 billion parameters and roughly four times as much training data. It consistently and significantly outperforms Gopher (280 billion parameters), as well as GPT-3 (175 billion), Jurassic-1 (178 billion), and Megatron-Turing NLG (530 billion), across a wide range of evaluation tasks. Because it is a smaller model, Chinchilla also requires substantially less compute for fine-tuning and inference, which makes it far more practical to use in real-world scenarios. Notably, Chinchilla reaches an average accuracy of 67.5% on the MMLU benchmark, an improvement of more than 7% over Gopher, underscoring its strong performance. These results position Chinchilla as a leading contender among language models.
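
The "same compute, smaller model, more data" trade-off described above can be sketched numerically. The snippet below is a minimal illustration, assuming the common C ≈ 6·N·D approximation of training FLOPs and the widely cited token counts for Gopher (~300B) and Chinchilla (~1.4T); none of these figures come from this listing, and the helper name train_flops is hypothetical.

```python
# Sketch of the compute-optimal trade-off behind Chinchilla (assumed figures, see note above).
def train_flops(params: float, tokens: float) -> float:
    """Approximate training compute via the common C ~= 6 * N * D rule of thumb."""
    return 6 * params * tokens

gopher = {"params": 280e9, "tokens": 300e9}      # Gopher: 280B params, ~300B tokens (assumed)
chinchilla = {"params": 70e9, "tokens": 1.4e12}  # Chinchilla: 70B params, ~1.4T tokens (assumed)

for name, m in [("Gopher", gopher), ("Chinchilla", chinchilla)]:
    c = train_flops(m["params"], m["tokens"])
    print(f"{name}: ~{c:.2e} training FLOPs, {m['tokens'] / m['params']:.0f} tokens/param")

# Both models land near the same order of training FLOPs, i.e. a comparable compute
# budget spent on a much smaller model trained on several times more data.
```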

Description (DBRX)

We are thrilled to present DBRX, a versatile open LLM developed by Databricks. It achieves state-of-the-art performance across a range of standard benchmarks, setting a new standard for open LLMs, and it gives both the open-source community and enterprises building their own LLMs capabilities that were previously limited to proprietary model APIs; our evaluations indicate that it outperforms GPT-3.5 and is competitive with Gemini 1.0 Pro. It is an especially strong code model, surpassing specialized models such as CodeLLaMA-70B on programming tasks, while remaining a capable general-purpose LLM. DBRX's quality is paired with significant gains in both training and inference efficiency. Thanks to its fine-grained mixture-of-experts (MoE) architecture, it raises the efficiency bar for open models: inference can be up to twice as fast as LLaMA2-70B, and its total and active parameter counts are roughly 40% of Grok-1's, making it compact without compromising capability. This combination of speed and size makes DBRX a standout among open AI models.
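
The distinction between total and active parameters that drives the MoE efficiency claim can be illustrated with simple arithmetic. The sketch below assumes the expert configuration from the public DBRX announcement (16 experts, 4 active per token); the dense_params and per_expert_params values are purely illustrative placeholders, not figures from this listing.

```python
# Illustrative only: how a fine-grained MoE keeps "active" parameters well below the
# total count. Expert counts follow the public DBRX announcement (16 experts, 4 active
# per token); the parameter splits below are assumed, illustrative numbers.
dense_params = 6e9            # assumed: shared (non-expert) parameters
per_expert_params = 8e9       # assumed: parameters in each expert block
n_experts, n_active = 16, 4   # fine-grained MoE: many smaller experts, few active per token

total_params = dense_params + n_experts * per_expert_params    # weights stored on disk/in memory
active_params = dense_params + n_active * per_expert_params    # weights actually used per token

print(f"total:  {total_params / 1e9:.0f}B parameters")   # ~134B in this sketch
print(f"active: {active_params / 1e9:.0f}B parameters")  # ~38B in this sketch

# Per-token inference cost scales mostly with the active count, which is why an MoE of
# this shape can decode faster than a dense ~70B model of comparable quality.
```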

API Access (Chinchilla)

Has API

API Access (DBRX)

Has API

Integrations (Chinchilla)

Cogent DataHub
Double
GPT-3.5
GPT-4
MusicFX
Rayven
Stitch
WeatherNext
ZenML

Integrations (DBRX)

Cogent DataHub
Double
GPT-3.5
GPT-4
MusicFX
Rayven
Stitch
WeatherNext
ZenML

Pricing Details (Chinchilla)

No price information available.
Free Trial
Free Version

Pricing Details (DBRX)

No price information available.
Free Trial
Free Version

Deployment (Chinchilla)

Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook

Deployment (DBRX)

Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook

Customer Support (Chinchilla)

Business Hours
Live Rep (24/7)
Online Support

Customer Support (DBRX)

Business Hours
Live Rep (24/7)
Online Support

Types of Training (Chinchilla)

Training Docs
Webinars
Live Training (Online)
In Person

Types of Training (DBRX)

Training Docs
Webinars
Live Training (Online)
In Person

Vendor Details (Chinchilla)

Company Name

Google DeepMind

Country

United Kingdom

Website

arxiv.org/abs/2203.15556

Vendor Details (DBRX)

Company Name

Databricks

Country

United States

Website

www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm

Alternatives (Chinchilla)

Qwen2.5-Max (Alibaba)

Alternatives (DBRX)

DeepSeek-V2 (DeepSeek)
FLIP (Kanerika)
Kimi K2 (Moonshot AI)
Ai2 OLMoE (The Allen Institute for Artificial Intelligence)