Description (Llama 2)

Llama 2 is the next generation of our open-source large language model. This release includes model weights and starting code for the pretrained and fine-tuned Llama language models, ranging from 7 billion to 70 billion parameters. The Llama 2 pretrained models were trained on 2 trillion tokens and have double the context length of Llama 1, and the fine-tuned models have additionally been trained on over 1 million human annotations. Llama 2 outperforms other open-source language models on many external benchmarks, including tests of reasoning, coding, proficiency, and knowledge. Llama 2 was pretrained on publicly available online data, while the fine-tuned variant, Llama-2-chat, draws on publicly available instruction datasets in addition to those human annotations. The project has broad support from organizations around the world that back our open approach to AI, including companies that have provided early feedback and plan to build with Llama 2. The enthusiasm around Llama 2 reflects a wider shift toward developing and sharing AI openly.
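
Because the weights and starting code are openly distributed, the models can be run with standard open-source tooling. The sketch below loads one of the fine-tuned chat checkpoints through the Hugging Face transformers library; the repo id, the single-GPU half-precision settings, and the prompt are illustrative assumptions rather than part of this listing, and the official weights are gated behind acceptance of Meta's license.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for the 7B chat checkpoint; access requires accepting
# Meta's license on Hugging Face.
model_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on one GPU
    device_map="auto",
)

# Llama-2-chat checkpoints expect the [INST] ... [/INST] instruction wrapper.
prompt = "[INST] Summarize what fine-tuning adds on top of pretraining. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))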

Description (Stable Beluga)

Stability AI and its CarperAI lab have released Stable Beluga 1 and its larger successor, Stable Beluga 2 (originally released as FreeWilly1 and FreeWilly2), two powerful new open-access Large Language Models (LLMs). Both models show strong reasoning ability across a wide range of benchmarks. Stable Beluga 1 builds on the original LLaMA 65B foundation model and was carefully fine-tuned on a new synthetically generated dataset using Supervised Fine-Tuning (SFT) in the standard Alpaca format. Stable Beluga 2 applies the same approach to the LLaMA 2 70B foundation model, pushing performance further still. Their release marks another step forward in open-access AI.
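
As with Llama 2, the Stable Beluga weights can be loaded with the same open-source tooling. The sketch below queries Stable Beluga 2 through the Hugging Face transformers library; the repo id and the "### System / ### User / ### Assistant" prompt layout follow the published model card as we understand it, but treat them, along with the generation settings, as assumptions to verify.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; Stable Beluga 2 is a 70B model, so "auto" device mapping
# shards it across available GPUs (or offloads to CPU, which is slow).
model_id = "stabilityai/StableBeluga2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Prompt layout assumed from the model card: system message, user turn,
# then an empty assistant slot for the model to complete.
system = "You are Stable Beluga, a helpful and harmless assistant."
user = "Name three benchmark categories used to evaluate instruction-tuned LLMs."
prompt = f"### System:\n{system}\n\n### User:\n{user}\n\n### Assistant:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.95, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))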

API Access (both products)

Has API

Integrations (both products)

AI4Chat
Aili
Alpaca
Amazon Bedrock
Anyscale
AnythingLLM
Automi
BlueFlame AI
Entry Point AI
Graydient AI
Kiin
Ludwig
Meta AI
Msty
PostgresML
Scout
Second State
Taylor AI
Tune AI
Verta

Pricing Details (both products)

Free
Free Trial
Free Version

Deployment (both products)

Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook

Customer Support (both products)

Business Hours
Live Rep (24/7)
Online Support

Types of Training (both products)

Training Docs
Webinars
Live Training (Online)
In Person

Vendor Details (Llama 2)

Company Name: Meta
Founded: 2004
Country: United States
Website: ai.meta.com/llama/

Vendor Details (Stable Beluga)

Company Name: Stability AI
Founded: 2021
Country: United Kingdom
Website: stability.ai/news/stable-beluga-large-instruction-fine-tuned-models

Alternatives (Llama 2)

Aya (Cohere AI)

Alternatives (Stable Beluga)

Llama 2 (Meta)
Vicuna (lmsys.org)
ChatGLM (Zhipu AI)
Mistral 7B (Mistral AI)