Description

Jurassic-1 comes in two model sizes; the larger Jumbo variant, at 178 billion parameters, was the largest and most sophisticated language model released for general use by developers at the time. AI21 Studio is currently in open beta, inviting users to register and start exploring Jurassic-1 through an accessible API and an interactive web platform. At AI21 Labs, our goal is to revolutionize how people read and write by bringing machines in as cognitive collaborators, a vision that will take a collective effort to realize. Our exploration of language models dates back to what we refer to as our Mesozoic Era (2017 😉). Building on that foundational research, Jurassic-1 is the first series of models we are offering for broad public use, and we are excited to see how users apply these advances in their own creative processes.
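
AI21 Studio exposes Jurassic-1 through a REST API once you have registered for the beta and obtained a key. The sketch below is a rough Python illustration only: the j1-jumbo completion endpoint, the maxTokens field, and the response layout are assumptions based on the launch-era documentation, so check AI21's current docs before relying on them.

import os
import requests

# Read the AI21 Studio API key from the environment (issued after beta registration).
API_KEY = os.environ["AI21_API_KEY"]

# Ask the Jurassic-1 Jumbo model for a completion (endpoint path assumed from launch-era docs).
response = requests.post(
    "https://api.ai21.com/studio/v1/j1-jumbo/complete",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "prompt": "Summarize the Mesozoic Era in one sentence.",
        "maxTokens": 64,
        "temperature": 0.7,
    },
)
response.raise_for_status()

# The response contains a list of completions; print the text of the first one.
print(response.json()["completions"][0]["data"]["text"])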

Description

In honor of Archimedes, whose 2311th anniversary we celebrate this year, we are excited to introduce our first Mathstral model, a specialized 7B-parameter model tailored for mathematical reasoning and scientific exploration. It features a 32k context window and is released under the Apache 2.0 license. We are contributing Mathstral to the scientific community to help tackle advanced mathematical problems that require intricate, multi-step logical reasoning. Its launch is part of our broader initiative to support academic work, developed in conjunction with Project Numina. Much like Isaac Newton in his time, Mathstral stands on the shoulders of Mistral 7B and focuses on STEM disciplines. It delivers top-tier reasoning for its size category on industry-standard benchmarks, scoring 56.6% on the MATH benchmark and 63.47% on MMLU; its MMLU results broken down by subject highlight where it improves on its predecessor, Mistral 7B. This initiative aims to foster innovation and collaboration within the mathematical community.
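
Since Mathstral's weights are released under the Apache 2.0 license, the model can also be run locally. The Python sketch below uses the Hugging Face transformers library; the repository id mistralai/Mathstral-7B-v0.1 is an assumption to verify on the Hub, and a GPU with enough memory for a 7B model (plus the accelerate package for device_map="auto") is presumed.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repository id for the released Mathstral weights.
model_id = "mistralai/Mathstral-7B-v0.1"

# Load the tokenizer and model; bfloat16 with device_map="auto" keeps memory use manageable on one GPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# A small multi-step reasoning prompt of the kind the model is tuned for.
prompt = "Prove that the sum of the first n odd numbers equals n^2."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))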

API Access

Has API

API Access

Has API

Integrations

302.AI
Codestral Mamba
Continue
Deep Infra
Entry Point AI
GMTech
HoneyHive
Keywords AI
Langflow
Ministral 8B
Noma
OpenPipe
Overseer AI
PI Prompts
Pipeshift
Pixtral Large
PostgresML
Symflower
Toolmark
Tune AI

Integrations

302.AI
Codestral Mamba
Continue
Deep Infra
Entry Point AI
GMTech
HoneyHive
Keywords AI
Langflow
Ministral 8B
Noma
OpenPipe
Overseer AI
PI Prompts
Pipeshift
Pixtral Large
PostgresML
Symflower
Toolmark
Tune AI

Pricing Details

No price information available.
Free Trial
Free Version

Pricing Details

Free
Free Trial
Free Version

Deployment

Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook

Deployment

Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook

Customer Support

Business Hours
Live Rep (24/7)
Online Support

Customer Support

Business Hours
Live Rep (24/7)
Online Support

Types of Training

Training Docs
Webinars
Live Training (Online)
In Person

Types of Training

Training Docs
Webinars
Live Training (Online)
In Person

Vendor Details

Company Name

AI21 Labs

Founded

2017

Country

Israel

Website

www.ai21.com/blog/announcing-ai21-studio-and-jurassic-1

Vendor Details

Company Name

Mistral AI

Founded

2023

Country

France

Website

mistral.ai/news/mathstral/

Product Features

Alternatives

Mistral Large 2 (Mistral AI)
Solar Pro 2 (Upstage AI)
Alpaca (Stanford Center for Research on Foundation Models (CRFM))