Description: Mathstral

In honor of Archimedes, whose 2311th anniversary we celebrate this year, we are excited to introduce Mathstral, our first specialized 7B model, designed for mathematical reasoning and scientific discovery. The model has a 32k context window and is released under the Apache 2.0 license.

We are contributing Mathstral to the science community to help solve advanced mathematical problems that require complex, multi-step logical reasoning. The release is part of our broader effort to support academic projects and was developed in collaboration with Project Numina.

Like Isaac Newton in his time, Mathstral stands on the shoulders of a predecessor: it builds on Mistral 7B and specializes in STEM subjects. It achieves state-of-the-art reasoning capacities in its size category on industry-standard benchmarks, scoring 56.6% on MATH and 63.47% on MMLU; the MMLU results, broken down by subject, show where Mathstral 7B improves on Mistral 7B. We hope the release fosters innovation and collaboration within the mathematical community.
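
Because Mathstral's weights are openly released under Apache 2.0, it can be run locally with standard tooling. Below is a minimal sketch using Hugging Face transformers; the repo id "mistralai/Mathstral-7B-v0.1" is assumed from the published checkpoint name and should be verified on the Hub, and the prompt is only an illustration.

```python
# Minimal local-inference sketch for Mathstral with Hugging Face
# transformers. The repo id is an assumption; verify it on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mathstral-7B-v0.1"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is the derivative of x^3 * ln(x)?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding; multi-step math answers benefit from a decent token budget.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```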

Description: Phi-4-mini-reasoning

Phi-4-mini-reasoning is a 3.8-billion-parameter transformer-based language model designed to excel at mathematical reasoning and methodical problem-solving in environments with limited compute or tight latency constraints. It was fine-tuned on synthetic data generated by the DeepSeek-R1 model, balancing efficiency with advanced reasoning capability.

Trained on more than one million math problems spanning middle-school to Ph.D.-level difficulty, Phi-4-mini-reasoning outperforms its base model at long-form reasoning generation across multiple assessments and surpasses larger models such as OpenThinker-7B, Llama-3.2-3B-instruct, and DeepSeek-R1.

The model has a 128K-token context window and supports function calling, which allows integration with external tools and APIs. It can also be quantized with Microsoft Olive or the Apple MLX framework for deployment on edge devices such as IoT hardware, laptops, and smartphones. This design broadens accessibility and opens up new applications in mathematical domains.
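
For comparison, here is the same kind of local-inference sketch for Phi-4-mini-reasoning. The repo id "microsoft/Phi-4-mini-reasoning" is assumed from the product name; confirm it on the Hub. Reasoning-tuned models like this one typically emit a long chain of thought before the final answer, so the token budget is set generously.

```python
# Minimal local-inference sketch for Phi-4-mini-reasoning with
# Hugging Face transformers. The repo id is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-reasoning"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Solve 3x^2 - 12x + 9 = 0 step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Large max_new_tokens: step-by-step reasoning traces can run long.
outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```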

API Access

Both products offer an API.
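
Since both models are published as open weights, one common way to reach them over an API is the Hugging Face Inference endpoint. The sketch below reuses the repo id assumed above; whether a given repo is served on the serverless tier varies, so treat this as a pattern rather than a guarantee.

```python
# Querying a hosted model via the Hugging Face Inference API.
# Requires an HF access token; serverless availability per repo varies.
from huggingface_hub import InferenceClient

client = InferenceClient(model="mistralai/Mathstral-7B-v0.1", token="hf_...")

response = client.chat_completion(
    messages=[{"role": "user", "content": "Is 127 a prime number? Explain briefly."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```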

Integrations

Both products list the same integrations:

Hugging Face
AlphaCorp
Deep Infra
Expanse
Klee
Memo AI
Ministral 3B
Mistral Large
Mixtral 8x7B
Motific.ai
Noma
OpenPipe
PI Prompts
Pixtral Large
PostgresML
Ragas
SydeLabs
Verta
Weave
Wordware

Pricing Details

Mathstral: Free
Phi-4-mini-reasoning: No price information available.

Deployment

Both products list the same deployment options:

Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
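
The Phi-4-mini-reasoning description above mentions quantization through the Apple MLX framework for on-device use. Below is a minimal sketch using the mlx-lm package on Apple silicon; the community repo id "mlx-community/Phi-4-mini-reasoning-4bit" is hypothetical, so substitute a real MLX conversion from the Hub.

```python
# Running a 4-bit quantized model on Apple silicon with mlx-lm.
# The repo id below is hypothetical; pick an actual MLX conversion.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Phi-4-mini-reasoning-4bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Factor 2x^2 + 7x + 3."}],
    add_generation_prompt=True,
    tokenize=False,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=512))
```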

Customer Support

Both products list the same support options:

Business Hours
Live Rep (24/7)
Online Support

Types of Training

Both products list the same training options:

Training Docs
Webinars
Live Training (Online)
In Person

Vendor Details: Mathstral

Company Name: Mistral AI
Founded: 2023
Country: France
Website: mistral.ai/news/mathstral/

Vendor Details: Phi-4-mini-reasoning

Company Name: Microsoft
Founded: 1975
Country: United States
Website: azure.microsoft.com/en-us/blog/one-year-of-phi-small-language-models-making-big-leaps-in-ai/

Alternatives

Mistral Large 2 (Mistral AI)
Phi-4-reasoning (Microsoft)
Mistral Large (Mistral AI)
DeepSeek R1 (DeepSeek)
Mistral NeMo (Mistral AI)