Description (DeepSeek-V2)
DeepSeek-V2 is a Mixture-of-Experts (MoE) language model from DeepSeek-AI designed for economical training and efficient inference. It has 236 billion total parameters, of which only 21 billion are activated per token, and supports a context length of up to 128K tokens. Two architectural choices drive its efficiency: Multi-head Latent Attention (MLA), which compresses the Key-Value (KV) cache to speed up inference, and DeepSeekMoE, which uses sparse computation to cut training cost. Compared with its predecessor, DeepSeek 67B, it reduces training cost by 42.5%, shrinks the KV cache by 93.3%, and raises maximum generation throughput 5.76-fold. Pretrained on a corpus of 8.1 trillion tokens, DeepSeek-V2 performs strongly on language understanding, coding, and reasoning tasks, placing it among the strongest open-source models available.
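To make the sparse-activation claim concrete, here is a minimal sketch of top-k expert routing, the mechanism by which an MoE layer touches only a fraction of its parameters per token. All layer sizes and names below are illustrative toy values, not DeepSeek-V2's actual configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Minimal sketch of sparse Mixture-of-Experts routing.

    Only k of n experts run for each token, so the parameters touched
    per token are a small fraction of the layer's total -- the same
    principle behind DeepSeek-V2 activating ~21B of its 236B parameters.
    Sizes here are toy values, not DeepSeek-V2's real configuration.
    """

    def __init__(self, d_model=64, d_ff=128, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)   # learned routing scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                              # x: (tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)       # routing probabilities
        weights, idx = gate.topk(self.k, dim=-1)       # keep only top-k experts per token
        weights = weights / weights.sum(-1, keepdim=True)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (idx == e)                          # which tokens routed to expert e
            token_ids, slot = mask.nonzero(as_tuple=True)
            if token_ids.numel():                      # run expert e only on its tokens
                out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(x[token_ids])
        return out

layer = TopKMoELayer()
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```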
Description (DeepSeek-V3.1-Terminus)
DeepSeek-V3.1-Terminus is an update to the V3.1 architecture that incorporates user feedback to improve output stability, consistency, and agent performance. It markedly reduces mixed Chinese-and-English output and other unintended text artifacts, yielding cleaner, more uniform generation, and it reworks the code-agent and search-agent subsystems for more reliable results across benchmarks. The model is released as open source, with weights available on Hugging Face. Its structure is unchanged from DeepSeek-V3, so existing deployment setups remain compatible, and updated inference demonstrations are provided. The checkpoint totals 685B parameters and ships in multiple tensor formats, including FP8, BF16, and F32, letting developers choose the format that fits their hardware and resource constraints.
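For readers who want to try the open weights, here is a minimal sketch of pulling the Hugging Face checkpoint with transformers. The repo id matches the published checkpoint name, but the dtype, device mapping, and generation settings are assumptions; at 685B parameters the model requires a multi-GPU cluster, and production deployments typically use a dedicated inference stack rather than this direct loading path:

```python
# Hedged sketch: loading the open weights from Hugging Face.
# The repo id below follows the published checkpoint name; BF16 is one
# of the tensor formats mentioned above (FP8 needs specialized kernels).
# A 685B-parameter model will not fit on a single GPU -- device_map="auto"
# shards it across whatever accelerators are available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deepseek-ai/DeepSeek-V3.1-Terminus"
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,   # assumed: BF16 variant
    device_map="auto",            # shard across available GPUs
    trust_remote_code=True,
)

inputs = tok(
    "Explain mixture-of-experts routing in one sentence.",
    return_tensors="pt",
).to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```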
API Access
Has API (both models)
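DeepSeek's hosted API is documented as OpenAI-compatible, so a sketch of calling it with the standard openai Python client looks like the following. The base URL and model name are taken from DeepSeek's public docs (see api-docs.deepseek.com), and reading the key from an environment variable is an assumption of this example:

```python
# Hedged sketch: DeepSeek exposes an OpenAI-compatible REST API, so the
# standard openai client works by pointing base_url at DeepSeek.
# Verify the model names against api-docs.deepseek.com before relying on this.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # assumed: key kept in the environment
    base_url="https://api.deepseek.com",
)

resp = client.chat.completions.create(
    model="deepseek-chat",  # V3-series chat endpoint
    messages=[{"role": "user", "content": "Summarize DeepSeek-V2 in one sentence."}],
)
print(resp.choices[0].message.content)
```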
Pricing Details (both models)
Free
Free Trial
Free Version
Deployment (both models)
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Customer Support (both models)
Business Hours
Live Rep (24/7)
Online Support
Types of Training (both models)
Training Docs
Webinars
Live Training (Online)
In Person
Vendor Details (both models)
Company Name
DeepSeek
Founded
2023
Country
China
Website
deepseek.com (DeepSeek-V2); api-docs.deepseek.com/news/news250922 (DeepSeek-V3.1-Terminus release notes)