Best Marble Alternatives in 2025
Find the top alternatives to Marble currently available. Compare ratings, reviews, pricing, and features of Marble alternatives in 2025. Slashdot lists the best Marble alternatives on the market that offer competing products similar to Marble. Sort through the Marble alternatives below to make the best choice for your needs.
1
SAM 3D
Meta
Free
SAM 3D consists of a duo of sophisticated foundation models that can transform a typical RGB image into an impressive 3D representation of either objects or human figures. This system features SAM 3D Objects, which accurately reconstructs the complete 3D geometry, textures, and spatial arrangements of items found in real-world environments, effectively addressing challenges posed by clutter, occlusions, and varying lighting conditions. Additionally, SAM 3D Body generates dynamic human mesh models that capture intricate poses and shapes, utilizing the "Meta Momentum Human Rig" (MHR) format for enhanced detail. The design of this system allows it to operate effectively with images taken in natural settings without the need for further training or fine-tuning: users simply upload an image, select the desired object or individual, and receive a downloadable asset (such as .OBJ, .GLB, or MHR) that is instantly ready for integration into 3D software. Highlighting features like open-vocabulary reconstruction applicable to any object category, multi-view consistency, and occlusion reasoning, the models benefit from a substantial and diverse dataset containing over one million annotated images from the real world, which contributes significantly to their adaptability and reliability. Furthermore, the models are available as open-source, promoting wider accessibility and collaborative improvement within the development community.
2
Lyria
Google
Lyria, Google’s text-to-music model, is now available on Vertex AI, allowing businesses to generate custom music tracks with just a text prompt. It is perfect for marketers, content creators, and media professionals who need personalized, high-quality music for campaigns, videos, and podcasts. Lyria produces music across various genres and styles, eliminating the need for expensive licensing or time-consuming composition processes. The platform helps streamline content creation by tailoring soundtracks that match the mood, pacing, and narrative of your content.
3
QwQ-Max-Preview
Alibaba
Free
QwQ-Max-Preview is a cutting-edge AI model based on the Qwen2.5-Max framework, specifically engineered to excel in areas such as complex reasoning, mathematical problem-solving, programming, and agent tasks. This preview showcases its enhanced capabilities across a variety of general-domain applications while demonstrating proficiency in managing intricate workflows. Anticipated to be officially released as open-source software under the Apache 2.0 license, QwQ-Max-Preview promises significant improvements and upgrades in its final iteration. Additionally, it contributes to the development of a more inclusive AI environment, as evidenced by the forthcoming introduction of the Qwen Chat application and streamlined model versions like QwQ-32B, which cater to developers interested in local deployment solutions. This initiative not only broadens accessibility but also encourages innovation within the AI community.
4
gpt-4o-mini Realtime
OpenAI
$0.60 per input
The gpt-4o-mini-realtime-preview model is a streamlined and economical variant of GPT-4o, specifically crafted for real-time interaction in both speech and text formats with minimal delay. It is capable of processing both audio and text inputs and outputs, facilitating “speech in, speech out” dialogue experiences through a consistent WebSocket or WebRTC connection. In contrast to its larger counterparts in the GPT-4o family, this model currently lacks support for image and structured output formats, concentrating solely on immediate voice and text applications. Developers have the ability to initiate a real-time session through the /realtime/sessions endpoint to acquire a temporary key, allowing them to stream user audio or text and receive immediate responses via the same connection. This model belongs to the early preview family (version 2024-12-17) and is primarily designed for testing purposes and gathering feedback, rather than handling extensive production workloads. The usage comes with certain rate limitations and may undergo changes during the preview phase. Its focus on audio and text modalities opens up possibilities for applications like conversational voice assistants, enhancing user interaction in a variety of settings. As technology evolves, further enhancements and features may be introduced to enrich user experiences.
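As an illustration of the WebSocket path described above, here is a minimal sketch that opens a Realtime session, sends one text turn, and streams back a text reply. It assumes the preview model name gpt-4o-mini-realtime-preview-2024-12-17 and the Python websockets package; the event names follow OpenAI's published Realtime event schema, and the header keyword argument differs across websockets versions.

```python
# pip install websockets
import asyncio
import json
import os

import websockets


async def main():
    url = "wss://api.openai.com/v1/realtime?model=gpt-4o-mini-realtime-preview-2024-12-17"
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",
    }
    # websockets >= 14 uses additional_headers; older releases call it extra_headers.
    async with websockets.connect(url, additional_headers=headers) as ws:
        # Add a user text message to the conversation.
        await ws.send(json.dumps({
            "type": "conversation.item.create",
            "item": {
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Say hello in one short sentence."}],
            },
        }))
        # Ask for a text-only response on the same connection.
        await ws.send(json.dumps({"type": "response.create", "response": {"modalities": ["text"]}}))
        async for raw in ws:
            event = json.loads(raw)
            if event.get("type") == "response.text.delta":
                print(event.get("delta", ""), end="", flush=True)
            elif event.get("type") == "response.done":
                break


asyncio.run(main())
```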
5
IGiS Photogrammetry Suite
Scanpoint Geomatics
1 Rating
IGiS Photogrammetry Suite is a one-click solution for converting digital images into 3D. It provides fully automated photogrammetry and geodesy tools, a dynamic, automated processing workflow, and high-accuracy outputs. Photogrammetry uses aerial photos, drone photos, or satellite images to determine the spatial properties or dimensions of objects and features in a photographic image. IGiS Photogrammetry Suite is a product of IGiS, powered by Scanpoint Geomatics Limited. The resulting 3D output serves as useful input for GIS software and special-effects work, as well as for proxy measurements of objects of different sizes.
6
BlueMarble
Comviva
BlueMarble functions as a comprehensive digital platform that integrates modular commerce, order management, customer support, and partner management specifically tailored for Communication Service Providers (CSPs). Built to be 5G ready and cloud-native, it utilizes a microservices approach to foster business agility, allowing for the swift creation of personalized customer experiences and journeys. The BlueMarble Commerce suite serves as a complete solution to facilitate omnichannel and multiplay digital commerce for telecommunications clients. Additionally, the BlueMarble BSS features a modular architecture that opens pathways to new revenue streams in the digital landscape while expediting the launch and scaling of innovative business lines such as IoT, 5G, cloud applications, and virtualized services. This omnichannel and multiplay digital commerce solution is designed to empower CSPs, leveraging microservices to enhance business agility and initiate their digital transformation. As a result, organizations can quickly adapt to evolving market demands while providing enriched customer interactions.
7
Marble
Marble
€2500 per month for SaaS
Marble is a real-time, open-source engine that tracks transactions, events, and users to detect money laundering, fraud, or service abuse. We offer a rule builder that is easy to use and works with any type of data. We also provide an engine that can run checks in batch mode or in real time, as well as a case manager for improving operational efficiency. Marble is a great tool for payment service providers, banking-as-a-service providers, neobanks, and marketplaces, and it is also a good fit for telecommunications firms. It allows them to quickly create and update detection scenarios that generate decisions in minutes. These decisions can trigger events within your systems, add friction, or restrict real-time operations. You can also investigate them within Marble by using our case manager, or in your own system by utilizing the Marble API. Marble was developed with compliance in mind: everything is versioned and auditable, with no time limits.
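To make the decision-driven workflow above concrete, here is a purely hypothetical sketch of a service on your side that receives Marble decisions and reacts to them. The endpoint path and payload fields (outcome, object_id) are illustrative assumptions, not Marble's documented schema; consult the Marble API reference for the real event format.

```python
# pip install fastapi uvicorn
from fastapi import FastAPI, Request

app = FastAPI()


@app.post("/webhooks/marble-decision")  # hypothetical route in your own service
async def handle_marble_decision(request: Request):
    event = await request.json()
    # Field names below are assumptions made for illustration only.
    outcome = event.get("outcome")        # e.g. "approve", "review", or "decline"
    object_id = event.get("object_id")    # the transaction or user the decision applies to
    if outcome == "decline":
        # Block the operation or add friction in your own system.
        print(f"Blocking object {object_id} flagged by a Marble scenario")
    elif outcome == "review":
        # Route to analysts; the Marble case manager can host the investigation itself.
        print(f"Queuing object {object_id} for manual review")
    return {"received": True}
```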
8
GoMarble
GoMarble
$749 per month
Enhance your return on ad spend (ROAS) with expertly managed, targeted advertising campaigns powered by GoMarble AI. Our innovative generative AI model assimilates all your online assets to create precise buyer personas and develop effective marketing strategies. With the assistance of GoMarble’s generative AI tools, our talented designers and copywriters can swiftly produce, test, and launch advertisements. We have successfully established and nurtured numerous startups, giving us a deep understanding of the challenges faced by early-stage founders who juggle multiple responsibilities while striving for growth. GoMarble is designed to empower entrepreneurs by providing access to high-performance online advertising. This service is particularly tailored for brands that have found their niche in the market but require additional resources for expansion. Utilizing our AI audit tool, a dedicated expert will analyze your ad accounts to uncover opportunities for enhancing profitability. Simply upload your advertisement and provide a landing page to receive a comprehensive report that evaluates the visuals, copy, and hooks employed in your marketing efforts, ensuring you are always on the path to success. Through this process, we aim to not only boost your ad performance but also to help you streamline your marketing approach for maximum effectiveness.
9
LongLLaMA
LongLLaMA
Free
This repository showcases the research preview of LongLLaMA, an advanced large language model that can manage extensive contexts of up to 256,000 tokens or potentially more. LongLLaMA is developed on the OpenLLaMA framework and has been fine-tuned utilizing the Focused Transformer (FoT) technique. The underlying code for LongLLaMA is derived from Code Llama. We are releasing a smaller 3B base variant of the LongLLaMA model, which is not instruction-tuned, under an open license (Apache 2.0), along with inference code that accommodates longer contexts available on Hugging Face. This model's weights can seamlessly replace LLaMA in existing systems designed for shorter contexts, specifically those handling up to 2048 tokens. Furthermore, we include evaluation results along with comparisons to the original OpenLLaMA models, thereby providing a comprehensive overview of LongLLaMA's capabilities in the realm of long-context processing.
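As a quick illustration of the drop-in use described above, the sketch below loads the 3B base checkpoint with Hugging Face transformers. It assumes the published repository id syzymon/long_llama_3b; the custom long-context attention code ships with the checkpoint, hence trust_remote_code=True.

```python
# pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

repo = "syzymon/long_llama_3b"  # assumed checkpoint id from the research preview
tokenizer = LlamaTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float32,
    trust_remote_code=True,  # loads the Focused Transformer attention shipped with the repo
)

prompt = "My name is Julien and I like to"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```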
10
Gemini-Exp-1206
Google
1 Rating
Gemini-Exp-1206 is a new experimental AI model that is currently being offered for preview exclusively to Gemini Advanced subscribers. This model boasts improved capabilities in handling intricate tasks, including programming, mathematical calculations, logical reasoning, and adhering to comprehensive instructions. Its primary aim is to provide users with enhanced support when tackling complex challenges. As this is an early preview, users may encounter some features that do not operate perfectly, and the model also lacks access to real-time data. Access to Gemini-Exp-1206 can be obtained via the Gemini model drop-down menu on both desktop and mobile web platforms, allowing users to experience its advanced functionalities firsthand.
11
GPT-5.1-Codex-Max
OpenAI
The GPT-5.1-Codex-Max represents the most advanced version within the GPT-5.1-Codex lineup, specifically tailored for software development and complex coding tasks. It enhances the foundational GPT-5.1 framework by emphasizing extended objectives like comprehensive project creation, significant refactoring efforts, and independent management of bugs and testing processes. This model incorporates adaptive reasoning capabilities, allowing it to allocate computational resources more efficiently based on the complexity of the tasks at hand, ultimately enhancing both performance and the quality of its outputs. Furthermore, it facilitates the use of various tools, including integrated development environments, version control systems, and continuous integration/continuous deployment (CI/CD) pipelines, while providing superior precision in areas such as code reviews, debugging, and autonomous operations compared to more general models. In addition to Max, other lighter variants like Codex-Mini cater to budget-conscious or scalable application scenarios. The entire GPT-5.1-Codex suite is accessible through developer previews and integrations, such as those offered by GitHub Copilot, making it a versatile choice for developers. This extensive range of options ensures that users can select a model that best fits their specific needs and project requirements.
12
Marble Metrics
Marble Metrics
$10 per month
Marble Metrics distinguishes itself from other analytics services that prioritize privacy by ensuring that all of your analytics data is stored exclusively on servers owned by European entities. This commitment to data protection is consistent for all information, whether it is being stored or transmitted. To sustain our operations, we charge larger clients for our services, which allows us to provide free access for smaller projects, thereby eliminating any potential conflicts between privacy concerns and profit motives. Our platform includes a comprehensive suite of features that users expect from analytics tools, such as real-time tracking, page views, time spent on pages, bounce rates, and more. Importantly, all data gathered from your websites is solely yours and is used exclusively to populate your Dashboard, ensuring complete ownership. We prioritize the security of your data by storing it within the EU, with strict measures in place to prevent any analytics information from being linked to individual users. Additionally, our commitment to privacy means that you can analyze your website's performance without worrying about your users' identities being compromised.
13
Lyria 2
Google
Lyria 2 is an innovative music generation tool developed by Google, built to empower musicians with high-fidelity sound and professional-grade audio. With its ability to create intricate and detailed compositions across genres such as classical, jazz, pop, and electronic, Lyria 2 allows users to control key elements of their music like key and BPM, giving them granular creative control. Musicians can use text prompts to generate custom music, accelerating the composition process by providing new starting points and refining ideas quickly. Lyria 2 also helps uncover new musical styles and techniques, encouraging exploration beyond familiar genres. By generating music with stunning nuance and realism, it opens new creative avenues for artists to experiment with melodies, harmonies, and arrangements, whether they're looking to refine existing compositions or craft entirely new pieces.
14
MAI-Image-1
Microsoft AI
MAI-Image-1 is Microsoft’s inaugural fully in-house text-to-image generation model, which has impressively secured a spot in the top ten on the LMArena benchmark. Crafted with the intention of providing authentic value for creators, it emphasizes meticulous data selection and careful evaluation designed for real-world creative scenarios, while also integrating direct insights from industry professionals. This model is built to offer significant flexibility, visual richness, and practical utility. Notably, MAI-Image-1 excels in producing photorealistic images, showcasing realistic lighting effects, intricate landscapes, and more, all while maintaining an impressive balance between speed and quality. This efficiency allows users to swiftly manifest their ideas, iterate rapidly, and seamlessly transition their work into other tools for further enhancement. In comparison to many larger, slower models, MAI-Image-1 truly distinguishes itself through its agile performance and responsiveness, making it a valuable asset for creators.
15
FLUX.1
Black Forest Labs
Free
FLUX.1 represents a revolutionary suite of open-source text-to-image models created by Black Forest Labs, achieving new heights in AI-generated imagery with an impressive 12 billion parameters. This model outperforms established competitors such as Midjourney V6, DALL-E 3, and Stable Diffusion 3 Ultra, providing enhanced image quality, intricate details, high prompt fidelity, and adaptability across a variety of styles and scenes. The FLUX.1 suite is available in three distinct variants: Pro for high-end commercial applications, Dev tailored for non-commercial research with efficiency on par with Pro, and Schnell designed for quick personal and local development initiatives under an Apache 2.0 license. Notably, its pioneering use of flow matching alongside rotary positional embeddings facilitates both effective and high-quality image synthesis. As a result, FLUX.1 represents a significant leap forward in the realm of AI-driven visual creativity, showcasing the potential of advancements in machine learning technology. This model not only elevates the standard for image generation but also empowers creators to explore new artistic possibilities.
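For reference, here is a minimal sketch of generating an image with the Apache-2.0 Schnell variant via the diffusers FluxPipeline; the repository id black-forest-labs/FLUX.1-schnell and the few-step, guidance-free settings follow the public model card and may change over time.

```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # helps on GPUs with limited VRAM

image = pipe(
    "a photorealistic mountain lake at sunrise, mist drifting over the water",
    guidance_scale=0.0,       # the distilled Schnell variant runs without classifier-free guidance
    num_inference_steps=4,    # timestep-distilled, so only a few steps are needed
    max_sequence_length=256,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux_schnell_sample.png")
```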
16
Veo 3.1 Fast
Google
Veo 3.1 Fast represents a major leap forward in generative video technology, combining the creative intelligence of Veo 3.1 with faster generation times and expanded control. Available through the Gemini API, the model turns written prompts and still images into cinematic videos with synchronized sound and expressive storytelling. Developers can guide scene generation using up to three reference images, extend video length continuously with “Scene Extension,” and even create dynamic transitions between first and last frames. Its enhanced AI engine maintains character and visual consistency across sequences while improving adherence to user intent and narrative tone. Veo 3.1 Fast’s audio generation adds depth with natural voices and realistic soundscapes, enabling richer, more immersive outputs. Integration with Google AI Studio and Vertex AI makes it simple to build, test, and deploy creative applications. Leading creative teams, such as Promise Studios and Latitude, are already using Veo 3.1 Fast for generative filmmaking and interactive storytelling. Offering the same price as Veo 3.0 but vastly improved capability, it sets a new benchmark for AI-driven video production.
17
AudioLM
Google
AudioLM is an innovative audio language model designed to create high-quality, coherent speech and piano music by solely learning from raw audio data, eliminating the need for text transcripts or symbolic forms. It organizes audio in a hierarchical manner through two distinct types of discrete tokens: semantic tokens, which are derived from a self-supervised model to capture both phonetic and melodic structures along with broader context, and acoustic tokens, which come from a neural codec to maintain speaker characteristics and intricate waveform details. This model employs a series of three Transformer stages, initiating with the prediction of semantic tokens to establish the overarching structure, followed by the generation of coarse tokens, and culminating in the production of fine acoustic tokens for detailed audio synthesis. Consequently, AudioLM can take just a few seconds of input audio to generate seamless continuations that effectively preserve voice identity and prosody in speech, as well as melody, harmony, and rhythm in music. Remarkably, evaluations by humans indicate that the synthetic continuations produced are almost indistinguishable from actual recordings, demonstrating the technology's impressive authenticity and reliability. This advancement in audio generation underscores the potential for future applications in entertainment and communication, where realistic sound reproduction is paramount.
18
Gen-3
Runway
Gen-3 Alpha marks the inaugural release in a new line of models developed by Runway, leveraging an advanced infrastructure designed for extensive multimodal training. This model represents a significant leap forward in terms of fidelity, consistency, and motion capabilities compared to Gen-2, paving the way for the creation of General World Models. By being trained on both videos and images, Gen-3 Alpha will enhance Runway's various tools, including Text to Video, Image to Video, and Text to Image, while also supporting existing functionalities like Motion Brush, Advanced Camera Controls, and Director Mode. Furthermore, it will introduce new features that allow for more precise manipulation of structure, style, and motion, offering users even greater creative flexibility.
19
OpenLLaMA
OpenLLaMA
Free
OpenLLaMA is an openly licensed reproduction of Meta AI's LLaMA 7B, developed using the RedPajama dataset. The model weights we offer can seamlessly replace the LLaMA 7B in current applications. Additionally, we have created a more compact 3B version of the LLaMA model for those seeking a lighter alternative. This provides users with more flexibility in choosing the right model for their specific needs.
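As a small usage sketch, the snippet below loads one of the released checkpoints with transformers; the repository id openlm-research/open_llama_7b is assumed from the public release (a 3B checkpoint is published under a similar name), and the slow tokenizer is used because the model card cautions against the fast one.

```python
# pip install transformers accelerate torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "openlm-research/open_llama_7b"  # assumed repo id; a 3B variant is published alongside it
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False)  # slow tokenizer, per the model card
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Q: What is the largest animal?\nA:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```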
20
PanGu-α
Huawei
PanGu-α has been created using the MindSpore framework and utilizes a powerful setup of 2048 Ascend 910 AI processors for its training. The training process employs an advanced parallelism strategy that leverages MindSpore Auto-parallel, which integrates five different parallelism dimensions—data parallelism, operation-level model parallelism, pipeline model parallelism, optimizer model parallelism, and rematerialization—to effectively distribute tasks across the 2048 processors. To improve the model's generalization, we gathered 1.1TB of high-quality Chinese language data from diverse fields for pretraining. We conduct extensive tests on PanGu-α's generation capabilities across multiple situations, such as text summarization, question answering, and dialogue generation. Additionally, we examine how varying model scales influence few-shot performance across a wide array of Chinese NLP tasks. The results from our experiments highlight the exceptional performance of PanGu-α, demonstrating its strengths in handling numerous tasks even in few-shot or zero-shot contexts, thus showcasing its versatility and robustness. This comprehensive evaluation reinforces the potential applications of PanGu-α in real-world scenarios.
21
Gemini Live API
Google
The Gemini Live API is an advanced preview feature designed to facilitate low-latency, bidirectional interactions through voice and video with the Gemini system. This innovation allows users to engage in conversations that feel natural and human-like, while also enabling them to interrupt the model's responses via voice commands. In addition to handling text inputs, the model is capable of processing audio and video, yielding both text and audio outputs. Recent enhancements include the introduction of two new voice options and support for 30 additional languages, along with the ability to configure the output language as needed. Furthermore, users can adjust image resolution settings (66/256 tokens), decide on turn coverage (whether to send all inputs continuously or only during user speech), and customize interruption preferences. Additional features encompass voice activity detection, new client events for signaling the end of a turn, token count tracking, and a client event for marking the end of the stream. The system also supports text streaming, along with configurable session resumption that retains session data on the server for up to 24 hours, and the capability for extended sessions utilizing a sliding context window for better conversation continuity. Overall, Gemini Live API enhances interaction quality, making it more versatile and user-friendly.
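Below is a rough sketch of a text-only Live session using the google-genai Python SDK; it is based on the public preview documentation, and the model name (gemini-2.0-flash-live-001), the config fields, and the send_client_content/receive method names are assumptions that may shift between SDK releases.

```python
# pip install google-genai
import asyncio

from google import genai

client = genai.Client()  # expects GEMINI_API_KEY in the environment


async def main():
    config = {"response_modalities": ["TEXT"]}  # assumed config shape from the preview docs
    async with client.aio.live.connect(model="gemini-2.0-flash-live-001", config=config) as session:
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "Give me a one-line summary of the Live API."}]},
            turn_complete=True,
        )
        async for message in session.receive():
            if message.text:
                print(message.text, end="")


asyncio.run(main())
```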
22
Ray3
Luma AI
$9.99 per month
Ray3, developed by Luma Labs, is a cutting-edge video generation tool designed to empower creators in crafting visually compelling narratives with professional-grade quality. This innovative model allows for the production of native 16-bit High Dynamic Range (HDR) videos, which results in enhanced color vibrancy, richer contrasts, and a streamlined workflow akin to those found in high-end studios. It leverages advanced physics and ensures greater consistency in elements such as motion, lighting, and reflections, while also offering users visual controls to refine their projects. Additionally, Ray3 features a draft mode that facilitates rapid exploration of concepts, which can later be refined into stunning 4K HDR outputs. The model is adept at interpreting prompts with subtlety, reasoning about creative intent, and conducting early self-evaluations of drafts to make necessary adjustments for more precise scene and motion representation. Moreover, it includes capabilities such as keyframe support, looping and extending functions, upscaling options, and the ability to export frames, making it an invaluable asset for seamless integration into professional creative processes. By harnessing these features, creators can elevate their storytelling through dynamic visual experiences that resonate with their audiences.
23
MAI-Voice-1
Microsoft
MAI-Voice-1 represents Microsoft's inaugural model for generating highly expressive and natural speech, aimed at delivering high-quality, emotionally nuanced audio in both single and multi-speaker contexts with remarkable efficiency, enabling the creation of an entire minute of audio in less than a second using just one GPU. This innovative technology is incorporated into Copilot Daily and Podcasts, enhancing a new Copilot Labs experience where users can explore its expressive speech and storytelling prowess, allowing for the development of interactive "choose your own adventure" stories or customized guided meditations with simple input. The vision for voice technology is to serve as the future interface for AI companions, and MAI-Voice-1 embodies this future with its swift performance and lifelike quality, solidifying its position as one of the most advanced speech generation systems on the market. Microsoft is actively investigating the opportunities presented by voice interfaces to foster engaging, personalized interactions with AI systems, potentially transforming how users connect with technology. Through these advancements, the integration of MAI-Voice-1 is set to redefine user experiences in various applications.
24
Zyphra Zonos
Zyphra
$0.02 per minute
Zyphra is thrilled to unveil the beta release of Zonos-v0.1, which boasts two sophisticated, real-time text-to-speech models that include high-fidelity voice cloning capabilities. Our release features a 1.6B transformer and a 1.6B hybrid model, both under the Apache 2.0 license. Given the challenges in quantitatively assessing audio quality, we believe that the generation quality produced by Zonos is on par with or even surpasses that of top proprietary TTS models currently available. Additionally, we are confident that making models of this quality publicly accessible will greatly propel advancements in TTS research. You can find the Zonos model weights on Huggingface, with sample inference code available on our GitHub repository. Furthermore, Zonos can be utilized via our model playground and API, which offers straightforward and competitive flat-rate pricing options. To illustrate the performance of Zonos, we have prepared a variety of sample comparisons between Zonos and existing proprietary models, highlighting its capabilities. This initiative emphasizes our commitment to fostering innovation in the field of text-to-speech technology.
25
Decart Mirage
Decart Mirage
Free
Mirage represents a groundbreaking advancement as the first real-time, autoregressive model designed for transforming video into a new digital landscape instantly, requiring no pre-rendering. Utilizing cutting-edge Live-Stream Diffusion (LSD) technology, it achieves an impressive processing rate of 24 FPS with latency under 40 ms, which guarantees smooth and continuous video transformations while maintaining the integrity of motion and structure. Compatible with an array of inputs including webcams, gameplay, films, and live broadcasts, Mirage can dynamically incorporate text-prompted style modifications in real-time. Its sophisticated history-augmentation feature ensures that temporal coherence is upheld throughout the frames, effectively eliminating the common glitches associated with diffusion-only models. With GPU-accelerated custom CUDA kernels, it boasts performance that is up to 16 times faster than conventional techniques, facilitating endless streaming without interruptions. Additionally, it provides real-time previews for both mobile and desktop platforms, allows for effortless integration with any video source, and supports a variety of deployment options, enhancing accessibility for users. Overall, Mirage stands out as a transformative tool in the realm of digital video innovation.
26
Nano Banana
Google
Nano Banana offers a streamlined, user-friendly way to generate and edit images using Gemini’s “Fast” model. It focuses on fun, casual transformations, making it great for remixing selfies, trying new styles, or merging multiple pictures into a single creation. The model handles character consistency well, ensuring that people look like themselves even when placed in new settings or artistic interpretations. Users can easily perform spot edits like changing backgrounds, adjusting small details, or adding creative elements without needing advanced controls. Nano Banana also excels at playful results such as figurine effects, retro photo booth aesthetics, or themed portraits. These quick edits allow anyone to explore creative concepts in seconds. It’s built for low-effort, high-fun experimentation, making it perfect for social media content or personal projects. Nano Banana provides an approachable entry point for image generation without the depth or complexity of Pro-level features.
27
Phi-2
Microsoft
We are excited to announce the launch of Phi-2, a language model featuring 2.7 billion parameters that excels in reasoning and language comprehension, achieving top-tier results compared to other base models with fewer than 13 billion parameters. In challenging benchmarks, Phi-2 competes with and often surpasses models that are up to 25 times its size, a feat made possible by advancements in model scaling and meticulous curation of training data. Due to its efficient design, Phi-2 serves as an excellent resource for researchers interested in areas such as mechanistic interpretability, enhancing safety measures, or conducting fine-tuning experiments across a broad spectrum of tasks. To promote further exploration and innovation in language modeling, Phi-2 has been integrated into the Azure AI Studio model catalog, encouraging collaboration and development within the research community. Researchers can leverage this model to unlock new insights and push the boundaries of language technology.
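For anyone who wants to try it locally, here is a minimal sketch that loads Phi-2 from Hugging Face with transformers; the repository id microsoft/phi-2 is the published checkpoint, and recent transformers versions load it without trust_remote_code.

```python
# pip install transformers accelerate torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain in two sentences why the sum of two even numbers is even."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```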
28
PIX4Dmapper
Pix4D
$370 per month
Capture RGB, thermal, or multispectral images using any camera or drone and easily import them into PIX4Dmapper. The advanced photogrammetry algorithms of PIX4Dmapper convert your aerial or ground images into detailed digital maps and 3D models. You can effortlessly process your projects on your desktop with our specialized software or opt for PIX4Dcloud for online processing capabilities. Leverage the robust features of photogrammetry within the rayCloud environment to evaluate, manage, and enhance the quality of your projects. A quality report provides insights into the generated results, calibration specifics, and a variety of project quality indicators for thorough assessment. You can specify an area of interest, choose your processing preferences, add ground control points, and modify point clouds, DSMs, meshes, and orthomosaics as needed. Take advantage of default templates for streamlined project processing, or design custom templates with tailored settings to maintain full command over both data and quality, ensuring optimal results every time. This flexible approach allows you to adapt your workflow precisely to your project's unique requirements.
29
CodeGemma
Google
CodeGemma represents an impressive suite of efficient and versatile models capable of tackling numerous coding challenges, including fill-in-the-middle code completion, code generation, natural language processing, mathematical reasoning, and following instructions. It features three distinct model types: a 7B pre-trained version designed for code completion and generation based on existing code snippets, a 7B variant fine-tuned for translating natural language queries into code and adhering to instructions, and an advanced 2B pre-trained model that offers code completion speeds up to twice as fast. Whether you're completing lines, developing functions, or crafting entire segments of code, CodeGemma supports your efforts, whether you're working in a local environment or leveraging Google Cloud capabilities. With training on an extensive dataset comprising 500 billion tokens predominantly in English, sourced from web content, mathematics, and programming languages, CodeGemma not only enhances the syntactical accuracy of generated code but also ensures its semantic relevance, thereby minimizing mistakes and streamlining the debugging process. This powerful tool continues to evolve, making coding more accessible and efficient for developers everywhere.
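The sketch below shows fill-in-the-middle completion with the 2B checkpoint via transformers; the gated repository id google/codegemma-2b and the FIM sentinel tokens (<|fim_prefix|>, <|fim_suffix|>, <|fim_middle|>) are taken from the published model card and should be double-checked against it.

```python
# pip install transformers accelerate  (the checkpoint is gated; accept the license on Hugging Face first)
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "google/codegemma-2b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Fill-in-the-middle: the model produces the code between the prefix and the suffix.
prompt = "<|fim_prefix|>def mean(values):\n    return <|fim_suffix|>\n<|fim_middle|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=24)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```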
30
LTM-1
Magic AI
Magic’s LTM-1 technology facilitates context windows that are 50 times larger than those typically used in transformer models. As a result, Magic has developed a Large Language Model (LLM) that can effectively process vast amounts of contextual information when providing suggestions. This advancement allows our coding assistant to access and analyze your complete code repository. With the ability to reference extensive factual details and their own prior actions, larger context windows can significantly enhance the reliability and coherence of AI outputs. We are excited about the potential of this research to further improve user experience in coding assistance applications.
31
Ultralytics
Ultralytics
Ultralytics provides a comprehensive vision-AI platform centered around its renowned YOLO model suite, empowering teams to effortlessly train, validate, and deploy computer-vision models. The platform features an intuitive drag-and-drop interface for dataset management, the option to choose from pre-existing templates or to customize models, and flexibility in exporting to various formats suitable for cloud, edge, or mobile applications. It supports a range of tasks such as object detection, instance segmentation, image classification, pose estimation, and oriented bounding-box detection, ensuring that Ultralytics’ models maintain high accuracy and efficiency, tailored for both embedded systems and extensive inference needs. Additionally, the offering includes Ultralytics HUB, a user-friendly web tool that allows individuals to upload images and videos, train models online, visualize results (even on mobile devices), collaborate with team members, and deploy models effortlessly through an inference API. This seamless integration of tools makes it easier than ever for teams to leverage cutting-edge AI technology in their projects.
32
PaliGemma 2
Google
PaliGemma 2 represents the next step forward in tunable vision-language models, enhancing the already capable Gemma 2 models by integrating visual capabilities and simplifying the process of achieving outstanding performance through fine-tuning. This advanced model enables users to see, interpret, and engage with visual data, thereby unlocking an array of innovative applications. It comes in various sizes (3B, 10B, 28B parameters) and resolutions (224px, 448px, 896px), allowing for adaptable performance across different use cases. PaliGemma 2 excels at producing rich and contextually appropriate captions for images, surpassing basic object recognition by articulating actions, emotions, and the broader narrative associated with the imagery. Our research showcases its superior capabilities in recognizing chemical formulas, interpreting music scores, performing spatial reasoning, and generating reports for chest X-rays, as elaborated in the accompanying technical documentation. Transitioning to PaliGemma 2 is straightforward for current users, ensuring a seamless upgrade experience while expanding their operational potential. The model's versatility and depth make it an invaluable tool for both researchers and practitioners in various fields.
33
OpenAI o1-mini
OpenAI
1 Rating
The o1-mini from OpenAI is an innovative and budget-friendly AI model that specializes in improved reasoning capabilities, especially in STEM areas such as mathematics and programming. As a member of the o1 series, it aims to tackle intricate challenges by allocating more time to analyze and contemplate solutions. Although it is smaller in size and costs 80% less than its counterpart, the o1-preview, the o1-mini remains highly effective in both coding assignments and mathematical reasoning. This makes it an appealing choice for developers and businesses that seek efficient and reliable AI solutions. Furthermore, its affordability does not compromise its performance, allowing a wider range of users to benefit from advanced AI technologies.
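A minimal Chat Completions call is sketched below; it assumes the o1-mini model id from the public API and reflects the o1-series conventions at launch (user-only messages and max_completion_tokens, which also covers the model's hidden reasoning tokens), which may have evolved since.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-mini",
    messages=[
        {"role": "user", "content": "How many positive integers n satisfy n^2 < 300? Explain briefly."}
    ],
    max_completion_tokens=2000,  # budget includes the model's internal reasoning tokens
)
print(response.choices[0].message.content)
```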
34
PanGu-Σ
Huawei
Recent breakthroughs in natural language processing, comprehension, and generation have been greatly influenced by the development of large language models. This research presents a system that employs Ascend 910 AI processors and the MindSpore framework to train a language model exceeding one trillion parameters, specifically 1.085 trillion, referred to as PanGu-Σ. This model enhances the groundwork established by PanGu-α by converting the conventional dense Transformer model into a sparse format through a method known as Random Routed Experts (RRE). Utilizing a substantial dataset of 329 billion tokens, the model was effectively trained using a strategy called Expert Computation and Storage Separation (ECSS), which resulted in a remarkable 6.3-fold improvement in training throughput through the use of heterogeneous computing. Through various experiments, it was found that PanGu-Σ achieves a new benchmark in zero-shot learning across multiple downstream tasks in Chinese NLP, showcasing its potential in advancing the field. This advancement signifies a major leap forward in the capabilities of language models, illustrating the impact of innovative training techniques and architectural modifications.
35
GPT-5.1
OpenAI
The latest iteration in the GPT-5 series, known as GPT-5.1, aims to significantly enhance the intelligence and conversational abilities of ChatGPT. This update features two separate model types: GPT-5.1 Instant, recognized as the most popular option, is characterized by a warmer demeanor, improved instruction adherence, and heightened intelligence; on the other hand, GPT-5.1 Thinking has been fine-tuned as an advanced reasoning engine, making it easier to grasp, quicker for simpler tasks, and more diligent when tackling complex issues. Additionally, queries from users are now intelligently directed to the model variant that is best equipped for the specific task at hand. This update not only focuses on boosting raw cognitive capabilities but also on refining the communication style, resulting in models that are more enjoyable to interact with and better aligned with users' intentions. Notably, the system card addendum indicates that GPT-5.1 Instant employs a feature called "adaptive reasoning," allowing it to determine when deeper thought is necessary before formulating a response, while GPT-5.1 Thinking adjusts its reasoning time precisely in relation to the complexity of the question posed. Ultimately, these advancements mark a significant step forward in making AI interactions more intuitive and user-friendly.
36
IBM Granite
IBM
Free
IBM® Granite™ comprises a suite of AI models specifically designed for business applications, built from the ground up to prioritize trust and scalability in AI implementations. Currently, the open-source Granite models can be accessed. Our goal is to make AI widely available to as many developers as possible, which is why we have released the essential Granite Code, as well as Time Series, Language, and GeoSpatial models as open-source on Hugging Face, under the permissive Apache 2.0 license, allowing extensive commercial use without restrictions. Every Granite model is developed using meticulously selected data, ensuring exceptional transparency regarding the sources of the training data. Additionally, we have made the tools that validate and maintain the quality of this data accessible to the public, meeting the rigorous standards required for enterprise-level applications. This commitment to openness and quality reflects our dedication to fostering innovation in the AI landscape.
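As an example of using one of the open checkpoints, the sketch below loads a Granite instruct model with transformers; the exact repository id (ibm-granite/granite-3.0-2b-instruct here) is an assumption and differs between Granite releases, so check the ibm-granite organization on Hugging Face.

```python
# pip install transformers accelerate torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ibm-granite/granite-3.0-2b-instruct"  # assumed id; pick the release you need
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

chat = [{"role": "user", "content": "In one sentence, what does the Apache 2.0 license permit?"}]
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```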
37
Marble
Marble
$14.99 per month
Marble allows you to unify all your social media platforms on a single page that continuously refreshes to include both previous and upcoming posts. By merging posts from various applications with your personal websites, you can present a distinctive page that highlights your most significant content, effectively showcasing your true self to the world. This innovative approach not only enhances your online presence but also simplifies the way you interact with your audience across different platforms.
38
Gemini 3 Deep Think
Google
Gemini 3, the latest model from Google DeepMind, establishes a new standard for artificial intelligence by achieving cutting-edge reasoning capabilities and multimodal comprehension across various formats including text, images, and videos. It significantly outperforms its earlier version in critical AI assessments and showcases its strengths in intricate areas like scientific reasoning, advanced programming, spatial reasoning, and visual or video interpretation. The introduction of the innovative “Deep Think” mode takes performance to an even higher level, demonstrating superior reasoning abilities for exceptionally difficult tasks and surpassing the Gemini 3 Pro in evaluations such as Humanity’s Last Exam and ARC-AGI. Now accessible within Google’s ecosystem, Gemini 3 empowers users to engage in learning, developmental projects, and strategic planning with unprecedented sophistication. With context windows extending up to one million tokens and improved media-processing capabilities, along with tailored configurations for various tools, the model enhances precision, depth, and adaptability for practical applications, paving the way for more effective workflows across diverse industries. This advancement signals a transformative shift in how AI can be leveraged for real-world challenges.
39
VideoPoet
Google
VideoPoet is an innovative modeling technique that transforms any autoregressive language model or large language model (LLM) into an effective video generator. It comprises several straightforward components. An autoregressive language model is trained across multiple modalities—video, image, audio, and text—to predict the subsequent video or audio token in a sequence. The training framework for the LLM incorporates a range of multimodal generative learning objectives, such as text-to-video, text-to-image, image-to-video, video frame continuation, inpainting and outpainting of videos, video stylization, and video-to-audio conversion. Additionally, these tasks can be combined to enhance zero-shot capabilities. This straightforward approach demonstrates that language models are capable of generating and editing videos with impressive temporal coherence, showcasing the potential for advanced multimedia applications. As a result, VideoPoet opens up exciting possibilities for creative expression and automated content creation.
40
GPT-4 Turbo
OpenAI
$0.0200 per 1000 tokens
1 Rating
The GPT-4 model represents a significant advancement in AI, being a large multimodal system capable of handling both text and image inputs while producing text outputs, which allows it to tackle complex challenges with a level of precision unmatched by earlier models due to its extensive general knowledge and enhanced reasoning skills. Accessible through the OpenAI API for subscribers, GPT-4 is also designed for chat interactions, similar to gpt-3.5-turbo, while proving effective for conventional completion tasks via the Chat Completions API. This state-of-the-art version of GPT-4 boasts improved features such as better adherence to instructions, JSON mode, consistent output generation, and the ability to call functions in parallel, making it a versatile tool for developers. However, it is important to note that this preview version is not fully prepared for high-volume production use, as it has a limit of 4,096 output tokens. Users are encouraged to explore its capabilities while keeping in mind its current limitations.
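To show the JSON mode mentioned above, here is a small Chat Completions sketch; the model alias gpt-4-turbo-preview is assumed from the preview-era naming and may differ from the alias currently served.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo-preview",  # assumed preview alias
    response_format={"type": "json_object"},  # JSON mode: the reply is guaranteed to parse as JSON
    messages=[
        {"role": "system", "content": "You are a helpful assistant that replies in JSON."},
        {"role": "user", "content": "Return a JSON object with a 'capabilities' array listing three GPT-4 features."},
    ],
    max_tokens=300,  # comfortably under the 4,096-token output ceiling noted above
)
print(response.choices[0].message.content)
```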
41
NVIDIA Cosmos
NVIDIA
Free
NVIDIA Cosmos serves as a cutting-edge platform tailored for developers, featuring advanced generative World Foundation Models (WFMs), sophisticated video tokenizers, safety protocols, and a streamlined data processing and curation system aimed at enhancing the development of physical AI. This platform empowers developers who are focused on areas such as autonomous vehicles, robotics, and video analytics AI agents to create highly realistic, physics-informed synthetic video data, leveraging an extensive dataset that encompasses 20 million hours of both actual and simulated footage, facilitating the rapid simulation of future scenarios, the training of world models, and the customization of specific behaviors. The platform comprises three primary types of WFMs: Cosmos Predict, which can produce up to 30 seconds of continuous video from various input modalities; Cosmos Transfer, which modifies simulations to work across different environments and lighting conditions for improved domain augmentation; and Cosmos Reason, a vision-language model that implements structured reasoning to analyze spatial-temporal information for effective planning and decision-making. With these capabilities, NVIDIA Cosmos significantly accelerates the innovation cycle in physical AI applications, fostering breakthroughs across various industries.
42
Sky-T1
NovaSky
Free
Sky-T1-32B-Preview is an innovative open-source reasoning model crafted by the NovaSky team at UC Berkeley's Sky Computing Lab. It delivers performance comparable to proprietary models such as o1-preview on various reasoning and coding assessments, while being developed at a cost of less than $450, highlighting the potential for budget-friendly, advanced reasoning abilities. Fine-tuned from Qwen2.5-32B-Instruct, the model utilized a meticulously curated dataset comprising 17,000 examples spanning multiple fields, such as mathematics and programming. The entire training process was completed in just 19 hours using eight H100 GPUs with DeepSpeed Zero-3 offloading technology. Every component of this initiative—including the data, code, and model weights—is entirely open-source, allowing both academic and open-source communities to not only replicate but also improve upon the model's capabilities. This accessibility fosters collaboration and innovation in the realm of artificial intelligence research and development.
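Because the weights are open, the model can be loaded directly with transformers, as in the sketch below; the repository id NovaSky-AI/Sky-T1-32B-Preview is assumed from the release announcement, and a 32B model realistically needs multiple GPUs or offloading, hence device_map="auto".

```python
# pip install transformers accelerate torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "NovaSky-AI/Sky-T1-32B-Preview"  # assumed repo id from the NovaSky release
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

chat = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```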
43
PygmalionAI
PygmalionAI
Free
PygmalionAI is a vibrant community focused on the development of open-source initiatives utilizing EleutherAI's GPT-J 6B and Meta's LLaMA models. Essentially, Pygmalion specializes in crafting AI tailored for engaging conversations and roleplaying. The actively maintained Pygmalion AI model currently features the 7B variant, derived from Meta AI's LLaMA model. Requiring a mere 18GB (or even less) of VRAM, Pygmalion demonstrates superior chat functionality compared to significantly larger language models, all while utilizing relatively limited resources. Our meticulously assembled dataset, rich in high-quality roleplaying content, guarantees that your AI companion will be the perfect partner for roleplaying scenarios. Both the model weights and the training code are entirely open-source, allowing you the freedom to modify and redistribute them for any purpose you desire. Generally, language models, such as Pygmalion, operate on GPUs, as they require swift memory access and substantial processing power to generate coherent text efficiently. As a result, users can expect a smooth and responsive interaction experience when employing Pygmalion's capabilities.
44
Marey
Moonvalley
$14.99 per month
Marey serves as the cornerstone AI video model for Moonvalley, meticulously crafted to achieve exceptional cinematography, providing filmmakers with unparalleled precision, consistency, and fidelity in every single frame. As the first video model deemed commercially safe, it has been exclusively trained on licensed, high-resolution footage to mitigate legal ambiguities and protect intellectual property rights. Developed in partnership with AI researchers and seasoned directors, Marey seamlessly replicates authentic production workflows, ensuring that the output is of production-quality, devoid of visual distractions, and primed for immediate delivery. Its suite of creative controls features Camera Control, which enables the transformation of 2D scenes into adjustable 3D environments for dynamic cinematic movements; Motion Transfer, which allows the timing and energy from reference clips to be transferred to new subjects; Trajectory Control, which enables precise paths for object movements without the need for prompts or additional iterations; Keyframing, which facilitates smooth transitions between reference images along a timeline; and Reference, which specifies how individual elements should appear and interact. By integrating these advanced features, Marey empowers filmmakers to push creative boundaries and streamline their production processes.
45
Act-Two
Runway AI
$12 per month
Act-Two allows for the animation of any character by capturing and transferring movements, facial expressions, and dialogue from a performance video onto a static image or reference video of the character. To utilize this feature, you can choose the Gen‑4 Video model and click on the Act‑Two icon within Runway’s online interface, where you will need to provide two key inputs: a video showcasing an actor performing the desired scene and a character input, which can either be an image or a video clip. Additionally, you have the option to enable gesture control to effectively map the actor's hand and body movements onto the character images. Act-Two automatically integrates environmental and camera movements into static images, accommodates various angles, non-human subjects, and different artistic styles, while preserving the original dynamics of the scene when using character videos, although it focuses on facial gestures instead of full-body movement. Users are given the flexibility to fine-tune facial expressiveness on a scale, allowing them to strike a balance between natural motion and character consistency. Furthermore, they can preview results in real time and produce high-definition clips that last up to 30 seconds, making it a versatile tool for animators. This innovative approach enhances the creative possibilities for animators and filmmakers alike.