Amazon Bedrock
Amazon Bedrock is a fully managed service that simplifies building and scaling generative AI applications by providing access to high-performing foundation models (FMs) from leading AI companies, including AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon. Through a single, unified API, developers can experiment with these models, customize them with techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that work with enterprise systems and data sources. Because Bedrock is serverless, there is no infrastructure to manage, making it straightforward to integrate generative AI capabilities into applications while maintaining security, privacy, and responsible AI practices.
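As a rough illustration of the unified API, the sketch below calls Bedrock's Converse API through the AWS SDK for Python (boto3); the region, model ID, prompt, and inference settings are placeholder assumptions, and the account must already have been granted access to the chosen model.

```python
import boto3

# Assumes AWS credentials are configured and the account has access to the
# example model in the chosen region (both the region and model ID are assumptions).
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize what a foundation model is in two sentences."}],
        }
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.5},
)

# The Converse API returns the assistant reply under output.message.content
print(response["output"]["message"]["content"][0]["text"])
```

Because the Converse API accepts the same request shape for every supported model, swapping providers is a matter of changing the modelId rather than rewriting the request body.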
Learn more
Canopy
Canopy is a cloud-based practice management system for accountants. Its suite of features helps you streamline your firm and connect with clients: workflow, document management, billing and payments, a robust CRM, a secure client portal, and tools that automate post-filing work such as responding to IRS notices.
Learn more
Piper TTS
Piper is a fast, local neural text-to-speech (TTS) system optimized for devices such as the Raspberry Pi 4, delivering high-quality speech synthesis without relying on cloud infrastructure. It uses neural network models trained with VITS and exported to ONNX Runtime, which makes speech generation both efficient and natural-sounding. Piper supports a wide range of languages, including English (US and UK), Spanish (Spain and Mexico), French, German, and many others, with downloadable voices for each. It can be run from the command line or embedded in Python applications via the piper-tts package. Features include real-time audio streaming, JSON input for batch processing, and support for multi-speaker models. Piper relies on espeak-ng to convert text into phonemes before synthesizing speech, and it is used in projects such as Home Assistant, Rhasspy 3, and NVDA, demonstrating its adaptability across platforms and use cases. Its emphasis on local processing makes it a good fit for users who want privacy and efficiency in their speech synthesis.
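As a minimal sketch of command-line use, the snippet below shells out to the piper binary from Python, piping text to stdin and writing a WAV file; the flag names and the voice model filename follow Piper's documented usage but should be treated as assumptions that depend on the installed version and the voice you have downloaded.

```python
import subprocess

# Assumes the piper binary is on PATH and the en_US-lessac-medium voice
# (the .onnx model plus its .json config) has been downloaded to the
# current directory; flags and filenames are assumptions here.
text = "Welcome to the world of local speech synthesis."

subprocess.run(
    ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", "welcome.wav"],
    input=text.encode("utf-8"),
    check=True,
)
print("Wrote welcome.wav")
```

For tighter integration, the same synthesis can be driven through the piper-tts Python package instead of a subprocess, which is the route the description above alludes to.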
Learn more
Llama 2
Introducing the next generation of our open-source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models ranging from 7 billion to 70 billion parameters. The Llama 2 pretrained models were trained on 2 trillion tokens and have double the context length of Llama 1, and the fine-tuned models were trained on over 1 million human annotations. Llama 2 outperforms other open-source language models on many external benchmarks, including tests of reasoning, coding, proficiency, and knowledge. Llama 2 was pretrained on publicly available online data sources; the fine-tuned variant, Llama-2-chat, additionally uses publicly available instruction datasets and the human annotations mentioned above. The project is backed by a broad range of global partners who support our open approach to AI, including companies that provided early feedback and plan to build with Llama 2. The enthusiasm around Llama 2 reflects a shift toward developing and using AI more openly and collectively.
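For a rough sense of how the released weights are typically consumed, the sketch below loads the 7B chat variant through the Hugging Face Transformers library; the model ID is an assumption, the repository is gated, and downloading it requires accepting Meta's license terms with an authenticated Hugging Face account.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face model ID for the 7B chat variant; access is gated
# behind Meta's license agreement, so this requires prior approval and login.
model_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain in two sentences why a longer context length matters for a chat model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The larger 13B and 70B checkpoints load the same way; only the model ID and the hardware requirements change.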
Learn more