Google has unveiled [https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024] its latest AI model, Gemini 1.5, which features what the company calls an “experimental” one million token context window. The new capability allows Gemini 1.5 to process extremely long inputs – up to one million tokens – to understand context and meaning. This dwarfs previous AI systems such as Claude 2.1 and GPT-4 Turbo, which max out at 200,000 and 128,000 tokens respectively:

[Image: Gemini 1.5 Pro token comparison – https://www.artificialintelligence-news.com/wp-content/uploads/sites/9/2024/02/gemini-1.5-pro-tokens-comparison-1024x709.jpg]

“Gemini 1.5 Pro achieves near-perfect recall on long-context retrieval tasks across modalities, improves the state-of-the-art in long-document QA, long-video QA and long-context ASR, and matches or surpasses Gemini 1.0 Ultra’s state-of-the-art performance across a broad set of benchmarks,” said Google researchers in a technical paper [https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf] (PDF).

The efficiency of Google’s latest model is attributed to its Mixture-of-Experts (MoE) architecture. “While a traditional Transformer functions as one large neural network, MoE models are divided into smaller ‘expert’ neural networks,” explained Demis Hassabis, CEO of Google DeepMind. “Depending on the type of input given, MoE models learn to selectively activate only the most relevant expert pathways in its neural network. This specialisation massively enhances the model’s efficiency.”

To demonstrate the power of the 1M token context window, Google showed how Gemini 1.5 could ingest the entire 326,914-token Apollo 11 flight transcript and then accurately answer specific questions about it. It also summarised key details from a 684,000-token silent film when prompted.

Google is initially providing developers and enterprises free access to a limited Gemini 1.5 preview with a one million token context window.
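Google has not published Gemini 1.5’s internals, but the routing idea Hassabis describes – activating only the most relevant experts per input – can be illustrated with a minimal top-k gating sketch. All weights below are random placeholders and `moe_layer` is a toy function, not Gemini code; in a real MoE layer the gate and experts are learned during training.

```python
# Minimal sketch of top-k Mixture-of-Experts routing (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # total expert networks in the layer
TOP_K = 2         # experts activated per token
DIM = 16          # hidden dimension

# Each "expert" is a tiny feed-forward transform (one weight matrix here).
expert_weights = rng.normal(size=(NUM_EXPERTS, DIM, DIM))
# The gate scores every expert for a given token.
gate_weights = rng.normal(size=(DIM, NUM_EXPERTS))

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route one token vector through its TOP_K highest-scoring experts."""
    scores = token @ gate_weights            # one score per expert
    top = np.argsort(scores)[-TOP_K:]        # indices of the best experts
    # Softmax over just the selected experts' scores.
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()
    # Weighted sum of the selected experts' outputs. The other
    # NUM_EXPERTS - TOP_K experts are never evaluated, which is the
    # source of the efficiency gain described above.
    return sum(w * (token @ expert_weights[e]) for w, e in zip(weights, top))

token = rng.normal(size=DIM)
out = moe_layer(token)
print(out.shape)  # (16,)
```

The design point is that compute per token scales with `TOP_K` rather than `NUM_EXPERTS`, so total model capacity can grow without a proportional increase in inference cost.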
A general public release with a 128,000 token context window will come later, along with pricing details.

“In December, we launched Gemini 1.0 Pro. Today, we're introducing Gemini 1.5 Pro! 🚀 This next-gen model uses a Mixture-of-Experts (MoE) approach for more efficient training & higher-quality responses. Gemini 1.5 Pro, our mid-sized model, will soon come standard with a…” — Sundar Pichai (@sundarpichai), February 15, 2024 [https://twitter.com/sundarpichai/status/1758145921131630989]

For now, the one million token capability remains experimental. But if it lives up to its early promise, Gemini 1.5 could set a new standard for AI’s ability to understand complex, real-world text.

Developers interested in testing Gemini 1.5 Pro can sign up [https://aistudio.google.com/app/waitlist/97445851] in AI Studio. Google says that enterprise customers can reach out to their Vertex AI account team.

(Image Credit: Google)

See also: Amazon trains 980M parameter LLM with ’emergent abilities’ [https://www.artificialintelligence-news.com/2024/02/15/amazon-trains-980m-parameter-llm-emergent-abilities/]

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo [https://www.ai-expo.net/] taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX [https://www.blockchain-expo.com/], Digital Transformation Week [https://digitaltransformation-week.com/], and Cyber Security & Cloud Expo [https://www.cybersecuritycloudexpo.com/]. Explore other upcoming enterprise technology events and webinars powered by TechForge here [https://techforge.pub/upcoming-events/].

The post Google launches Gemini 1.5 with ‘experimental’ 1M token context [https://www.artificialintelligence-news.com/2024/02/16/google-launches-gemini-1-5-experimental-1m-token-context/] appeared first on AI News [https://www.artificialintelligence-news.com].