Mistral AI's Large-Instruct-2411 Now Available on Vertex AI
Google Cloud

Google Cloud has announced the general availability of Mistral AI's Large-Instruct-2411 model on Vertex AI Model Garden. This 123B-parameter LLM offers enhanced reasoning, knowledge, and coding capabilities, along with improved long-context handling, function calling, and system prompts. It is well suited to complex workflows that require precise instruction following and JSON outputs, to large-context applications that leverage retrieval-augmented generation (RAG), and to code generation tasks.

You can access and deploy the model through Vertex AI's Model-as-a-Service (MaaS) offering or via self-service deployment. Building with Mistral AI models on Vertex AI brings several advantages: you can choose the right model for your needs, from efficient low-latency options to powerful models for complex tasks; experiment easily with the fully managed MaaS; deploy at scale on managed infrastructure with pay-as-you-go pricing; and, with upcoming fine-tuning capabilities, adapt the models for bespoke solutions.

You can also build and orchestrate intelligent agents with Vertex AI's tooling, including LangChain on Vertex AI, and integrate with production environments through Genkit's Vertex AI plugin. Throughout, you benefit from Google Cloud's enterprise-grade security and compliance, including access controls through Vertex AI Model Garden's organization policy.
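As a concrete starting point, here is a minimal Python sketch of a chat call to the model through the MaaS endpoint using application-default credentials. The region, the mistral-large-2411 model ID, the rawPredict path, and the request and response shapes follow the usual pattern for partner models on Vertex AI, but they are assumptions here; check the model card in Model Garden for the exact endpoint and schema.

```python
# Minimal sketch: chat with Mistral Large-Instruct-2411 via Vertex AI MaaS.
# Assumptions (verify against the Model Garden card): model ID, region,
# the rawPredict path, and the Mistral-style request/response payloads.
import requests
import google.auth
from google.auth.transport.requests import Request

PROJECT_ID = "your-gcp-project"   # replace with your project ID
REGION = "us-central1"            # pick a region where the model is offered
MODEL = "mistral-large-2411"      # assumed Model Garden model ID


def chat(prompt: str) -> str:
    # Obtain an OAuth token from application-default credentials.
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    credentials.refresh(Request())

    # Partner models served as MaaS are typically reached via rawPredict.
    url = (
        f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
        f"/locations/{REGION}/publishers/mistralai/models/{MODEL}:rawPredict"
    )
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {credentials.token}"},
        json=payload,
        timeout=60,
    )
    resp.raise_for_status()
    # The response is assumed to use Mistral's OpenAI-style "choices" layout.
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(chat("Return a JSON object listing three uses of long-context RAG."))
```

If you self-deploy the model from Model Garden instead, the same request would target your own Vertex AI endpoint; the MaaS route simply avoids managing that serving infrastructure.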