AWS is investing heavily in building tools for LLMops

This post was originally published on InfoWorld.

Amazon Web Services (AWS) made it easy for enterprises to adopt a generic generative AI chatbot with the introduction of its “plug and play” Amazon Q assistant at its re:Invent 2023 conference. But for enterprises that want to build their own generative AI assistant with their own or someone else’s large language model (LLM) instead, things are more complicated.

To help enterprises in that situation, AWS has been investing in building and adding new tools for LLMops—operating and managing LLMs—to Amazon SageMaker, its machine learning and AI service, Ankur Mehrotra, general manager of SageMaker at AWS, told InfoWorld.com.

“We are investing a lot in machine learning operations (MLops) and foundation large language model operations capabilities to help enterprises manage various LLMs and ML models in production. These capabilities help enterprises move fast and swap parts of models or entire models as they become available,” he said.
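To make the model-swapping idea concrete, here is a minimal sketch using the SageMaker APIs that exist in boto3 today, not the unreleased capabilities Mehrotra describes. The endpoint name, model name, and instance type are hypothetical placeholders; the pattern is simply to register a new endpoint configuration and point the live endpoint at it.

```python
# A minimal sketch, assuming a hypothetical live endpoint "llm-assistant"
# and an already-registered replacement model "my-llm-v2".
import boto3

sm = boto3.client("sagemaker")

# Create a new endpoint configuration that references the new model.
sm.create_endpoint_config(
    EndpointConfigName="llm-assistant-config-v2",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-llm-v2",          # hypothetical, pre-created model
            "InstanceType": "ml.g5.2xlarge",   # assumption: a GPU instance type
            "InitialInstanceCount": 1,
        }
    ],
)

# Point the existing endpoint at the new configuration. SageMaker
# provisions the new variant before retiring the old one, so the
# model swap happens without taking the endpoint offline.
sm.update_endpoint(
    EndpointName="llm-assistant",
    EndpointConfigName="llm-assistant-config-v2",
)
```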

Mehrotra expects the new capabilities to be added soon. Although he wouldn’t say when, the most logical time would be at this year’s re:Invent. For now, his focus is on helping enterprises with the process of maintaining, fine-tuning, and updating the LLMs they use.

Modeling scenarios

There are several scenarios in which enterprises will find these LLMops capabilities useful…

Read the rest of this post, which was originally published on InfoWorld.
