IBM unveils generative AI foundation models

This post was originally published on InfoWorld

IBM has unveiled generative AI foundation models and generative AI enhancements to its Watsonx AI and data platforms.

Announced September 7, IBM’s Granite series of multi-size foundation models use a decoder architecture and apply generative AI to language and code tasks. They support enterprise NLP (natural language processing) tasks such as summarization, content generation, and insight extraction.

IBM plans to offer a comprehensive list of the data sources, as well as a description of the data processing and filtering steps performed to produce training data, for the Granite series, which is due to be available this month. IBM also is offering third-party models, including Meta’s Llama 2-chat 70 billion parameter model and the StarCoder LLM (large language model) for code generation, on IBM Cloud.

The models are trained on IBM’s enterprise-focused data lake. The company said it has established a training process that features rigorous data collection and leverages control points for deployments of models and applications for governance, risk assessment, compliance, and bias mitigation.

Other capabilities planned for the Watsonx platform include:

Tuning Studio, which offers a mechanism to adapt foundation models to unique downstream tasks with an enterprise’s own data. Tuning Studio is due this month.

Synthetic data generator

Read the rest of this post, which was originally published on InfoWorld.
