How much will openness matter to AI?

This post was originally published on InfoWorld.

There is also a talent inversion at play. In the Linux era, the best developers were scattered, making open source the best way to collaborate. In the AI era, the scarce talent (the researchers who understand the math behind the magic) is being hoarded inside the walled gardens of Google and OpenAI.

This changes the definition of “open.” When Meta releases Llama, the license is almost immaterial because of the barriers to running and testing that code at scale. They are not inviting you to co-create the next version. This is “source available” distribution, not open source development, regardless of the license. The contribution loop for AI models is broken. If the “community” (we invoke that nebulous word far too casually) cannot effectively patch, train, or fork the model without millions of dollars in hardware, then the model is not truly open in the way that matters for long-term sustainability.

So why are Meta, Mistral, and DeepSeek releasing these powerful models for free? As I’ve written for years, open source is selfish. Companies contribute to open source not out of charity, but because it commoditizes a competitor’s product, freeing customers’ budgets to spend more on their own proprietary offerings. If the intelligence

Read the rest of this post, which was originally published on InfoWorld.
