This post was originally published on Network Computing
When Generative AI (GenAI) burst onto the scene, I watched for a few months and saw a familiar cycle. It was the same hype that drove cloud to previously unknown heights. And in that cycle were patterns; patterns, it turns out, that are not unique to any one technology but apply to the innovation cycle in general.
So, it seems elementary to apply those cycles to AI and arrive at model specialization as the second wave of generative AI. That conclusion is based on the evolution of SaaS from general software hosting to specific business functions, of cloud from cheap compute to ecosystems of services, and of hardware from general-purpose (CPU) to purpose-specific (GPU) computing.
AI and GenAI are on the fast track
The biggest difference between AI and the cloud cycle is that AI is moving much faster. This can largely be attributed to the open-source model of development and its adoption by enterprises at large. We already have so many derivative models that most of us can't keep up. We're left to track them by broad categories that are increasingly coupled to business function.
This was already evident earlier this year when we dug into AI adoption in one
— Read the rest of this post, which was originally published on Network Computing.