DeepSeek Shifts Network Operators’ View of AI

This post was originally published on Network Computing

The Chinese startup DeepSeek could reshuffle how operators plan to use AI in their networks.

DeepSeek made headlines earlier this year when it released an open-source large language model that it claimed is more efficient and easier to train than U.S. platforms from OpenAI and others. In March, the company rolled out an updated DeepSeek-V3, promising better code execution and a boost in benchmark performance over the earlier V3 model released in December. The new model was trained with less than $6 million worth of computing power from Nvidia H800 chips, Reuters reported.

DeepSeek could reduce the hardware requirements and costs of training AI models, but what does that mean for network operators?

A Move to the Edge

One change is that organizations could use DeepSeek to fortify their edge computing capabilities. The edge, along with the LAN, is where AI models learn, according to Ed Fox, CTO at MetTel, a global provider of integrated digital communications products for businesses and government agencies.

Usman Javaid, chief products and marketing officer at Orange Business, also sees DeepSeek-R1 models running on edge nodes. The models are efficient enough to run wherever users want them, and he said he foresees a day when mobile phones incorporating efficient chipsets will be

Read the rest of this post, which was originally published on Network Computing.
