The Case for Bringing Network Visibility Data into OpenTelemetry

This post was originally published on Network Computing

Guarding users and organizations against productivity-draining conditions has always been a full-time job for ITOps. It’s just that in a digital workplace setting, the stakes are much higher.

The job requires both short-term (reactive) responsiveness and longer-term (preventative and proactive) capability.

Establishing and maintaining constant visibility into the digitally delivered estate is crucial to finding and fixing what might go wrong, whether in the short term, as a break-fix response to an outage, or in the longer term, where continuous improvement or optimization work can address the capacity bottlenecks that constrain users or otherwise degrade performance.

Users depend on code, clouds, and connectivity their employers don't own. There is limited out-of-the-box visibility into how these pieces are assembled, particularly into any third-party dependencies or interdependencies with other services or microservices.

That end-to-end chain of technologies used to deliver a unified digital experience won’t always play nicely together. In a world driven by hybrid or cloud-native deployments, where application components have become smaller, more distributed, shorter-lived, and increasingly ephemeral, a small change has a large blast radius. Timely diagnosis is critical if user experience and productivity are to be maintained.

For that reason, a goal of many ITOps teams is to move towards more…

Read the rest of this post, which was originally published on Network Computing.
