Editor's Desk


The cloud cost wake-up call I predicted

For years, I’ve cautioned organizations about the hidden downsides of cloud computing. Ironically, I’m “Dave the cloud guy” who warns enterprises about the cloud. The cloud’s agility, scalability, and innovation can quickly give way to a financial sinkhole without proper management. Enterprises are finally waking up, and it’s been fascinating to watch the recent shift to diligent cloud cost management.

The costs of unchecked growth

Cloud computing starts as a flexible and budget-friendly option, especially with its enticing pay-per-use model. However, unchecked growth can turn this dream into a financial nightmare due to the complexities the cloud introduces. According to the Flexera State of the Cloud Report, 87% of organizations have adopted multicloud strategies, complicating cost management even more by scattering workloads and expenses across various platforms.

A stark reminder of cloud complexity surfaced when one company was hit by a staggering $65 million cloud monitoring bill in just one quarter, underscoring the urgency for businesses to gain visibility into their spending. As I’ve consistently preached, “Control your costs or your costs will control you.” Organizations are finally recognizing the need to analyze their cloud expenditures in depth. I hate to say “I told you so,” but I have advocated for better accountability for years.

“You can’t fix what you can’t see” is another guiding principle in my discussions of cloud cost management. Many organizations have embraced the agility that the cloud delivers, yet overlook the critical need for visibility. Effective tracking and optimization become nearly impossible when expenses are scattered across different providers in multicloud environments. Complexity is an unavoidable outcome of multicloud and comes with higher operational costs, which must also be managed. Flexera’s findings highlight that most organizations struggle with limited visibility into how resources are consumed, which is a massive barrier to effective management. Centralized monitoring is essential. Without it, organizations are flying blind, unable to spot waste or identify areas for optimization.

The microservices conundrum

The rise of cloud-native applications and microservices has further complicated cost management. These systems abstract physical resources, simplifying development but making costs harder to predict and control. Recent studies have revealed that 69% of CPU resources in container environments go unused, a direct contradiction of optimal cost management practices. Although open-source tools like Prometheus are excellent for tracking usage and spending, they often fall short as organizations scale. I recommend third-party monitoring solutions tailored to multicloud and microservices environments. These tools dive into specifics such as pods, nodes, and namespaces to provide deep insights and actionable recommendations for right-sizing workloads. This ensures efficient resource allocation without compromising performance.

Cloud cost optimization isn’t a one-time event; it requires an ongoing commitment. Organizations must develop processes to analyze and optimize cloud usage regularly. Identifying overprovisioned workloads and aligning resources with actual needs are fundamental steps. Establishing a feedback loop to monitor performance metrics post-optimization is also crucial. If performance metrics such as service-level agreement compliance drop after a change, revisiting that change is vital. This iterative process ensures that cost savings do not come at the expense of functionality or business objectives.
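To make the right-sizing step concrete, here is a minimal sketch that flags overprovisioned workloads by comparing requested CPU against observed usage. The data structures, sample numbers, and 50% threshold are illustrative assumptions for this example, not part of any particular monitoring product; in practice the usage figures would come from a system like Prometheus.

```typescript
// Minimal right-sizing check: flag workloads whose observed CPU usage
// falls well below what they request. All values are illustrative.

interface WorkloadUsage {
  name: string;
  requestedCpuMillicores: number; // CPU requested from the scheduler
  observedCpuMillicores: number;  // average CPU actually consumed
}

// Hypothetical sample data; a real version would pull this from monitoring.
const workloads: WorkloadUsage[] = [
  { name: "checkout-api", requestedCpuMillicores: 2000, observedCpuMillicores: 350 },
  { name: "search-index", requestedCpuMillicores: 1000, observedCpuMillicores: 820 },
];

const UTILIZATION_THRESHOLD = 0.5; // flag anything using under half its request

for (const w of workloads) {
  const utilization = w.observedCpuMillicores / w.requestedCpuMillicores;
  if (utilization < UTILIZATION_THRESHOLD) {
    console.log(
      `${w.name}: ${(utilization * 100).toFixed(0)}% CPU utilization; ` +
      `consider lowering the request toward ~${Math.ceil(w.observedCpuMillicores * 1.2)}m`
    );
  }
}
```

The 20% headroom added to the observed usage is likewise an assumption; the point is simply that right-sizing decisions should flow from measured consumption, then be re-checked against performance metrics after the change.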
A critical component of effective cloud cost management is demystifying cloud pricing models. Providers often lay out their pricing structures in great detail, but translating them into actual costs can be difficult. A lack of understanding can lead to spiraling costs. The irony is that the tools intended to control cloud costs can often inflate them, as evidenced by the infamous $65 million cloud bill. Monitoring tools can serve as a helpful resource, but they can also become liabilities. Organizations must evaluate the pricing models of both their cloud services and monitoring solutions to ensure insights remain affordable and scalable, providing value without becoming a financial burden.

Ignoring an urgent call to action

In today’s economic environment, every expense should be scrutinized. However, cloud costs sometimes escape assessment due to their association with innovation and agility. Developers often prioritize speed and functionality, neglecting the potential financial implications of their decisions. Cloud-native containers and microservices have benefits, but their substantial cost downsides often go unaddressed. This is where leadership must step in. Understanding the financial impacts of resource allocation enables organizations to make smarter choices and strike a balance between performance and cost.

An increased focus on cloud cost management is a welcome change, but it’s only the beginning. Through strategic oversight and ongoing optimization, organizations can harness the true potential of the cloud while maintaining financial health. I’ve said it for years: A proactive approach to managing cloud expenses is essential for success and survival in this fast-paced digital landscape. Hopefully, enterprises are getting religion now about cloud cost management. However, I won’t believe it until I see it. Call me crazy.

The journey towards a knowledge graph for generative AI

How does the journey to a knowledge graph start with unstructured data—such as text, images, and other media? The evolution of web search engines offers an instructive example, showing how knowledge can be extracted from unstructured sources and refined over time into a structured, interconnected graph. As we will show in this post, this process underpins the journey from retrieval-augmented generation (RAG) systems to more sophisticated approaches like GraphRAG and Knowledge-GraphRAG.

From isolated nodes to graph of knowledge and knowledge graph

Early search engines like AltaVista relied on simple keyword matching, treating web pages as isolated entities. However, web pages are interconnected through hyperlinks. Google transformed search by recognizing that the world wide web is not merely a collection of standalone pages but a vast network of interconnected knowledge—what we refer to as a graph of knowledge.

[Figure: From AltaVista to Google, and from strings to things. RelationalAI]

While this approach significantly enhanced search capabilities, it soon became apparent that to support more advanced functions like reasoning, a more robust solution was necessary: a structured, machine-readable framework. This shift in perspective culminated in Google’s 2012 introduction of the knowledge graph, encapsulated by the phrase “things not strings,” which aimed to connect entities rather than just words.

[Figure: The graph of knowledge vs. the knowledge graph. RelationalAI]

Whereas the graph of knowledge (GoK) is a broader, more conceptual idea focusing on interconnected information, without necessarily being highly structured, the knowledge graph (KG) refers to a formal, structured, machine-readable network of entities and relationships, designed for advanced reasoning and AI tasks. Tim Berners-Lee, the inventor of the web, had long foreseen this need for a structured way to organize information, coining the term “semantic web” in his book Weaving the Web. While this vision of the semantic web took time to materialize, Google’s knowledge graph made it practical, setting the stage for the development of sophisticated AI systems that could reason over these knowledge networks. Similarly, companies like Amazon created a product graph, and the open-source community worked on initiatives like Wikidata, which organized Wikipedia into a massive, public knowledge graph.

From knowledge graphs to question answering

The creation of knowledge graphs transformed how information was retrieved, organized, and connected, moving from simple keyword matching to sophisticated entity recognition. But this advancement didn’t stop at improving web search. It became a cornerstone in solving more complex problems in AI, particularly in the realm of question answering (QA) systems. QA systems are one of the most powerful applications in the generative AI space, requiring the ability to extract precise information from both structured and unstructured data. As the complexity of questions increases, so does the need for more structured, interconnected knowledge—just as the development of knowledge graphs addressed the need for a deeper, more context-aware web search.
There are three common types of questions, each with varying levels of complexity and requirements for structured data:

- Single-point access questions: Simple fact-based queries that can be answered by retrieving a single text snippet.
- Multi-point access questions: Questions requiring multiple text snippets that must be retrieved and presented together for comprehensive answers.
- Advanced reasoning questions: More complex queries that necessitate integrating multiple pieces of information, often requiring symbolic reasoning that goes beyond the capabilities of standard language models.

While the first two question types can often be answered using a graph of knowledge, the third type (advanced reasoning questions) demands a more structured approach—a true knowledge graph.

[Figure: Examples of different types of questions. RelationalAI]

From RAG to GraphRAG: Answering questions over a GoK

Retrieval-augmented generation has emerged as the state-of-the-art approach for question answering in the generative AI era. Like the early keyword-based search engines, RAG treats documents as independent entities, indexing each document segment separately. So while RAG is effective for simpler queries, the approach doesn’t leverage the deeper connections between information that exist across documents.

[Figure: From RAG to GraphRAG. RelationalAI]

To address this limitation, Microsoft introduced the concept of GraphRAG in early 2024. GraphRAG organizes information into a graph of knowledge, enabling it to leverage relationships between pieces of information, much like how Google revolutionized web search by treating pages as part of an interconnected web.

Large language models (LLMs) play a crucial role in this process. When presented with a set of documents, LLMs generate entities and relationships in the form of triplets. Although these triplets may contain noise or redundancy, they offer a robust method for organizing information effectively. By treating text passages as nodes in a graph, GraphRAG enables graph operations like community detection, pattern extraction, and graph traversal. These operations allow for the synthesis of multiple pieces of information, which can then be fed into RAG models to generate richer, more accurate answers to multi-point questions. In short, GraphRAG helps build a graph of knowledge by connecting fragmented text into a graph-like structure, providing LLMs with more relevant, interconnected input to improve question answering performance.

[Figure: In the GraphRAG pipeline, documents are linked together (graph of knowledge) through entities and relations among them, followed by community detection, which results in a summary for each community of documents. RelationalAI]
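To make the triplet idea concrete, here is a minimal sketch of how extracted (subject, relation, object) triplets can be assembled into an in-memory graph and traversed. The types and sample triplets are illustrative assumptions, not part of any GraphRAG implementation.

```typescript
// Assemble LLM-extracted triplets into an adjacency list and walk it.
// The sample triplets below are made up for illustration.

type Triplet = { subject: string; relation: string; object: string };

const triplets: Triplet[] = [
  { subject: "Acme Corp", relation: "acquired", object: "Widget Inc" },
  { subject: "Widget Inc", relation: "founded_by", object: "Jane Doe" },
  { subject: "Jane Doe", relation: "advises", object: "Acme Corp" },
];

// Build an adjacency list: entity -> outgoing edges.
const graph = new Map<string, Triplet[]>();
for (const t of triplets) {
  const edges = graph.get(t.subject) ?? [];
  edges.push(t);
  graph.set(t.subject, edges);
}

// Collect every entity reachable from a starting node (breadth-first),
// the kind of traversal a GraphRAG-style pipeline can use to gather
// interconnected context before handing it to an LLM.
function reachable(start: string): string[] {
  const seen = new Set<string>([start]);
  const queue = [start];
  while (queue.length > 0) {
    const node = queue.shift()!;
    for (const edge of graph.get(node) ?? []) {
      if (!seen.has(edge.object)) {
        seen.add(edge.object);
        queue.push(edge.object);
      }
    }
  }
  return [...seen];
}

console.log(reachable("Acme Corp")); // ["Acme Corp", "Widget Inc", "Jane Doe"]
```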
From GraphRAG to Knowledge-GraphRAG: Answering questions over a KG

While GraphRAG relies on the reasoning capabilities of LLMs to connect text-based data in a graph of knowledge, the third type of question—those requiring deep reasoning—needs more than a GoK. Such questions require a fully structured knowledge graph, where facts, entities, and relationships are organized into a formal ontology. Knowledge graphs not only store factual information but also capture complex relationships and rules that are essential for advanced reasoning. In these scenarios, LLMs are still important, but their role shifts from generating or synthesizing content to querying the structured KG. The LLM retrieves entities and relations from the knowledge graph and formulates a query based on the structured knowledge within it. A specialized knowledge graph engine then executes the query, returning a precise and logical answer.

This process is detailed in our recent publication, QirK: Question Answering via Intermediate Representation on Knowledge Graphs, which outlines a framework for combining LLM capabilities with the logical power of knowledge graphs to answer complex queries. The framework supports question answering on top of the popular Wikidata knowledge graph. Example questions that can be answered are “Name a movie directed by Quentin Tarantino or Martin Scorsese that has Robert De Niro as a cast member,” “Which movie’s director is married to a cast member?” and “List the movies in which both Robert De Niro and Al Pacino were cast.”

[Figure: QirK architecture. RelationalAI]
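For a sense of what executing such a structured query looks like, here is a sketch that runs a hand-written SPARQL version of the first example question against the public Wikidata endpoint. This is not QirK’s intermediate representation; the property and entity IDs are recalled from memory and labeled as assumptions in the comments, so verify them against Wikidata before relying on the result. The sketch assumes a runtime with fetch and top-level await (an ES module on Node 18+ or Deno).

```typescript
// Ask Wikidata for movies directed by Tarantino or Scorsese that have
// Robert De Niro as a cast member. Entity and property IDs are assumed.
const sparql = `
  SELECT DISTINCT ?film ?filmLabel WHERE {
    VALUES ?director { wd:Q3772 wd:Q41148 }   # Tarantino, Scorsese (assumed IDs)
    ?film wdt:P57 ?director ;                 # P57: director
          wdt:P161 wd:Q36949 .                # P161: cast member; Q36949: De Niro (assumed)
    SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
  }`;

const url =
  "https://query.wikidata.org/sparql?format=json&query=" +
  encodeURIComponent(sparql);

const response = await fetch(url, {
  headers: { Accept: "application/sparql-results+json" },
});
const data = await response.json();
for (const row of data.results.bindings) {
  console.log(row.filmLabel.value);
}
```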
The road to advanced AI reasoning

Generative AI provides an unprecedented opportunity to reshape the way we organize and retrieve knowledge. The journey from unstructured data (texts, images, etc.) to a fully structured knowledge graph—rich in facts, logical constraints, and recursive rules—is complex and challenging, but the rewards are immense. Intermediate steps, such as the graph of knowledge, offer practical solutions that advance AI applications like question answering, even as we work toward the more ambitious goal of fully realized knowledge graphs. While the path to creating high-quality knowledge graphs may be long, tools like GraphRAG represent important milestones on this journey. By bridging the gap between unstructured text and structured knowledge, they pave the way for AI systems capable of answering increasingly complex questions with greater accuracy, making the vision of advanced, reasoning-powered QA systems a reality.

Nikolaos Vasiloglou is VP of Research-ML for RelationalAI, the industry’s first knowledge graph coprocessor for the data cloud. Nikolaos has over 20 years of experience implementing high-value machine learning and AI solutions across various industries.

—

Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.

CISA publishes security goals for software development process, product design

The US Cybersecurity & Infrastructure Security Agency (CISA) has published IT sector-specific goals (IT SSGs) to protect against cyber threats, including 11 software development process goals and seven product design goals. Published January 7, the Information Technology (IT) Sector-Specific Goals were based on CISA operational data and research on the current threat landscape. The IT SSGs are additional voluntary practices with high-impact security actions that go beyond the cross-sector cybersecurity performance goals (CPGs).

The number-one software development process goal cited is to separate all environments used in software development—including development, build, test, and distribution environments—to prevent unauthorized access to sensitive data and systems. The number-one goal for secure product design cited is to increase the use of multifactor authentication (MFA) to reduce the risk of password compromise or the use of weak passwords. The goals were developed in collaboration with government, industry groups, and private sector groups.

The complete list of security goals for the software development process:

1. Separate all environments used in software development.
2. Regularly log, monitor, and review trust relationships used for authorization and access across software development environments.
3. Enforce multifactor authentication (MFA) across software development environments.
4. Establish and enforce security requirements for software products used across software development environments.
5. Securely store and transmit credentials used in software development environments.
6. Implement effective perimeter and internal network monitoring solutions with streamlined, real-time alerting to aid responses to suspected and confirmed cyber incidents.
7. Establish a software supply chain risk management program.
8. Make a software bill of materials (SBOM) available to customers.
9. Inspect source code for vulnerabilities through automated tools or comparable processes and mitigate known vulnerabilities prior to any release of products, versions, or update releases.
10. Address identified vulnerabilities prior to product release.
11. Publish a vulnerability disclosure policy.

The complete list of security goals for software product design:

1. Increase the use of multifactor authentication (MFA).
2. Reduce default passwords.
3. Reduce entire classes of vulnerabilities.
4. Provide customers with security patching in a timely manner.
5. Ensure customers understand when products are nearing end-of-life support and security patches will no longer be provided.
6. Include common weakness enumeration (CWE) and common platform enumeration (CPE) fields in every common vulnerabilities and exposures (CVE) record for the organization’s products.
7. Increase the ability for customers to gather evidence of cybersecurity intrusions affecting the organization’s products.

Yes, you should use AI coding assistants—but not like that

If you’re exhausted by the constantly changing AI landscape, you’re not alone. In a thoughtful post, Microsoft Research brainiac Victor Dibia captures the “particular kind of fatigue that comes from trying to match the unprecedented pace of AI advancement.” However, if you’re a developer, you no longer have the option to sit out generative AI’s impact on software development. Yes, your mileage may vary depending on whether you’re a more experienced developer or a less seasoned one, but we’ve reached a point where you simply must be using AI to assist you in your work.

But how? Applied AI engineer Sankalp Shubham details the evolution of AI-driven coding assistants, with excellent advice on how to use them effectively. Shubham likens coding assistance to a car: Features like autocomplete give you maximum control but move slowly (first gear), while more ambitious features like agentic mode “trade granular control for more speed and automation.” The irony is that more experienced developers tend to (rightly) play it safe in “first gear,” while less experienced developers give AI more control in order to go fast, breaking things in the process.

More assistance, more problems

This is such a critical point. AI is a must for software developers, but not because it removes work. Rather, it changes how developers should work. For those who just entrust their coding to a machine, well, the results are dire. Santiago Valdarrama calls it the “whack-a-model AI workflow”:

1. Ask a model to generate some code.
2. The code has a bug.
3. Ask the model to fix the bug.
4. You now have two different bugs.
5. Ask the model to fix the bugs.
6. There’s a third bug now.

As he summarizes, “This is the sad reality for those who can’t understand the code their AI model generated.” AI, in short, helps with software development, but it doesn’t replace software developers. Without the intelligence applied by people, it’s prone to all sorts of mistakes that won’t get caught, which creates all sorts of issues. As Honeycomb CTO Charity Majors puts it, AI has done nothing “to aid in the work of managing, understanding, or operating … code. If anything, it has only made the hard jobs harder.”

Use AI wrong and things get worse, not better. Stanford researcher Yegor Denisov-Blanch notes that his team has found that AI increases both the amount of code delivered and the amount of code that needs reworking, which means that “actual ‘useful delivered code’ doesn’t always increase” with AI. In short, “some people manage to be less productive with AI.” So how do you ensure you get more done with coding assistants, not less?

Driving slowly gives greater control

As Shubham reminds us with his car analogy, “The lower the gear in a car, the more control you have over the engine, but you go with less speed.” As applied to coding, “If you feel in control, go to a higher gear. If you are overwhelmed or stuck, go to a lower gear.” That’s the secret: It’s always personal to the developer in question and requires a level of self-awareness. As Shubham says, “AI-assisted coding is all about grokking when you need to gain more granular control and when you need to let go of control to move faster,” recognizing that “higher-level gears leave more room for errors and trust issues.” More senior engineers seem to understand this, trusting AI tools cautiously (i.e., using them to get more done while in “lower gears” like autocomplete).
The problem is that junior engineers and non-engineers trust AI tools far more than they should. To some extent, we can blame years of marketing by low-code and no-code platforms that promise to turn everyone into a developer without any (or much) software knowledge. This has always been a false hope. Here’s the solution: If you want to use AI coding assistants, don’t use them as an excuse not to learn to code. The robots aren’t going to do it for you.

The engineers who will get the most out of AI assistants are those who know software best. They’ll know when to give control to the coding assistant and how to constrain that assistance (perhaps by narrowing the scope of the problem they allow it to work on). Less experienced engineers run the risk of moving fast but then getting stuck, or of not recognizing the bugs the AI has created. The TL;DR? AI can’t replace good programming, because it really doesn’t do good programming. It can still be very helpful, but its helpfulness correlates strongly with the relative expertise of the developer using it.

Open source trends for 2025 and beyond

Over the past decades, open source software (OSS) has transformed from being merely a cheaper option into the superior choice for enterprise infrastructure. It now often provides higher quality, stronger security, better privacy, unparalleled extensibility, and faster access to innovation than proprietary rivals. It is no coincidence that 96% of all software today relies on open source, and large enterprises are increasingly inclined to invest in OSS-based solutions to capitalize on these advantages. For venture investors like myself, this shift in market preferences represents a significant opportunity to fund the next generation of OSS-based category leaders in enterprise software. A few notable trends will likely shape how this market evolves in 2025 and beyond.

Rise of open source AI

The rapid development of foundational large language models, related AI infrastructure, and their applications has ignited debates around crucial AI challenges. Many of these issues — such as transparency, adaptability, and security — can be addressed through openness. After the initial wave led by closed-source pioneers like OpenAI and Anthropic, a new cohort of open source AI models, including Meta’s Llama and Mistral AI’s models, is now raising the tide and boosting the global AI ecosystem. While debates about the definition of Open Source AI continue, with the Open Source Initiative (OSI) recently publishing its first draft, this ambiguity hasn’t slowed the adoption of modern AI models.

However, to maximize their value, enterprises have to customize AI to their specific needs — whether by building tailored AI infrastructure, fine-tuning models on proprietary data sets, or building AI agents for specialized tasks. Open source is exceptionally well-positioned to address these demands, and the future is going to be open. Each month, new AI infrastructure companies emerge, and the current top AI OSS projects developed by startups (measured by yearly active contributors on GitHub) include LangChain, LlamaIndex, Hugging Face, Dify, and Ollama. What makes the rise of open source AI particularly significant is its ability to influence and amplify other open source trends. AI is generally changing how software is built and consumed, and that has important (mainly positive) consequences for open source.

Expanding to business application platforms

Historically, open source has thrived in developer-centric areas such as software development tools and infrastructure, including databases. However, over the past two decades, many enterprise suites like ERP and CRM — which began as business applications — have evolved into essential platforms as new application layers have been built on top of them. Open source is actively capturing the modern enterprise infrastructure and has a strong chance to eventually disrupt the closed-source ecosystems of legacy enterprise suite vendors with better alternatives. A great example is Odoo, an open-source ERP platform, which recently raised another funding round at a $5.3 billion valuation and challenges SAP’s dominance in certain niches. New notable players are emerging in similar areas: Twenty offers an open-source enterprise CRM (an alternative to Salesforce), Plane provides an open-source project management system (an alternative to Jira and Asana), and Cal.com offers a scheduling platform (an alternative to Calendly). The rise of AI agents is accelerating this trend.
To succeed at scale, these agents will require extensive customization and close integration with internal enterprise data sources and workflows (as human employees have), driving the adoption of AI-native, adaptive, open-source business application platforms.

Mitigating risks in the software supply chain

With the average software application now relying on over 500 open-source dependencies, software supply chain security has become a critical concern for enterprises. Many OSS projects are developed by unpaid enthusiasts who lack the resources for ongoing maintenance, leading to potential vulnerabilities — as in the case of Apache Log4j. The adoption of AI coding tools, such as GitHub Copilot, will further accelerate code creation, increasing the overall code base and potentially worsening these security challenges. According to Gartner, the cost of software supply chain attacks is expected to rise from $46 billion in 2023 to $138 billion by 2031. To address these growing risks to IT infrastructure, enterprises will need to adopt next-gen tools that leverage both modern AI and OSS in software composition analysis, vulnerability detection, software bills of materials, alerting, observability, AIOps, and other areas of devops and devsecops.

Exploring new funding models

Sustainability remains one of the core challenges for the open-source ecosystem. While some projects can be commercialized — though that poses its own set of challenges — the majority of OSS cannot, and therefore continues to rely on unsustainable, non-profit sources of funding. In the world of commercial OSS organizations, discussions about the evolution of open-source licenses are set to intensify. Pressured by large cloud vendors, a few more tech companies will probably shift to source-available and other licenses that are not OSI-approved. The rise of AI adds another layer of complexity to these debates, but it also boosts the established open-core business model, where modern AI-based premium features on top of free OSS code could have much better monetization potential.

For free community-driven OSS, a systemic, sustainable, and efficient funding model is still missing. This gap poses growing risks to the global software infrastructure. However, 2024 introduced several promising ideas and experiments that may pave the way for viable solutions in 2025. One such initiative is the Open Source Pledge, which encourages companies to compensate OSS maintainers with at least $2,000 per full-time developer they employ. Another initiative involves index-based, programmatic funding to support the long tail of small but crucial OSS projects. Finally, a potentially transformative solution for sustainable funding of OSS could be the open source endowment. It’s a financing model that has sustained leading universities for centuries, and the global OSS community has a lot in common with them.

In summary, 2025 promises to be an exciting year for the evolution of open source software. The changes will likely be driven by the increasing and interlinked adoption of AI and OSS across all levels of the enterprise tech stack, alongside next-gen commercial and non-profit solutions addressing OSS sustainability.

Konstantin Vinogradov is a London-based general partner at Runa Capital.

—

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth.
The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.

The devops certifications tech companies want

Devops continues to expand in development environments everywhere, from small startups to the largest global enterprises. The worldwide devops market, including products and services, was expected to increase from $10.56 billion in 2023 to $12.8 billion in 2024, according to market research firm Business Research Co. The market will see “exponential growth” in the next few years, the firm predicts, expanding to $29.79 billion in 2028. The growth of the devops market in the forecast period can be attributed to a focus on developer experience, the maturing of continuous integration and continuous delivery (CI/CD) pipelines and tools, more collaboration across development and operations teams, multi-cloud and hybrid cloud strategies, and an increased emphasis on observability.

As devops adoption increases, there is more need for developers with demonstrated expertise in devops culture, tools, and best practices. I asked a variety of experts what hiring managers look for in a devops candidate.

Certifications and the hiring process

“Certifications can serve as a strong differentiator in the hiring process, particularly for early-career professionals or those transitioning into devops roles,” says Iryna Melnyk, a marketing consultant at marketing services firm Jose Angelo Studios. “While practical experience often takes precedence, certifications signal a commitment to learning and validate foundational knowledge,” Melnyk says. “They act as a ‘foot in the door,’ especially for candidates competing for roles in organizations with structured hiring processes.”

Certifications are particularly important in the hiring process when job candidates are early in their careers or transitioning into devops roles, says Rob Stevenson, founder of BackupVault, a provider of cloud-based data backup and protection services. “Certifications signal to employers that a candidate has formal training and hands-on experience with key tools and methodologies like CI/CD pipelines, containerization, and automation,” Stevenson says. For established devops professionals, certifications can validate an individual’s expertise and demonstrate a commitment to staying updated with industry standards, he says.

How much of an impact devops certifications have on the hiring process often depends on the company and role, says Archie Payne, president of technical recruiting firm CalTek Staffing, which frequently works with devops candidates and the companies that hire them. “Some of our client companies hold certifications in very high regard,” Payne says. “I have worked with some that require employees to get certified if they aren’t already when they’re hired, and for these employers candidates who hold certifications are often at the top of their list when we pass along resumes.”

In general, “I would say that certifications are valuable because they prove that you have the skills required for the role,” Payne says. “Many of these credentials require a designated amount of professional experience in addition to passing an exam, so they function as a kind of shorthand to indicate a candidate’s likely suitability for the role.” If a hiring client doesn’t specify its stance on certifications, “I tend to consider these credentials equally alongside work experience and other forms of education like college degrees, and they can make an impact on who I recommend for a role,” Payne says.
The level of consideration for certifications often depends on the maturity of the organization in adopting devops practices, says Vladislav Bilay, devops engineer at software development and consulting firm Aquiva Labs. “In enterprises with structured teams, certifications can act as a gatekeeper for promotion into technical roles,” Bilay says. “On the other hand, for smaller or highly dynamic companies, real-world experience and problem-solving abilities often carry more weight. However, even in these scenarios, certifications can tip the scales in favor of candidates with similar levels of experience because they demonstrate a commitment to continuous learning.”

Benefits of devops certifications

Having one or more devops certifications offers a variety of benefits for technology professionals seeking a role in software development or operations. Certifications often help professionals secure higher-paying roles, says Binod Singh, founder of cybersecurity technology provider Cross Identity. “For instance, a certified devops engineer or site reliability engineer can negotiate better salaries compared to uncertified peers,” he says.

Certifications can also enhance the credibility of professionals in devops. “Certifications like AWS Certified Devops Engineer or Kubernetes certifications are recognized globally,” Singh says. “They showcase expertise in specific tools and methodologies that are in high demand.” Bilay agrees that certifications build credibility, especially for those new to the market or those moving to devops from other fields. “For example, someone with development experience can earn Kubernetes certifications like Certified Kubernetes Administrator (CKA) to demonstrate their operational capabilities in managing container workloads,” he says. “This can bridge the credibility gap with potential employers.”

Certifications can help individuals be better at collaboration, Singh says. “Devops certifications emphasize the cultural aspects of software development, helping professionals improve their collaboration with cross-functional teams,” he says. “For example, certified professionals often excel at facilitating smoother communication between development and operations teams, leading to faster product delivery.”

Another possible advantage of devops certification is professional growth. “Earning certifications fosters a mindset of continuous learning, which aligns perfectly with the iterative nature of devops,” Melnyk says. “They provide a structured learning path, allowing professionals to gain a comprehensive understanding of specific tools or practices,” Bilay says. “For example, someone who earns an AWS Certified Devops Engineer – Professional certification gains a keen understanding of cloud deployment pipelines, container orchestration, and monitoring solutions like Amazon CloudWatch. This knowledge directly translates to better performance in cloud environments.”

Another benefit is career mobility. “I’ve seen professionals leverage certifications such as AWS Certified Devops Engineer to pivot into higher-paying roles or new industries,” Melnyk says. In another case, “a colleague transitioned from system administration to a devops engineering role after earning the Kubernetes Administrator certification.”

The benefits of devops certifications extend beyond career advancement, Stevenson says. “For individuals, certifications build confidence and ensure a strong foundation in devops principles,” he says.
“For example, earning a Terraform certification helps practitioners understand infrastructure as code at a deeper level, which is crucial for scalable and efficient deployments.”

Getting a certification can be useful for professionals who want to advance from entry-level roles into more mid- and senior-level positions, Payne says. “I would also say it’s highly beneficial for those who have established careers in a related domain like software development, and want to transition into devops roles without the need to start from scratch,” Payne says. “Devops is a relatively new discipline and one that’s growing quickly, so there are a lot of career opportunities in the field. I’ve worked with several job seekers recently who were laid off from other tech roles and have struggled to find a new job, and pivoting into an in-demand field like devops can be one way to do so.”

Devops certifications can also provide value to organizations as they look to expand their devops teams. “Certifications reassure employers of a candidate’s grasp of key tools and methodologies like CI/CD, automation, and container orchestration,” Melnyk says. Certifications also ensure that individuals are aligned with best practices, minimizing errors and improving collaboration, Stevenson says. “For instance, a certified professional familiar with Jenkins can optimize CI/CD pipelines, reducing software delivery time,” he says. “For employers, hiring certified professionals ensures competence, leading to smoother project execution” and better return on investment. For hiring managers, “these certifications reduce uncertainty when assessing technical skills, especially in environments where hands-on experience is difficult to assess through interviews alone,” Bilay says.

Nothing like experience

While certifications are valuable, they are not the only determining factor in whether someone will get hired for a job or offered a promotion. Devops certifications “are typically reviewed alongside the candidate’s experience in the evaluation process to gauge their readiness for the role and skills balance,” says Damien Filiatrault, CEO of Scalable Path, a software development staffing agency. “Although such certificates cannot replace actual practice, they can be tremendously useful in a competitive environment, especially when hiring managers are pressed for time in their recruitment efforts,” Filiatrault says.

Employers also weigh problem-solving ability and a candidate’s track record with successful implementations, Stevenson says. “Certifications like AWS Certified Devops Engineer or Kubernetes Administrator might get your resume noticed,” he says. “But demonstrating how you’ve reduced deployment times or streamlined workflows carries more weight during interviews.”

“Hiring managers often see certifications as a way to quickly assess technical competence in specific areas, such as CI/CD pipelines, containerization, or cloud infrastructure,” says Singh, who has worked closely with hiring managers to align technical expertise with business needs. “However, real-world experience combined with certifications carries the most weight during the decision-making process.”

Popular devops certifications

The value of a devops certification stems from the range of skills and platforms covered. The following are among the most popular, according to experts.
AWS Certified Devops Engineer – Professional

Certifications like this one have gained popularity thanks to the ongoing growth of cloud services and the widespread adoption of cloud-native environments. The AWS Certified Devops Engineer – Professional showcases an individual’s technical expertise in provisioning, operating, and managing distributed application systems on the AWS platform, according to Amazon Web Services. The certification is intended for individuals with two or more years of experience.

Certified Kubernetes Administrator (CKA)

This certification is gaining popularity as containerization becomes more central to devops workflows. The CKA was created by the Linux Foundation and the Cloud Native Computing Foundation (CNCF) as part of their ongoing effort to help develop the Kubernetes ecosystem. The purpose of the CKA program is to demonstrate that CKAs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators, according to the CNCF, which is part of the Linux Foundation.

Microsoft Certified Azure Devops Engineer Expert

This certification demonstrates expertise in end-to-end devops practices on Microsoft’s Azure cloud platform. Skills measured include the ability to design and implement processes and communications, a source control strategy, and build and release pipelines, as well as to develop a security and compliance plan and implement an instrumentation strategy. Responsibilities for this role include delivering Microsoft devops solutions that provide continuous security, integration, testing, delivery, deployment, monitoring, and feedback.

Professional Cloud Devops Engineer

This is another certification that is in demand because of the growth of the cloud and cloud-native development environments. Certified Professional Cloud Devops Engineers deploy processes and capabilities throughout the systems development lifecycle using Google-recommended methodologies and tools, according to Google Cloud. They enable efficient software and infrastructure delivery while balancing reliability with delivery speed, the company says.

Oracle refuses to yield JavaScript trademark, Deno Land says

JavaScript runtime provider Deno Land’s efforts to get Oracle to yield the trademark for JavaScript have hit a snag, with Oracle refusing to voluntarily withdraw the trademark, Deno Land said. A Deno Land post on X on January 7 provided an update on the company’s continuing efforts to free up the trademark, which Oracle took ownership of when it purchased Sun Microsystems in 2009: “Oracle has informed us they won’t voluntarily withdraw their trademark on ‘JavaScript.’ Next: they’ll file their answer and we’ll start discovery to show how ‘JavaScript’ is widely recognized as a generic term and not controlled by Oracle.”

Deno Land filed a petition to cancel Oracle’s ownership of the JavaScript trademark with the United States Patent and Trademark Office (USPTO) in late November. Deno Land argued that Oracle had abandoned the trademark and that freeing it up would enable use of the name “JavaScript” without concerns of legal overreach. Deno Land also accused Oracle of committing fraud in its trademark renewal efforts in 2019 by submitting screen captures of the website of the JavaScript runtime Node.js, even though Node.js was not affiliated with Oracle. Oracle could not be reached for comment about the JavaScript trademark battle on January 10.

Deno Land co-founder Ryan Dahl, creator of both the Deno and Node.js runtimes, said a formal answer from Oracle is expected before February 3, unless Oracle extends the deadline again. “After that, we will begin the process of discovery, which is where the real legal work begins. It will be interesting to see how Oracle argues against our claims — genericide, fraud on the USPTO, and non-use of the mark.” The legal process begins with a discovery conference by March 5, with discovery closing by September 1, followed by pretrial disclosure from October 16 to December 15. An optional request for an oral hearing is due by July 8, 2026. The dispute between Oracle and Deno Land could go on for quite a while.

Why JavaScript’s still on top in 2025

Welcome to a new year of programming and the brand new monthly list of JavaScript stories just for developers! Among the highlights so far: Svelte and SvelteKit have seen a slew of incremental improvements, Astro.js 5.0 just hit, and the Next.js team is working on a composable caching mechanism. Meanwhile, we revisit the old “Is JavaScript dead?” canard, respectfully consider TypeScript (a valid replacement if you’re looking), and put the whole thing to bed with a look at two recent developer community surveys. Also, scroll down for a couple of solid tutorials and a deep dive into SEO for web developers.

Top picks for JavaScript readers on InfoWorld

Just say no to JavaScript
Starting the new year with a bang, InfoWorld’s Nick Hodges vents his critiques about the lingua franca of the web—or as he puts it, the assembly language of web browsers. My take? The one thing that keeps developers loving JavaScript is its flexibility, and that’s not going anywhere.

What is TypeScript? Strongly typed JavaScript
Nick might not like JavaScript, but he has plenty of good things to say about TypeScript. If you want JavaScript’s ubiquity with the benefits of strong typing, 2025 might be the year to make your move. Here’s a good overview to get you started.

JavaScript is still number one – JetBrains report
Reports of JavaScript’s demise are greatly exaggerated, and the December 2024 JetBrains report seals it. Get the lowdown on why JavaScript is still the most popular language on the planet.

Intro to Express.js: Endpoints, parameters, and routes
Learn the basics of using the most popular JavaScript server—in fact, one of the most popular servers anywhere. This two-part tutorial demonstrates Express’ simple, direct approach to handling requests and delivering responses. (Part 2 covers templates, data persistence, and forms.)
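As a small taste of what that tutorial covers, here is a minimal Express endpoint with a named route parameter. The route, handler, and port are illustrative choices, and the sketch assumes Node.js with the express package (plus @types/express for TypeScript) installed.

```typescript
// Minimal Express server: one route with a named parameter.
// Assumes: npm install express @types/express
import express from "express";

const app = express();

// GET /greet/:name responds with a small JSON payload.
app.get("/greet/:name", (req, res) => {
  const { name } = req.params;
  res.json({ message: `Hello, ${name}!` });
});

app.listen(3000, () => {
  console.log("Listening on http://localhost:3000");
});
```

A request to /greet/Ada would return {"message":"Hello, Ada!"}, which is the request-in, response-out pattern the tutorial builds on.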
More good reads and JavaScript updates elsewhere

State of JS 2024
The results from the annual festival of self-reflection by and for the JavaScript community have hit the shelves. More framework stability, simpler tooling, and it turns out people still want static typing. It’s all here in the State of JS report.

Run your Next.js SSR app on Deno Deploy
One of the most interesting JavaScript projects is the Deno runtime, which includes a deployment service called Deno Deploy—now supporting server-side rendered Next.js apps.

The must-have SEO checklist for developers for 2025
SEO has become a crucial part of developing web apps, and this update is packed with practical tips for developers who want to get it right.

Ephemeral environments in cloud-native development

An emerging trend in cloud computing is using ephemeral environments for development and testing. Ephemeral environments are temporary, isolated spaces created for specific projects. They allow developers to swiftly spin up an environment, conduct testing, and then dismantle it once the task is complete.
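As a concrete picture of that life cycle, here is a minimal sketch that provisions and tears down a short-lived environment with the Terraform CLI, using one workspace per feature branch. The workspace naming, the BRANCH_NAME variable, and the shell-out approach are illustrative assumptions, not a recommended production pattern.

```typescript
// Spin up, use, and destroy an ephemeral environment via the Terraform CLI.
// Assumes terraform is on PATH and the working directory holds a valid config.
import { execSync } from "node:child_process";

function run(cmd: string): void {
  console.log(`$ ${cmd}`);
  execSync(cmd, { stdio: "inherit" });
}

const branch = process.env.BRANCH_NAME ?? "feature-demo"; // hypothetical env var
const workspace = `ephemeral-${branch}`;

// Create (or reuse) an isolated workspace and provision it.
run(`terraform workspace new ${workspace} || terraform workspace select ${workspace}`);
run("terraform apply -auto-approve");

try {
  // ... run integration tests against the freshly provisioned environment ...
} finally {
  // Tear everything down so nothing lingers and accrues cost.
  run("terraform destroy -auto-approve");
  run("terraform workspace select default");
  run(`terraform workspace delete ${workspace}`);
}
```

The try/finally teardown is the part that matters: without disciplined automation like this, "temporary" environments tend to outlive their purpose, which is exactly the governance risk discussed below.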
Although proponents tout the benefits of this model—flexibility, cost savings, and improved efficiency—it’s essential to take a step back and critically evaluate its pragmatic value for enterprises. This is especially important at a time when the hype surrounding cloud-native technologies often clouds judgment, causing organizations to act impulsively.

Cloud-native approaches have significantly transformed how enterprises operate, delivering benefits and challenges. On the positive side, cloud-native approaches enable greater scalability and flexibility, allowing organizations to rapidly deploy applications and efficiently manage resources through microservices and containerization. This results in faster innovation cycles and improved time to market, fostering a culture of agility and responsiveness to market demands.

However, there are notable downsides. Transitioning to cloud-native environments can complicate systems management and integrations, particularly for established enterprises with legacy systems. Cloud-native skills are in high demand but not widely available. Additionally, reliance on continuous integration/continuous delivery (CI/CD) practices can strain inadequately automated teams, leading to potential bottlenecks and an increased risk of deployment errors. Furthermore, the ephemeral nature of cloud-native environments can result in resource mismanagement if they’re not carefully governed, which can lead to unforeseen costs. Cloud-native strategies offer enhanced capabilities and competitive advantages, but careful planning and a robust operational framework are essential to mitigate their inherent challenges.

The allure of ephemeral environments

At first, ephemeral environments sound ideal. The capacity for rapid provisioning aligns seamlessly with modern agile development philosophies. However, deploying these spaces is fraught with complexities that require thorough consideration before wholeheartedly embracing them. Advocates of ephemeral environments often emphasize their cost-effectiveness compared to traditional shared development settings. They argue that organizations can reduce waste because resources are allocated only when needed. Theoretically, this could translate into substantial savings, particularly for large enterprises managing numerous microservices.

However, the devil is in the details. The initial setup and ongoing management of ephemeral environments can still incur considerable costs, especially in organizations that lack effective automation practices. If one must spend significant time and resources establishing these environments and maintaining their life cycle, the expected savings can quickly diminish. Automation isn’t merely a buzzword; it requires investment in tools, training, and sometimes a cultural shift within the organization. Many enterprises may still be tethered to operational costs that can potentially undermine the presumed benefits. This seems to be a systemic issue with cloud-native anything.

The challenge of integration

Integrating ephemeral environments into existing workflows can become another hurdle. Many teams are accustomed to traditional workflows that offer visibility and control over the development phases. If not managed correctly, transitioning to a model where environments are rapidly spun up and dismantled can lead to confusion and fragmentation. Moreover, if a company lacks robust infrastructure-as-code strategies, initiating the necessary automation can become a bottleneck. The transition doesn’t just shift paradigms; it demands that developers, operations teams, and security personnel adjust their systems and methodologies. Organizations need to ask themselves: Do we have the ability and the resources to make such a shift effectively? Will we be left grappling with chaos? In many cases that I’ve seen, the latter is more likely.

Another frequently overlooked aspect of ephemeral environments is their impact on quality assurance (QA) processes. The notion that temporary environments will streamline testing overlooks a crucial reality: It’s not just about the data. QA in software development requires consistent methodology and experience. The risk of reverting to a laissez-faire approach to testing is a significant concern. In essence, if using ephemeral environments encourages a disregard for best practices, it could compromise quality and ultimately harm the organization’s reputation and bottom line.

Balancing innovation with caution

As organizations consider adopting ephemeral environments in their cloud-native journey, they must maintain a critical eye. The allure of instant flexibility and cost savings can be tempting, but the practical implications demand careful analysis and, dare I say it, a business case. The success of ephemeral environments hinges on the technology itself and an organization’s ability to embrace the necessary operational shifts. This usually means a much more significant investment in automation. The promise of ephemerality is potent but should not blind enterprises to its complexities and risks.

Ultimately, cloud-native transformation should prioritize a balanced approach that melds innovation with pragmatic considerations. This advice is for any endeavors that result in “all-cloud-native,” the magic phrase developers and architects use to get their way in meetings. The key to success in this arena is to ensure that the move toward ephemeral environments aligns with the company’s broader strategic goals. Enterprises can only reap the benefits of this evolving landscape if they do not fall prey to unbridled enthusiasm. Please take a step back, take a deep breath, and ensure that the promised land is truly attainable.

Cohere goes ‘North’ with agentic AI

Large language model (LLM) provider Cohere has unveiled its new agentic AI offering, North, a low-code platform that will allow enterprises to build and deploy agents across different business functions. The offering will compete with Microsoft’s autonomous agents, Google’s Vertex AI agents, and Salesforce’s Agentforce. Currently offered only as part of an early access program, North will allow enterprises to build agents that help find relevant information across global knowledge repositories in multiple languages, conduct research and analysis, and perform complex tasks spanning various lines of business. “This includes agents for core business functions like HR, finance, customer support, and IT that allow teams to execute faster and achieve more,” Cohere said in a statement.

The tech stack behind North

One of the core tenets of North is Cohere’s proprietary multimodal AI search and discovery framework, Compass. Compass combines retrieval models such as Embed and Rerank, document parsing abilities, and a managed index. While the retrieval models help with RAG applications, the document parsing ability pre-processes documents and supports PDF, PPT, DOCX, and XLSX formats. The ability to support different formats is key, according to Aidan Gomez, CEO and co-founder of Cohere, because extracting information from fragmented data is essential for any AI system that needs to surface insights for employees or end users to act on. The managed index inside Compass works like a managed service, maintaining the index or vector database to improve performance and reduce latency, the company said.
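To illustrate the retrieve-then-rerank pattern that a retrieval stack like Compass combines, here is a minimal sketch over a tiny in-memory corpus. The two stages are stubbed with toy word-overlap scoring so the example runs end to end; none of this is Cohere's actual API, and the corpus, query, and scoring are invented for illustration.

```typescript
// Retrieve-then-rerank, sketched with toy stand-ins for real models.

type Doc = { id: string; text: string };

const corpus: Doc[] = [
  { id: "hr-1", text: "Employees accrue vacation days monthly." },
  { id: "it-1", text: "Reset your password through the IT self-service portal." },
  { id: "fin-1", text: "Expense reports are due by the fifth business day." },
];

const words = (s: string): string[] => s.toLowerCase().match(/[a-z]+/g) ?? [];

// Stage 1 (toy embedding + index): score every document by shared words
// and keep a broad candidate set.
function vectorSearch(query: string, k: number): Doc[] {
  const q = new Set(words(query));
  return [...corpus]
    .map((doc) => ({ doc, score: words(doc.text).filter((w) => q.has(w)).length }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((s) => s.doc);
}

// Stage 2 (toy reranker): rescore candidates more carefully, here breaking
// overlap ties in favor of shorter documents, and keep only the best.
function rerank(query: string, candidates: Doc[], n: number): Doc[] {
  const q = new Set(words(query));
  return [...candidates]
    .map((doc) => ({
      doc,
      score: words(doc.text).filter((w) => q.has(w)).length - doc.text.length / 1000,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, n)
    .map((s) => s.doc);
}

const query = "how do I reset my password";
console.log(rerank(query, vectorSearch(query, 2), 1)); // -> the IT portal doc
```

In a real stack the first stage would be an embedding model plus a managed vector index, and the second a dedicated rerank model; the point of the sketch is only how the broad-then-precise stages fit together.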
Cohere said North will come with a security system that can incorporate an enterprise’s identity and access management rules and can be deployed in a private cloud or on-premises. This, according to Gomez, was one of the prominent demands among customer enterprises, which want to ensure that sensitive data is not leaked, making the offering “well-suited for regulated industries where companies simply cannot risk their proprietary data.” Among the early customers of North is the Royal Bank of Canada, which is co-developing a North for Banking offering with Cohere.

Focus on ease of building and deploying agents

Cohere is positioning North around its ease of use for building and deploying agents, a growing concern among developers who are trying to build applications underpinned by generative AI. A survey conducted by IBM involving developers across at least 1,000 enterprises revealed that though almost everyone is exploring how to use agents in their workflows, at least 31% are concerned about their trustworthiness. Nearly 23% and 22% are concerned about cybersecurity threats and agents losing visibility into systems, respectively, according to the survey.

Cohere isn’t the only one focusing on the ease of use of agentic platforms. In December, Salesforce unveiled the next generation of its low-code agentic platform, Agentforce 2.0, with an updated reasoning engine that offers the ability to build agents using natural language, and new agentic skills that can perform more tasks without user intervention. Another major upgrade was Agentforce’s integration with MuleSoft, designed to help enterprises reduce the time and complexity of building a new custom agent and integrating it into a workflow.

In the same vein, Cohere touts that North, which combines its proprietary AI search, LLMs, and agents, can be integrated “seamlessly” into any existing workflow out of the box. “AI agents created with North can quickly and easily connect to the workplace tools and applications that employees regularly use,” Gomez wrote in a blog post, adding that North can also be integrated with in-house applications. However, he didn’t flesh out how exactly the entire integration process works. Microsoft, too, upgraded Copilot Studio, its agent-building platform, with the ability to allow enterprises to connect agents to third-party applications such as Salesforce, ServiceNow, and Zendesk.