Editors Desk


4 key capabilities of Kong’s Event Gateway for real-time event streams

Event-driven architecture (EDA) has long been a foundational piece of scalable, real-time systems, powering everything from payment processing and fraud detection to IoT, logistics, and AI-powered applications. Today, with AI, the value of streaming data is more apparent than ever. Despite this, the operational complexity of managing platforms like Apache Kafka has often slowed adoption or limited its reach.

Kong’s new Event Gateway is designed to simplify and secure the adoption of event-driven architecture by giving platform teams and developers the ability to expose Kafka as Kong-managed APIs and services in the same place where you’d use Kong to manage API, AI, and service mesh deployments. This brings all of the benefits of your Konnect API platform to the event streaming universe, using the Event Gateway and Konnect platform to enforce policies for security, governance, and cost control across your entire API, AI, and event streaming estate. Here are four ways the Event Gateway helps organizations unlock the full potential of their Kafka investments.

Manage Kafka event streams the same way you manage your APIs

Event Gateway enables platform teams to expose Kafka topics as HTTP APIs (such as REST or server-sent events APIs) or as services that communicate over the native Kafka protocol. Whichever route you choose, you can use Kong plugins and policies to bring the same level of security, reliability, and governance to your Kafka estate as you would to your Kong-managed API and AI estates.

Reduce Kafka infrastructure costs with virtual clusters and topics

Kafka is great at transmitting business-critical data at massive scale without sacrificing performance, which is essential for any real-time initiative. However, Kafka can present challenges around client isolation and access control to data at the event level.
Today, Kafka often requires infrastructure and platform teams to implement (and pay for) duplicate topics, partitions, and data to effectively segment subsets of data. Kong can reduce overall Kafka-related infrastructure costs through virtual clusters and topics. Kong’s virtual clusters and concentrated topics allow for scalable, efficient logical isolation, all managed by Kong, which costs much less to run, ultimately driving greater Kafka cost efficiency as you expand the footprint of your eventing platform.

Strengthen your cloud security posture

Moving to cloud and vendor-managed versions of Kafka enables EDA teams to focus on building event-driven architectures without the operational burden of managing Kafka infrastructure. However, while the value of the cloud is clear, many organizations still have concerns about PII (personally identifiable information) and other sensitive data running through vendor cloud environments. The Kong Event Gateway can help by enforcing encryption at the gateway layer, within your private network, so that data in your cloud environment is encrypted.

Turn event streams into real-time data products

With Event Gateway, organizations can expose Kafka event streams as structured, reusable API data products. This protocol mediation opens up the value of real-time data in Kafka to developers and customers that don’t want to, or can’t, set up their applications as Kafka clients. Kong Event Gateway customers will be able to expose this real-time data as REST APIs and server-sent events APIs, meeting developers, partners, and customers where they are. Event Gateway will be available as part of Kong Konnect, the unified API platform built to help organizations power API-driven innovation at scale with performance, security, and governance across all service types.
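One appeal of the server-sent events route is that consumers need only a plain HTTP client rather than a Kafka client library. A minimal sketch of the consuming side illustrates the idea; the endpoint URL and event payload shape below are hypothetical, not documented Kong APIs:

```python
import json

def parse_sse_stream(lines):
    """Parse server-sent-event lines into decoded JSON payloads.

    Accepts any iterable of text lines (for example, a streaming
    HTTP response) and yields the JSON body of each `data:` event.
    """
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            yield json.loads(line[len("data:"):].strip())

# Illustrative usage against a hypothetical gateway-managed SSE route:
#
# import requests
# resp = requests.get("https://gateway.example.com/streams/orders",
#                     stream=True,
#                     headers={"Accept": "text/event-stream"})
# for event in parse_sse_stream(resp.iter_lines(decode_unicode=True)):
#     print(event)
```

Behind such an endpoint, the gateway would be the actual Kafka consumer; the application sees only a long-lived HTTP stream.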
With Event Gateway, Kong supports the entire API life cycle with one end-to-end platform, enabling teams to discover, observe, and govern event APIs alongside large language model (LLM) APIs, REST APIs, and any other supported service type in the Kong platform. As organizations shift toward real-time, API-first architectures, Kong helps them manage that transformation securely and efficiently. By bringing event streams into your API platform, Event Gateway allows teams to move faster, reduce operational burden, and build more responsive, data-driven applications. For a full list of features and documentation, visit the Kong Event Gateway product page.

Marco Palladino is the CTO and co-founder of Kong.

—

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.

Google to unveil AI agent for developers at I/O, expand Gemini integration

Google is expected to unveil a new AI agent aimed at helping software developers manage tasks across the coding lifecycle, including task execution and documentation. The tool has reportedly been demonstrated to employees and select external developers ahead of the company’s annual I/O conference, scheduled for May 20 in Mountain View, California, according to a report by The Information.

Agentic mesh: The future of enterprise agent ecosystems

Every week, a new AI agent platform is announced, each promising to revolutionize how work gets done. The vision is compelling: simply task an AI agent with a job, and it will autonomously plan, execute, and deliver flawless results. Industry leaders are leaning into this vision. Nvidia CEO Jensen Huang predicts we’ll soon see “a couple of hundred million digital agents” inside the enterprise. Microsoft CEO Satya Nadella takes it even further: “Agents will replace all software.”

But the reality is far more complex. To fulfill this vision, agents must evolve from their current immature state to address the needs of the modern enterprise. If enterprises want to use AI agents at scale, they must architect them in an “enterprise-grade” way that controls and reduces errors while ensuring reliability, security, and observability from the ground up. In addition, these enterprise agents must run in an “enterprise-grade” ecosystem that lets agents safely and securely find each other, collaborate, interact, and even transact. Simply put, we need an “agentic mesh”: a network of enterprise-grade AI agents running in an enterprise-ready ecosystem. This article explores what it takes to build such agents and the infrastructure needed to support them at scale.

Why enterprises need AI agents

Enterprises need to move faster, but they are slowed by manual workflows and fragmented systems. AI agents offer a new approach: software that can determine how to execute tasks. When built for enterprise use, agents can help address core challenges, including:

Reducing information overload. Employees spend a significant portion of their time searching for information. Agents can surface insights proactively, eliminating the need for manual searches.

Driving efficiency and scalability. Agents can automate multi-step processes, scaling operations without scaling head count.

Enhancing customer engagement.
By combining real-time insights with historical context, agents can deliver personalized experiences across channels.

Accelerating innovation. Agents offload repetitive tasks, freeing humans for more strategic work.

AI agents address rising enterprise complexity, but they aren’t just chatbots or demos. To deliver real value, agents must be built for the enterprise, with reliability, visibility, and security from the start.

The problem: Most agents aren’t built for the enterprise

Many companies describe agents as “science experiments” that never leave the lab. Others complain about suffering the pain of “a thousand proof-of-concepts” with agents. The root cause of this pain? Most agents today aren’t designed to meet enterprise-grade standards. Agents often:

Start as prototypes in notebooks or large language model (LLM) sandboxes, which are great for demos but not for deployment.

Are deployed as a single Python “main” running in a single operating system process, which is practical for only the smallest loads.

Lack the observability, traceability, and access control essential for operating in real-world systems.

Operate in silos, with no standard way to interact with other agents, services, or teams.

Push too much decision-making onto the model itself, trusting a stochastic system to get it right every time. The more we ask LLMs to do, the less accurate and repeatable they become.

The result is a fragile foundation, useful in isolated scenarios but brittle at scale. To truly harness the power of agents, enterprises must treat them like first-class components in their software architecture. That means securing them, governing them, instrumenting them, and embedding them into robust infrastructure.

The danger of agent silos

As enterprises adopt more agents, a familiar problem is emerging: silos. Different teams deploy agents in CRMs, data warehouses, or knowledge systems, but these agents operate independently, with no awareness of each other.
When agents don’t share context, the result is duplicated effort and missed insights. For example, a CRM agent might recommend a sales action without knowing that the data warehouse agent has identified a relevant market trend. Each agent works with partial information, and teams end up building overlapping functionality. This isn’t just inefficient; it undermines trust in the system. If agents can’t coordinate, they can’t support high-stakes or cross-functional use cases. Agents need a way to discover each other, share information, and coordinate actions. But without a common framework, every new agent increases complexity instead of value. What’s needed is a foundation that allows agents to operate as part of a larger system, not just as standalone tools. That’s where the agentic mesh comes in.

Agentic mesh: An enterprise-grade agent ecosystem

An agentic mesh is a way to turn fragmented agents into a connected, reliable ecosystem. But it does more: It lets enterprise-grade agents operate in an enterprise-grade agent ecosystem, allowing agents to find each other and to safely and securely collaborate, interact, and even transact. The agentic mesh is a unified runtime, control plane, and trust framework that makes enterprise-grade agent ecosystems possible.

[Figure: Foundational components of the enterprise-grade agent ecosystem. Source: Confluent]

The agentic mesh fulfills two major architectural goals: It lets you build enterprise-grade agents, and it gives you an enterprise-grade run-time environment to support those agents. To support secure, scalable, and collaborative agents, an agentic mesh needs a set of foundational components. These capabilities ensure that agents don’t just run, but run in a way that meets enterprise requirements for control, trust, and performance. The agentic mesh components include the following:

Marketplace: A central place where agents can be discovered, evaluated, and deployed.
Teams can find prebuilt agents or publish their own, enabling reuse and reducing duplicated effort.

Registry: A system that enables agents to register, authenticate, and discover each other. This allows agents to collaborate based on defined roles, capabilities, and permissions, without custom integrations.

Observability and governance: Tools and standards for ensuring security, traceability, and policy enforcement. This includes logging, metrics, access controls, and certifications, all critical for auditability and operational support.

Communication and orchestration: Agents need to coordinate workflows, not just act alone. The mesh supports task planning and delegation across multiple agents, backed by specialized LLMs and deterministic execution engines to improve reliability and reduce error rates.

[Figure: High-level information flow for an enterprise-grade agent ecosystem. Source: Confluent]

Additional components not shown include the interaction manager, which handles both human-agent and agent-agent communication through APIs, protocols, and chat interfaces, and the creator workbench, which provides the tools and scaffolding needed to design, test, and publish production-grade agents aligned with enterprise standards. Together, these capabilities turn a collection of isolated agents into a cohesive, governable system, ready for enterprise scale.

Agentic mesh: Toward enterprise-grade agents

Enterprise-grade agents must meet a high standard, one that aligns with how modern infrastructure is monitored, governed, and secured. An enterprise-grade agent is not just intelligent; it’s manageable, predictable, and safe to deploy across business-critical systems. Achieving all of that requires the following key attributes:

Discoverability: Agents must be easy to find, whether by users or other agents. Each one is registered with a unique identity, metadata, and clear documentation.

Security: Agents must use strong authentication and authorization, such as mTLS and OAuth2.
Access is governed by zero-trust policies, where agents only interact with tools and collaborators explicitly defined in their configuration.

Observability and operability: Every agent emits metrics, alerts, and logs that can be integrated into existing enterprise monitoring and operations platforms. This enables real-time visibility and incident response.

Reliability: Enterprise agents must be designed to minimize failures. This means avoiding over-reliance on unpredictable LLM behavior and ensuring task execution is deterministic where possible.

Scalability: Enterprise agents must be able to scale easily at run time to handle expected and peak loads. However, they must also be scalable from a development perspective, allowing developers to build agents easily and quickly, and they must scale operationally by fitting into an enterprise’s operational environment.

Trust: Agents are certified before use. Certifications, automated or manual, are recorded and published for visibility and governance.

Traceability and explainability: Every action an agent takes is logged, along with the reasoning behind it. This allows teams to trace outcomes back to decisions and inputs, supporting both diagnostics and compliance.

Collaboration: Agents don’t operate in isolation. They are built to work with other agents and tools in a distributed environment, sharing context and delegating tasks when needed.

When agents meet these standards, they can be safely integrated into enterprise systems and processes. But to get there, they need infrastructure that supports these capabilities by default. That’s what the agentic mesh provides, and it’s the foundation for scaling agent adoption across the enterprise.

Technical foundations of the agentic mesh

Enterprise-grade agents need to fit into modern software infrastructure.
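The registry idea described above, in which agents register capabilities and then discover collaborators by capability, can be sketched in a few lines. This is a toy in-memory model for illustration only, not Confluent’s implementation; class and method names are assumptions:

```python
class AgentRegistry:
    """Toy registry: agents register an identity plus capabilities,
    and collaborators are discovered by capability name."""

    def __init__(self):
        self._agents = {}

    def register(self, name, capabilities):
        # Record the agent and the set of capabilities it advertises.
        self._agents[name] = set(capabilities)

    def discover(self, capability):
        # Return the names of all agents advertising this capability.
        return sorted(name for name, caps in self._agents.items()
                      if capability in caps)


registry = AgentRegistry()
registry.register("crm-agent", ["lead-scoring"])
registry.register("dw-agent", ["trend-analysis", "lead-scoring"])
print(registry.discover("lead-scoring"))
```

A production registry would add authentication, permissions, and health checks on top of this lookup, but the discovery contract stays the same: capabilities in, qualified collaborators out.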
The agentic mesh builds on well-established patterns, particularly microservices, event-driven architecture, stream processing, and zero-trust security, so that agents can be deployed, observed, and managed using familiar tools and workflows.

[Figure: Technical architecture for an enterprise-grade agent. Source: Confluent]

Agents are smart microservices

Agents are microservices with an LLM brain, effectively “smart” microservices. Microservices give agents a strong operational foundation. They support enterprise-grade security standards like mTLS and OAuth2, enable reliable execution within platforms like Kubernetes, and can be easily deployed using Docker and CI/CD pipelines. Because they conform to standard observability patterns, agents can be monitored and operated using existing tools like Prometheus, OpenTelemetry, and Splunk, making them manageable within established enterprise workflows.

Agents are autonomous and tool-driven

Each agent is equipped with one or more language models and a set of tools it can invoke. Agents dynamically generate task plans based on user input and available capabilities, then execute them step by step, coordinating tools, calling APIs, and, when needed, collaborating with other agents.

Agents orchestrate conversations

Enterprise use cases often involve long-running interactions between many agents; we call these conversations. Conversations can span time frames of milliseconds, minutes, days, or longer. This means that hand-offs between agents, or between people and agents, must not only tolerate failures but gracefully allow additional human feedback when an agent requires it.

Agents are stateful

Agents are designed to maintain and manage conversational state. This allows them to track context across multiple steps or sessions, as well as restart conversations after failures.

Agents are asynchronous

Agents are inherently asynchronous. A helpful way to understand agents is to compare them to how humans communicate.
While we often engage in request-response interactions for immediate feedback, we also rely on asynchronous communication, like email or text messages, where a response might come much later. We accept that delay as part of how coordination works. Agents operate the same way. Because agents may wait on tools or delegate tasks, they are designed to operate asynchronously.

Agents are event-driven

Event-driven architecture supports how agents operate. Instead of relying on rigid point-to-point integrations, agents must be able to discover, communicate with, and subscribe to each other dynamically. Through technologies like Apache Kafka and Apache Flink, which support scalable, decoupled communication, agents tap into shared event streams, subscribe to topics, react to new data, and publish outputs in real time.

Zero-trust governance

Agents operate under strict policies that define which data, tools, and collaborators they’re allowed to interact with. Access is explicitly declared and enforced through the mesh. This prevents unauthorized actions and ensures compliance with enterprise security standards.

The future of interoperable AI

The next phase of agents in the enterprise won’t be defined by how many agents are deployed, but by how well those agents are built and managed. To deliver real business value, agents must be enterprise-grade: secure, observable, reliable, and designed to work as part of a broader system. That requires more than good prompts or clever workflows. It demands an architecture that supports governance, coordination, and control at scale. The agentic mesh provides the foundation for that architecture, making it possible to move from experimental prototypes to production-ready systems. The future of enterprise AI lies in building agents you can trust, integrate, and scale.

Sean Falconer is the AI entrepreneur in residence at Confluent. Eric Broda is an executive consultant and AI architect at IBM.
— Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.

How to use genAI for requirements gathering and agile user stories

Generative AI is driving a significant paradigm shift in how developers write code, develop software applications, reduce technical debt, and improve quality. GenAI isn’t just writing code; there are opportunities for the entire agile development team to use LLMs, AI agents, and other genAI capabilities to deliver improvements across the software development lifecycle. Improving requirements gathering and the quality of agile user stories is becoming a significant opportunity for generative AI. As developers write code faster and more efficiently, deeper agile backlogs with more user stories and stronger acceptance criteria are needed.

How AI copilots increase developer productivity

Developers use genAI to transform software development, including generating code, performing code reviews, and addressing production issues. BairesDev reports that 72% of developers are now using genAI capabilities, and 48% use genAI tools daily. AI copilots and genAI code generators are impacting productivity significantly. A recent report based on field experiments with software developers at Microsoft and Accenture found that using a coding assistant caused a 26% increase in the weekly number of completed tasks, a 13% increase in the number of code commits, and a 38% increase in the number of times code was compiled. Developers are also reporting productivity impacts. According to DORA’s 2024 State of DevOps Report, more than one-third of respondents described their observed productivity increases as either moderate (25%) or extreme (10%) in magnitude.

Why requirements gathering is the new bottleneck

As agile development teams become more proficient with code generators, the velocity and quality of requirements gathering and agile user stories must increase. Additionally, the structure of agile user stories and the completeness of their acceptance criteria have become more important as developers use them to prompt AI agents to develop, test, and document code.
“In a world where copilots are writing code, planning will take on a much more important role, and the requirements documents must become more detailed than the days when teams sat together in the same room,” says David Brooks, SVP of evangelism at Copado. “Business analysts will use genAI to summarize feature requests and meeting transcripts to capture all of the inputs and help prioritize based on the level of need. GenAI can then write the first draft or review the human-written draft for completeness to ensure that it aligns with the company’s format.”

The key to success is engaging end users and stakeholders in developing the goals and requirements around features and user stories. This engagement must go beyond the usual responsibilities of agile product owners; software developers should engage stakeholders in understanding objectives, discussing risks, and devising experiments.

How genAI improves requirements gathering

Chris Mahl, CEO of Pryon, says genAI is reshaping requirements gathering from a documentation exercise into a collaborative discovery process. “Product owners now use AI to generate initial requirement drafts from stakeholder interviews, then refine them through feedback cycles. The business analyst role is evolving from documentation specialist to AI orchestrator, and success requires proficiency in prompt engineering and framing business problems to elicit optimal AI responses.”

The business analyst partners with the agile product owner and team lead to oversee the end-to-end requirements process. Business analysts are especially valuable for more technical agile teams working on microservices, integrations, and data pipelines. User stories for these technical deliverables have significant non-functional acceptance criteria, and testing often requires building synthetic test data to validate many use cases. Mahl adds, “The technology excels at translating business needs into technical specifications and vice versa, bridging communication gaps.
Critical thinking becomes essential as analysts must validate AI-generated content for accuracy and business alignment.”

Critical thinking is a crucial skill set to develop as more requirements, code, and tests are produced using genAI tools. Agile developers must learn how to ask questions, include the most important details in prompts, and validate the completeness and accuracy of genAI responses. Business analysts and product owners have new tools to accelerate translating conversations, brainstorming sessions, and other meeting notes into ideas, epics, and features. Tameem Hourani, principal at RapDev, says, “By joining conference calls, analyzing them, summarizing them, and extracting takeaways from them, you can suddenly groom backlogs for epics of all sizes.”

How genAI supports rapid prototyping and faster delivery

A second opportunity for agile development teams is to use genAI to reduce cycle times, especially around proofs of concept and iterating on end-user experiences. GenAI should help agile teams incorporate more design thinking practices and increase feedback cycles. “GenAI tools are fundamentally shifting the role of product owners and business analysts by enabling them to prototype and iterate on requirements directly within their IDEs rapidly,” says Simon Margolis, associate CTO at SADA. “This allows for more dynamic collaboration with stakeholders, as they can visualize and refine user stories and acceptance criteria in real time. Instead of being bogged down in documentation, they can focus on strategic alignment and faster delivery, with AI handling the technical translation.”

One opportunity is in low-code platforms that can generate applications from genAI prompts. Platforms like Adobe, Appian, Pega, Quickbase, and SAP use genAI tools to accelerate the prototyping and development of apps and agents.
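The prompt-construction work that analysts take on can be sketched as a simple template function. This is purely illustrative; the story format, function name, and instruction wording are assumptions, not a prescribed practice from the article:

```python
def build_story_prompt(role, goal, benefit, notes):
    """Assemble an LLM prompt that drafts an agile user story
    with acceptance criteria from raw meeting notes."""
    story = f"As a {role}, I want {goal} so that {benefit}."
    return (
        "Draft an agile user story with acceptance criteria.\n"
        f"Story: {story}\n"
        "Meeting notes:\n"
        + "\n".join(f"- {note}" for note in notes)
        + "\nReturn 3 to 5 testable acceptance criteria."
    )


prompt = build_story_prompt(
    "shopper",
    "to save my cart",
    "I can resume later",
    ["carts expire after 30 days", "guests must log in to save"],
)
print(prompt)
```

The value of a template like this is consistency: every draft request carries the same story structure and the stakeholder inputs verbatim, which makes the AI-generated output easier to validate for accuracy and business alignment.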
Use genAI tools to focus more time on human innovation

Product owners and business analysts have more significant roles than grooming backlogs and documenting requirements. Their strategic importance lies in promoting innovations that matter to end users, delivering business value, and creating competitive advantages. They must also adhere to non-negotiable devops standards, steer agile development teams toward developing platform capabilities, and look for ways to address technical debt.

“GenAI excels at aligning user stories and acceptance criteria with predefined specs and design guidelines, but the original spark of creativity still comes from humans,” says Ramprakash Ramamoorthy, director of AI research at ManageEngine. “Analysts and product owners should use genAI as a foundational tool rather than relying on it entirely, freeing themselves to explore new ideas and broaden their thinking. The real value lies in experts leveraging AI’s consistency to ground their work, freeing them to innovate and refine the subtleties that machines cannot grasp.”

GenAI can enable transformational capabilities when organizations look beyond productivity drivers. Agile development teams should use genAI to accelerate and improve requirements gathering and the writing of agile user stories. GenAI provides the opportunity to streamline these tasks and improve quality, leaving more time for product owners and business analysts to focus on where technology provides lasting value to their organizations.

What ‘cloud first’ can teach us about ‘AI first’

In the early 2010s, enterprises enthusiastically embraced the “cloud first” ethos. Between 2010 and 2016, businesses aggressively migrated applications and data to the public cloud, spurred on by promises of lower costs, greater efficiency, and unbeatable scalability. However, this movement quickly revealed significant shortcomings. Many organizations transferred applications and workloads to the cloud without comprehensive planning, failing to account for long-term financial implications, data complexities, and performance requirements. Today, we’re witnessing enterprises repatriate workloads back to on-premises or hybrid environments due to unexpected costs and mismatched capabilities.

Like the cloud-first frenzy, enterprises are now barreling toward the next big wave: the “AI first” mandate. This rush to implement artificial intelligence technologies without a disciplined, strategic framework is eerily familiar. If history is any indication, failing to plan carefully will again lead to substantial mistakes, wasted budgets, and underwhelming results.

The cloud-first cautionary tale

The drawbacks of the cloud-first movement weren’t immediately apparent. In theory, moving workloads to the public cloud seemed like an ideal solution to outdated infrastructure, with the added promise of cost savings. However, these migrations were often driven by FOMO (fear of missing out) rather than practicality. Organizations moved applications and data without optimizing them for public cloud platforms, overlooking aspects like workload performance, governance, and comprehensive cost analysis. Years later, many companies discovered that hosting these workloads in the cloud was far more expensive than initially anticipated. Costs ballooned due to unoptimized architectures, excessive egress fees, and persistent underestimation of cloud pricing models.
That lesson is now being painfully remedied by a return to hybrid or completely on-premises systems, though not without significant cost and effort. What went wrong during the cloud-first boom wasn’t just flawed execution but a fundamental lack of strategic planning. Instead of understanding which workloads would genuinely benefit from the cloud and optimizing them for that environment, enterprises treated cloud adoption as a blanket mandate. As businesses face the AI-first mandate, they do so under similar circumstances: enticing technology, unclear benefits, and an overwhelming urgency to act.

Is AI the right tool?

AI is undeniably transformative. It has the potential to enhance decision-making, automate processes, and open up unprecedented business opportunities. However, companies are indiscriminately layering AI into systems and methods without carefully evaluating its suitability or ROI. Enterprises may use AI to solve problems it isn’t well suited to solve, or deploy it at scales that far outstrip the ability of their infrastructure to support it. Worse yet, some organizations are tackling AI projects without fully understanding their costs or the data complexities involved, especially in view of new data privacy and ethics regulations. Much like the cloud rush, companies risk building expensive, poorly optimized AI systems that deliver little value or introduce risk. The AI-first movement feels uncomfortably reminiscent of the early days of cloud computing.

If there’s one lesson to learn from the cloud-first era, it is that strategic planning is the backbone of successful technology adoption. Before adopting AI to keep up with competitors, organizations should assess their unique business goals and determine whether AI is truly the right solution. Not every business problem needs AI. Leaders should ask hard questions: What specific outcomes are we trying to achieve with AI? Are there simpler, more cost-effective solutions available? How will success be measured?
Many of my clients are taken aback when I raise these questions, which is a bit concerning. I’m there as an AI consultant; I could easily keep my mouth shut and collect my fees. I suspect other AI architects are doing just that. Enterprises need to realize that the misuse of this technology can cost five to seven times more than traditional application development, deployment, and operations. Some businesses will likely make business-ending mistakes. However, these questions are fundamental to the problems to be solved and the value of the solutions we leverage, whether AI or not.

The elements of a successful plan

Rather than embarking on large-scale AI implementations, start with smaller, controlled pilot projects tailored to well-scoped use cases. Such projects let you evaluate effectiveness, model costs, and identify potential risks. AI technology is evolving rapidly, and deploying today’s cutting-edge models or tools doesn’t guarantee long-term relevance. Enterprises should build adaptable, modular systems that can grow with the technology landscape and remain cost-effective over time. As you plan a pilot project, keep in mind the following:

Prepare your data. AI systems are only as good as the data they rely on. Many enterprises hastily jump into AI initiatives without first evaluating their data repositories. Key data-readiness steps include ensuring data accuracy, consistency, and quality. Finally, build pipelines that ensure AI systems can efficiently access and process the data they need.

Be realistic. Like cloud services, AI can have hidden costs, from computing resources to training on large data sets. Enterprises need to analyze the total cost of ownership and the feasibility of deploying AI systems based on current resources and infrastructure rather than relying on optimistic assumptions.

Acquire the skills. Throwing tools at a problem doesn’t guarantee success.
AI requires knowledgeable teams with the skills to design, implement, and monitor advanced systems. Enterprises should invest in upskilling workers, create cross-functional AI teams, and hire experts who can bridge the gap between business needs and AI capabilities. Implement governance. AI introduces ethical, security, and operational risks. Organizations need to establish clear structures to monitor AI system performance and mitigate risks. If AI involves sensitive data, you’ll need to establish governance standards for data privacy and compliance. Ensure transparency around how AI makes decisions, and prevent overuse or misuse of AI technology. The AI-first movement holds enormous promise, but enthusiasm puts us at risk of repeating the costly mistakes of the cloud-first era. With AI, the lesson is clear: Decision-makers must avoid knee-jerk reactions and focus on long-term success through careful strategy, planning, and disciplined execution. Businesses that take a thoughtful, deliberate approach will likely lead the AI-driven future while others scramble to undo costly, short-sighted implementations. The time to plan is now. As we’ve seen, “move first, think later” rarely works out.

C# 14 introduces extension members

C# 14, a planned update to Microsoft’s cross-platform, general purpose programming language, adds an extension member syntax to build on the familiar feature of extension methods. Extension members allow developers to “add” methods to existing types without having to create a new derived type, recompile, or otherwise modify the original type. The latest C# 14 preview, released with .NET 10 Preview 3, adds static extension methods and instance and static extension properties, according to Kathleen Pollard, principal program manager for .NET at Microsoft, in a May 8 blog post. Extension members also introduce an alternative syntax for extension methods. The new syntax is optional, and developers do not need to change their existing extension methods. Regardless of the style, extension members add functionality to types. This is particularly useful if developers do not have access to the type’s source code or if the type is an interface, Pollard said. If developers do not like using !list.Any(), they can create their own extension method IsEmpty(). Starting in the latest preview, developers can make that a property and use it just like any other property of the type. Using the new syntax, developers also can add extensions that work like static properties and methods on the underlying type. Creating extension members has been a long journey and many designs have been explored, Pollard said. Some needed the receiver repeated on every member; some impacted disambiguation; some placed restrictions on how developers organized extension members; some created a breaking change if updated to the new syntax; some had complicated implementations; and some just did not feel like C#, she said. The new extension member syntax preserves the enormous body of existing this-parameter extension methods while introducing new kinds of extension members, she added. 
It offers an alternative syntax for extension methods that is consistent with the new kinds of members and fully interchangeable with the this-parameter syntax. A general release of C# 14 is expected with .NET 10 in November 2025.
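C# 14's exact syntax is best taken from Microsoft's post, but the core idea—adding members to an existing type without subclassing it or editing its source—has a rough analog in Python, where attributes can be attached to a class after the fact. The `is_unit` member below is a hypothetical illustration of the concept, not C# syntax and not part of any library:

```python
from fractions import Fraction

# A hypothetical "extension member" for a type we don't control:
# attached after the class exists, with no subclass and no edits
# to Fraction itself -- loosely analogous to a C# extension method.
def is_unit(self) -> bool:
    """True when the fraction reduces to 1."""
    return self.numerator == self.denominator

Fraction.is_unit = is_unit

print(Fraction(3, 3).is_unit())  # True: 3/3 normalizes to 1/1
print(Fraction(1, 2).is_unit())  # False
```

The analogy is loose: C# resolves extension members statically at compile time and can extend types it cannot touch at all, such as interfaces, whereas the Python sketch mutates the class at runtime.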

MySQL at 30: Still important but no longer king

This month MySQL turns 30. Once the bedrock of web development, MySQL remains immensely popular. But as MySQL enters its fourth decade, it ironically has sown the seeds of its own decline, especially relative to Postgres. Oracle, the steward over MySQL since 2010, may proclaim MySQL is “the world’s favorite database,” but that has been objectively false for a long time, as shown by developer sentiment surveys and popularity rankings from Stack Overflow and DB-Engines. None of which is to deprecate MySQL’s importance. It was and is critical infrastructure for the web. But it’s no longer developers’ default database for most things. How did this happen? For years, MySQL was the go-to database of the internet. Born as a lightweight, open source alternative to expensive commercial systems, MySQL made it easy to build on the web. It powered the rise of the LAMP (Linux, Apache, MySQL, PHP) stack. It was simple, fast, and free. But over time, the very things that made MySQL dominant came to constrain its growth. Its focus on simplicity made it easy to learn, but hard to evolve. Its permissive early design helped it spread fast but also left it ill-suited to modern, complex applications. Its dominant position left it less hungry for innovation than PostgreSQL, a database that has relentlessly closed gaps and added new capabilities. The rise of MySQL in the web era MySQL’s origin story is rooted in the early open source movement. In 1995, Swedish developer Michael “Monty” Widenius created MySQL as an internal project, releasing it to the public soon after. By 2000, MySQL was fully open sourced (GPL license), and its popularity exploded. As the database component of the LAMP stack, MySQL offered an irresistible combination for web developers: It was free, easy to install, and “good enough” to back dynamic websites. In an era dominated by expensive proprietary databases, MySQL’s arrival was perfectly timed. 
Web startups of the 2000s—Facebook, YouTube, Twitter, Flickr, and countless others—embraced MySQL to store user data and content. MySQL quickly became synonymous with building websites. Early MySQL gained traction despite some trade-offs. In its youth, MySQL lacked certain “enterprise” features (like full SQL compliance or transactions in its default engine), but this simplicity was a feature, not a bug, for many users. It made MySQL blazingly fast for reads and simple queries and easier to manage for newcomers. Developers could get a MySQL database running with minimal fuss—a contrast to heavier systems like Oracle or even PostgreSQL at the time. “It’s hard to compete with easy,” I observed in 2022. By the mid-2000s, MySQL was everywhere and was increasingly feature-rich. The database had matured (adding InnoDB, a more robust storage engine for transactions) and continued to ride the web explosion. Even as newer databases emerged, MySQL remained a default choice for millions of deployments, from small business applications to large-scale web infrastructure. As of 2025, MySQL is likely still the widest-deployed open source (or proprietary) database globally by sheer volume of installations. Scads of applications were written with MySQL as the backing store, and many remain in active use. In this sense, MySQL today is a bit like IBM’s DB2: a workhorse database with a massive installed base that isn’t disappearing, even if it’s no longer the trendiest choice. Momentum shifts elsewhere In the past decade, MySQL’s once-unquestioned dominance of open source databases has faced strong headwinds from both relatively new contenders (MongoDB, Redis, Elasticsearch) and old (Postgres). From my vantage point at MongoDB, I’ve seen a large influx of developers turn to MongoDB to more flexibly build web and other applications. But it’s Postgres that has become the “easy button” for developers who want to stick to SQL but need more capabilities than MySQL affords. 
Whereas web developers in 2005 might have reached for MySQL for virtually any project, today they have a plethora of choices tailored to specific needs. Need a flexible JSON document store to support general-purpose database needs? MongoDB beckons. Building real-time analytics or full-text search? Elasticsearch could be a better fit. Looking for an in-memory cache or high-speed data structure store? Redis is there. Even in data analytics and data warehousing, cloud-native options such as Snowflake and BigQuery have taken off. But it’s Postgres that can take the credit (or blame, if you prefer) for MySQL’s decline. The reasons for this are both technical and cultural. Postgres offers capabilities MySQL historically has not. Among them: Richer SQL features and standards compliance: PostgreSQL has long prioritized SQL standards and advanced features. It supports complex queries, window functions, common table expressions, full-text search, and robust ACID (atomicity, consistency, isolation, durability) transactions, some of which MySQL lacked or added only later. Postgres can handle complex, enterprise-grade workloads without bending the rules. Extensibility and flexibility: Postgres is highly extensible. You can define new data types, index types, and even write custom extensions or stored procedures in various languages. Whether it’s GIS/geospatial data (PostGIS), time-series extensions, or pgcrypto and pgvector extensions for crypto and AI use cases, Postgres can morph to fit needs. These extensibility hooks have let Postgres stay on the cutting edge, even when these extensions may offer demonstrably worse performance for modern applications. Postgres’ extensibility still shines compared to MySQL’s more limited plug-in model. Open source, open culture: Both MySQL and Postgres are open source, but PostgreSQL’s license and governance are more permissive. 
Postgres is a true community-driven project, developed by a core global team and supported by many companies without a single owner. MySQL, by contrast, uses GPL (for the open version) and has been owned by Oracle for years. Oracle’s stewardship has been a double-edged sword. On one hand, Oracle has undoubtedly invested in MySQL’s development. The current MySQL 8.x series is a far cry from the MySQL of the 2000s. It’s a much more robust, feature-rich database (with improvements in replication, security, GIS, JSON support, and more) thanks in part to Oracle’s engineering resources. But that same tight control of MySQL engineering has altered the MySQL community dynamics in ways that arguably have slowed its momentum. In short, PostgreSQL has convinced many that it offers more “future-proof” value than MySQL. MySQL will persist Despite all the challenges, MySQL will be with us for a long, long time. There are good reasons many developers and organizations stick with MySQL even as alternatives rise. First and foremost is MySQL’s track record of reliability at scale. It has proven itself capable of handling enormous workloads. The Facebooks and Twitters of the world did not outgrow MySQL so much as bend MySQL to their will through custom tools and careful engineering. If MySQL could power the data needs of a social network with billions of users, it can probably handle your e-commerce site or internal application just fine. That pedigree counts for a lot. Secondly, MySQL remains simple and familiar to legions of developers. It’s often the first relational database new developers learn, thanks to its prevalence in tutorials and boot camps, and its integration with beginner-friendly tools. MySQL’s documentation is extensive, and its error messages and behaviors are well-known. In many cases, developers don’t need the advanced features of PostgreSQL, and MySQL’s lighter footprint (and yes, sometimes forgiving nature with SQL syntax) can make development feel faster. 
The old perception that “MySQL is easier” still lingers, even if PostgreSQL has improved its ease of use over the years. This familiarity creates inertia: Organizations have MySQL DBAs, MySQL backup scripts, and MySQL monitoring already in place. Switching is hard. There’s also an ecosystem lock-in of sorts. Hundreds of popular web applications and platforms are built on MySQL (or its drop-in cousin MariaDB). For example, WordPress, which powers a huge portion of websites globally, uses MySQL/MariaDB as its database layer. Many other content management systems, e-commerce platforms, and appliances have a MySQL dependency. This entrenched base means MySQL continues to be deployed by default as people set up those tools. Even cloud providers, while they enthusiastically offer PostgreSQL, also offer fully managed MySQL services (often MySQL-compatible services such as Amazon Aurora) to cater to demand. In short, MySQL is deeply embedded in the infrastructure of the web, and that isn’t undone overnight. A triumph of open source However, the very reasons MySQL persists also threaten its future loyalty. MySQL’s widespread legacy use means it will remain relevant, but new projects are increasingly likely to choose something else, whether that’s PostgreSQL, MongoDB, Redis, or whatever you prefer. The risk for MySQL is that a new generation of developers may simply not develop the same attachment to it. Momentum matters in technology communities: PostgreSQL has it; MySQL a bit less so. Additionally, if MySQL doesn’t keep up with new trends, it could see even loyal users exploring alternatives. For instance, when developers started caring about embeddings and vector search for AI applications, Postgres had an answer with pgvector, and MongoDB added Atlas Vector Search. MySQL had nothing comparable until very recently. 
MySQL’s continued evolution will be crucial to maintaining loyalty, and that again ties back to how Oracle and the MySQL community navigate the project’s direction in the coming years. As MySQL turns 30, we should celebrate the incredible legacy of this open source database. Few software projects have had such a profound impact on an era of computing. MySQL empowered an entire generation of developers to build dynamic websites and applications, lowering the barrier to entry for startups and open source projects alike. MySQL demonstrated that open source infrastructure could compete with—and even surpass—proprietary solutions, reshaping the database industry’s economics. For that, MySQL will always deserve credit. MySQL’s glory days might be behind it, but its story is far from over. The database world is better off for the 30 years of competition and innovation that MySQL inspired and continues to inspire.
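As a concrete footnote to the feature comparison: the window functions and common table expressions long credited to Postgres can be sketched using Python's bundled SQLite, which has also supported both since version 3.25, so the snippet runs anywhere; a Postgres session would accept the same SQL. The table and data are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount INTEGER);
    INSERT INTO orders VALUES ('acme', 10), ('acme', 20), ('bolt', 5);
""")

rows = conn.execute("""
    WITH ranked AS (                      -- common table expression
        SELECT customer, amount,
               SUM(amount) OVER (         -- window function: running total
                   PARTITION BY customer ORDER BY amount
               ) AS running_total
        FROM orders
    )
    SELECT * FROM ranked ORDER BY customer, amount
""").fetchall()

print(rows)  # [('acme', 10, 10), ('acme', 20, 30), ('bolt', 5, 5)]
```

MySQL did eventually add both features in version 8.0; the point of the comparison is how long Postgres led, not that the gap is permanent.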

How to build (real) cloud-native applications

Cloud-native applications are increasingly the default way to deploy in both public clouds and private clouds. But what exactly is a cloud-native application, and how do you build one? It’s important to start with first principles and define what cloud-native actually means. Like many technology terms, cloud-native is sometimes misunderstood, much like cloud computing itself was and continues to be in some respects. Simply hosting an application on a remote server doesn’t make it a cloud application. When it comes to cloud, the US National Institute of Standards and Technology (NIST) established a formal definition of cloud computing in 2011 in Special Publication 800-145: Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud-native doesn’t simply mean something that was built for the cloud, though some might use the term to mean that. Cloud-native is a term that doesn’t have a NIST definition. But it does have a formal definition that was developed through an open source process under the Cloud Native Computing Foundation (CNCF). That definition is maintained at https://github.com/cncf/toc/blob/main/DEFINITION.md and states: Cloud-native technologies and architectures typically consist of some combination of containers, service meshes, multi-tenancy, microservices, immutable infrastructure, serverless, and declarative APIs. What are cloud-native applications? You can run just about anything you want in the cloud. Take literally any application, create a virtual machine, and you’ll find a cloud host that can run it. That’s not, however, what cloud-native applications are all about. Cloud-native applications are designed and built specifically to operate in cloud environments. 
It’s not about just “lifting and shifting” an existing application that runs on-premises and letting it run in the cloud. Unlike traditional monolithic applications, which are often tightly coupled, cloud-native applications are modular and loosely coupled. A cloud-native application is not an application stack, but a decoupled application architecture. Perhaps the most atomic level of a cloud-native application is the container. A container could be a Docker container, though really any type of container that conforms to the Open Container Initiative (OCI) specifications works just as well. Often you’ll see the term microservices used to define cloud-native applications. Microservices are small, independent services that communicate over APIs—and they are typically deployed in containers. A microservices architecture allows for independent, elastic scaling that matches how the cloud is supposed to work. While a container can run on all different types of host environments, the most common way that containers and microservices are deployed is inside of an orchestration platform. The most commonly deployed container orchestration platform today is the open source Kubernetes platform, which is supported on every major public cloud. 
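To make the “small, independent service communicating over APIs” idea concrete, here is a minimal single-purpose service in Python exposing one HTTP health endpoint of the kind an orchestrator probes; the names and the `/health` path are invented for the sketch:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """One narrowly scoped endpoint: the sort of small, independently
    deployable unit a microservices architecture composes over APIs."""

    def do_GET(self):
        if self.path == "/health":  # the probe an orchestrator would call
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass

def serve(port: int = 0) -> HTTPServer:
    """Run the service on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In a real deployment, a unit like this would be packaged into a container image, and the orchestrator’s liveness probe would be pointed at the health path.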
Key characteristics of cloud-native applications include the following:
- Microservices architecture: Applications are broken into smaller, loosely coupled services that can be developed, deployed, and scaled independently.
- Containerization: Microservices are packaged with their dependencies, ensuring consistency across environments and efficient resource use.
- Orchestration platform: Provides a container deployment platform with integrated scaling, availability, networking, and management features.
- CI/CD: Automated pipelines enable rapid code integration, testing, and deployment.
- Devops culture: Collaboration between dev and ops teams creates shared responsibility, faster cycles, and reliable releases.
- Scalability and resilience: Resources scale dynamically based on demand, and failures are handled gracefully for high availability.
- Distributed system design: Services operate across multiple servers, enabling component-specific scaling, fault tolerance, and optimized resource utilization.
Frameworks, languages, and tools for building cloud-native applications Developing cloud-native applications involves a diverse set of technologies. Below are some of the most commonly used frameworks, languages, and tools. Programming languages Go: Developed by Google, Go is appreciated for its performance and efficiency, particularly in cloud services. Java: A versatile language with a rich ecosystem, often used for enterprise-level applications. JavaScript: Widely used for scripting and building applications as well as real-time services. Python: Known for its simplicity and readability, making it suitable for various applications, including web services and data processing. Cloud-native containerization and orchestration The basic units of cloud-native application deployment are some form of containers plus a platform orchestrating the running and management of those containers in the cloud. 
Key technologies include the following: Docker: The container technology that took application containers mainstream and often the default container technology used in cloud-native deployments. Podman: While Docker is the dominant container technology, Red Hat developed its own approach, Podman, which is largely compatible with Docker. Kubernetes: The de facto standard for container orchestration in cloud-native environments. Kubernetes services are available on all major cloud platforms including Amazon Elastic Kubernetes Service (EKS), Microsoft Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). Cloud-native development frameworks Programming languages alone are often not enough for the development of larger enterprise applications. That’s where application development frameworks come into play. Popular cloud-native development frameworks include the following: Django: A commonly used web framework for Python that has increasingly been used for cloud-native application development in recent years. Micronaut: A full-stack framework for building cloud-native applications with Java. Quarkus: Another framework created specifically to enable Java developers to build cloud-native applications. .NET Aspire: Microsoft’s open-source framework for building cloud-native applications with .NET. Next.js: A React JavaScript framework that is particularly well-suited for building cloud-native web applications. Node.js: A lean and fast JavaScript runtime environment with an event-driven, non-blocking I/O model. Continuous integration and continuous deployment CI/CD pipelines are essential components of cloud-native development, enabling automated testing, building, and deployment of applications. Modern CI/CD tools integrate closely with container technologies and cloud platforms, providing integrated automation across the entire application lifecycle. 
These tools often implement practices like automated testing, canary deployments, and blue-green deployments that reduce risk and accelerate delivery. Among the commonly used tools are Argo CD, AWS CodePipeline, Azure DevOps, GitHub Actions, GitLab, and Jenkins. Observability and monitoring Cloud-native applications require observability technology to provide insights into the behavior of distributed systems. This includes monitoring, logging, and tracing capabilities that provide a comprehensive view of application performance and health across multiple services and infrastructure components. Tools that support the OpenTelemetry standard, along with platforms like Prometheus for metrics and Jaeger for distributed tracing, form the backbone of cloud-native observability. Best practices for cloud-native application development All of the major public cloud hyperscalers in recent years have developed best practices for cloud-native applications. The primary guidelines are often drafted under the name of the Well-Architected Framework: the AWS Well-Architected Framework, the Google Cloud Well-Architected Framework, and the Microsoft Azure Well-Architected Framework. The foundational principles behind the Well-Architected Framework help to ensure that cloud-native applications are secure, reliable, and efficient. The core principles include the following: Operational excellence: Monitor systems and improve processes. Security: Implement strong identity and access management, data protection, and incident response. Reliability: Design systems to recover from failures and meet demand. Performance efficiency: Use computing resources efficiently. Cost optimization: Manage costs to maximize the value delivered. Cloud-native applications represent a fundamental shift in how organizations design, build, and deploy software. 
Rather than simply moving existing applications to cloud infrastructure as a virtual machine, the cloud-native approach embraces the cloud’s unique capabilities through architectural decisions that prioritize flexibility, resilience, and scale. By embracing cloud-native principles, organizations position themselves to benefit from the full potential of cloud computing—not just as a hosting model, but as an approach to building applications that can evolve rapidly, operate reliably, and scale dynamically as usage requires.

What software developers need to know about cybersecurity

In 2024, cyber criminals didn’t just knock on the front door—they walked right in. High-profile breaches hit widely used apps from tech giants and consumer platforms alike, including Snowflake, Ticketmaster, AT&T, 23andMe, Trello, and Life360. Meanwhile, a massive, coordinated attack targeting Dropbox, LinkedIn, and X (formerly Twitter) compromised a staggering 26 billion records. These aren’t isolated incidents—they’re a wake-up call. If reducing software vulnerabilities isn’t already at the top of your development priority list, it should be. The first step? Empower your developers with secure coding best practices. It’s not just about writing code that works—it’s about writing code that holds up under fire. Start with the known Before developers can defend against sophisticated zero-day attacks, they need to master the fundamentals—starting with known vulnerabilities. These trusted industry resources provide essential frameworks and up-to-date guidance to help teams code more securely from day one: OWASP Top 10: The Open Worldwide Application Security Project (OWASP) curates regularly updated Top 10 lists that highlight the most critical security risks across web, mobile, generative AI, API, and smart contract applications. These are must-know threats for every developer. MITRE: MITRE offers an arsenal of tools to help development teams stay ahead of evolving threats. The MITRE ATT&CK framework details adversary tactics and techniques while CWE (Common Weakness Enumeration) catalogs common coding flaws with serious security implications. MITRE also maintains the CVE Program, an authoritative source for publicly disclosed cybersecurity vulnerabilities. NIST NVD: The National Institute of Standards and Technology (NIST) maintains the National Vulnerability Database (NVD), a repository of security checklist references, vulnerability metrics, software flaws, and impacted product data.  
Training your developers to engage with these resources isn’t just best practice, it’s your first line of defense. Standardize on secure coding techniques Training developers to write secure code shouldn’t be looked at as a one-time assignment. It requires a cultural shift. Start by making secure coding techniques the standard practice across your team. Two of the most critical (yet frequently overlooked) practices are input validation and input sanitization. Input validation ensures incoming data is appropriate and safe for its intended use, reducing the risk of logic errors and downstream failures. Input sanitization removes or neutralizes potentially malicious content—like script injections—to prevent exploits like cross-site scripting (XSS). Get access control right Authentication and authorization aren’t just security checkboxes—they define who can access what and how. This includes access to code bases, development tools, libraries, APIs, and other assets, as well as defining how entities can access sensitive information and view or modify data. Best practices dictate employing a least-privilege approach to access, providing only the permissions necessary for users to perform required tasks. Don’t forget your APIs APIs may be less visible, but they form the connective tissue of modern applications. APIs are now a primary attack vector, with API attacks growing 1,025% in 2024 alone. The top security risks? Broken authentication, broken authorization, and lax access controls. Make sure security is baked into API design from the start, not bolted on later. Assume sensitive data will be under attack Sensitive data consists of more than personally identifiable information (PII) and payment information. It also includes everything from two-factor authentication (2FA) codes and session cookies to internal system identifiers. If exposed, this data becomes a direct line to the internal workings of an application and opens the door to attackers. 
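The validation and sanitization practices described above fit in a few lines of code. This is a minimal sketch: the function names and the username rule are invented for illustration, not taken from any standard.

```python
import html
import re

# Input validation: accept only data whose shape you expect (an allow-list
# pattern), rejecting everything else before it reaches application logic.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def validate_username(value: str) -> str:
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

# Input sanitization: neutralize content that could execute, e.g. escaping
# markup to prevent cross-site scripting (XSS) when a value is echoed
# back into an HTML page.
def sanitize_for_html(value: str) -> str:
    return html.escape(value)

print(sanitize_for_html("<script>alert(1)</script>"))
# &lt;script&gt;alert(1)&lt;/script&gt;
```

Validation and sanitization are complementary, not interchangeable: validate at the trust boundary where data enters, and sanitize at the point of output where data could be interpreted as code.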
Application design should consider data protection before coding starts, and sensitive data must be encrypted at rest and in transit with strong, up-to-date algorithms. Questions developers should ask: What data is necessary? Could data be exposed during logging, autocompletion, or transmission? Log and monitor applications Application logging and monitoring are essential for detecting threats, ensuring compliance, and responding promptly to security incidents and policy violations. Logging is more than a check-the-box activity—for developers, logging can be a critical line of defense. Application logs should: Capture user context to identify suspicious or anomalous activity, Ensure log data is properly encoded to guard against injection attacks, and Include an audit trail for all critical transactions. Logging and monitoring aren’t limited to the application. They should span the entire software development life cycle (SDLC) and include real-time alerting, incident response plans, and recovery procedures. Integrate security in every phase You don’t have to compromise security for speed. When effective security practices are baked in across the development process—from planning and architecture to coding, deployment, and maintenance—vulnerabilities can be identified early to ensure a smooth release. Training developers to think like defenders while they build can accelerate delivery while reducing the risk of costly rework later in the cycle and result in more resilient software. Build on secure foundations While secure code is important, it’s only part of the equation. The entire SDLC has its own attack surface to manage and defend. Every API, cloud server, container, and microservice adds complexity and provides opportunities for attackers. In fact, one-third of the most significant application breaches of 2024 resulted from attacks on cloud infrastructure, while the rest were traced back to compromised APIs and weak access controls. 
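Two of the logging practices above—capturing user context and encoding log data so attacker-controlled input can’t forge entries—can be sketched as follows; the field names are invented for illustration:

```python
import json

def log_event(user: str, action: str, detail: str) -> str:
    """Emit one structured log line with user context. Serializing the
    record with json.dumps escapes embedded newlines (\n becomes \\n),
    so user-supplied text cannot inject a fake second log entry
    (a classic log-injection attack)."""
    record = {"user": user, "action": action, "detail": detail}
    line = json.dumps(record)
    print(line)
    return line

# An attacker-supplied value trying to forge an "admin logged in" entry
# still produces exactly one, clearly attributed log line:
log_event("mallory", "login", 'failed\n{"user": "admin", "action": "login"}')
```

Structured, encoded log lines also make downstream monitoring easier, since anomaly detection can key on fields like `user` rather than parsing free text.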
Worse still, attackers aren’t waiting until software is in production. The 2025 State of Application Risk report from Legit Security found that every organization surveyed had high or critical risks lurking in their development environments. The same report found that these organizations also had exposed secrets, with over one-third found outside of source code—in tickets, logs, and artifacts. What can you do? To reduce risk, develop a strategy to prioritize visibility and control across development environments, because attackers can strike during any phase.    Manage third-party risk So, you’ve implemented best practices across your development environment, but what about your supply chain vendors? Applications are only as secure as their weakest links. Software ecosystems today are interconnected and complex. Third-party libraries, frameworks, cloud services, and open-source components all represent prime entry points for attackers. A software bill of materials (SBOM) can help you understand what’s under the hood, providing a detailed inventory of application components and libraries to identify potential vulnerabilities. But that’s just the beginning, because development practices can also introduce supply chain risk. To reduce third-party risk: Validate software as artifacts move through build pipelines to make sure it hasn’t been compromised. Use version-specific containers for open-source components to support traceability. Ensure pipelines validate code and packages before use, especially from third-party repositories. Securing the software supply chain means assuming every dependency could be compromised. Commit to continuous monitoring Application security is a moving target. Tools, threats, dependencies, and even the structure of your teams evolve. Your security posture should evolve with them. 
To keep pace, organizations need an ongoing monitoring and improvement program that includes: Regular reviews and updates to secure development practices, Role-specific training for everyone across the SDLC, Routine audits of code reviews, access controls, and remediation workflows, and Penetration testing and red teaming, wherever appropriate. Security maturity isn’t about perfection—it’s about progress, visibility, and discipline. Your development organization should never stop asking the question, “What’s changed, and how does it impact our risk?” Security is no longer optional, but a core competency for modern developers. Invest in training, standardize your practices, and make secure coding second nature. Your applications—and your users—will thank you. Jose Lazu is associate director of product at CMD+CTRL Security. — New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.

Visual Studio Code beefs up AI coding features

Visual Studio Code 1.100, the latest release of Microsoft’s code editor, has arrived with several upgrades to its AI chat and AI code editing capabilities. Highlighting the list are support for Markdown-based instructions and prompt files, faster code editing in agent mode, and more speed and accuracy in Next Edit Suggestions. Released May 8, Visual Studio Code 1.100, also known as the April 2025 release, can be downloaded for Windows, macOS, and Linux at code.visualstudio.com.

VS Code 1.100 allows developers to tailor the AI chat experience in the code editor to their specific coding practices and technology stack, using Markdown-based files. Instructions files define coding practices, preferred technologies, project requirements, and other custom instructions, while prompt files create reusable chat requests for common tasks, according to Microsoft. Developers could create different instructions files for different programming languages or project types, while a prompt file might be used to create a front-end component, Microsoft said.

The new VS Code release also brings faster AI-powered code editing in agent mode, especially in large files, due to the addition of support for OpenAI’s apply patch editing format and Anthropic’s replace string tool. The update for OpenAI is on by default in VS Code Insiders and gradually rolling out to Stable, Microsoft said, while the update for Anthropic is available for all users.

Visual Studio Code 1.100 introduces a new model for powering Next Edit Suggestions (NES), intended to offer faster and more contextually relevant code recommendations. This updated model delivers suggestions with reduced latency and aligns more closely with recent edits, according to Microsoft. NES can now also automatically suggest adding missing import statements in JavaScript and TypeScript files.
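As a rough illustration of the instructions-file feature, a project might keep a file along these lines. The `.instructions.md` naming and the `applyTo` front-matter field are taken from Microsoft’s description of the feature; the exact file location and guideline content below are assumptions for the example:

```markdown
---
applyTo: "**/*.ts"
---
# TypeScript guidelines for this project
- Enable strict null checks and avoid the `any` type.
- Prefer async/await over raw promise chains.
- Follow the existing module layout under `src/` when adding files.
```

With a file like this in place, chat requests that touch matching files would have these conventions applied automatically, rather than the developer restating them in every prompt.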
With VS Code 1.100, the editor now provides links to additional information that explains why an extension identified as malicious was flagged. These “Learn More” links connect users to GitHub issues or documentation with details about the security concerns, helping users better understand potential risks. In addition, extension signature verification is now required on all platforms: Windows, macOS, and Linux. Previously, this verification was mandatory only on Windows and macOS; with this release, Linux also enforces it, ensuring that all extensions are properly validated before installation.

VS Code also gains two new modes for floating windows, which let developers move editors and certain views out of the main window into a smaller window for lightweight multi-window setups. The new modes are Compact, in which certain UI elements are hidden to make more room for the actual content, and Always-on-top, in which the window stays on top of all other windows until the developer leaves this mode.

For source control, VS Code 1.100 adds quick diff editor decorations for staged changes. Developers can now view staged changes directly from the editor, without needing to open the Source Control view. For debugging, VS Code 1.100 features a context menu in the disassembly view.

VS Code 1.100 follows VS Code 1.99, which was released April 3 with improvements for Copilot Chat and Copilot agent mode, along with the introduction of Next Edit Suggestions. VS Code 1.99 was followed by three point releases that addressed various bugs and security issues.