
AWS Transform now supports agentic modernization of custom code

Does AI-generated code add to, or reduce, technical debt? Amazon Web Services is aiming to reduce it with new capabilities in AWS Transform, its AI-driven service for modernizing legacy code, applications, and infrastructure.

“Modernization is no longer optional for enterprises these days,” said Akshat Tyagi, associate practice leader at HFS Research. They need cleaner code and updated SDKs to run AI workloads, tighten security, and meet new regulations, he said, but their inability to modernize custom code quickly and with little manual effort is one of the major drivers of technical debt.

AWS Transform was introduced in May to accelerate the modernization of VMware systems, Windows .NET applications, and mainframe applications using agentic AI. Now, at AWS re:Invent, it’s getting additional capabilities in those areas, as well as new custom code modernization features.

New mainframe modernization agents add functions including activity analysis to help decide whether to modernize or retire code; blueprints to identify the business functions and flows hidden in legacy code; and automated test plan generation. AWS Transform for VMware gains new functionality including an on-premises discovery tool; support for configuration migration of network security tools from Cisco ACI, Fortigate, and Palo Alto Networks; and a migration planning agent that draws business context from unstructured documents, files, chats, and business rules.

The company is also inviting partners to integrate their proprietary migration tools and agents with its platform through a new AWS Transform composability initiative. Accenture, Capgemini, and Pegasystems are the first on board.

Customized modernization for custom code

On top of that, there’s a whole new agent, AWS Transform custom, designed to reduce the manual effort involved in custom code modernization by learning a custom pattern and operationalizing it throughout the target codebase or SDK. 
To feed the agent a unique pattern, enterprise teams can use natural-language instructions, internal documentation, or example code snippets that illustrate how specific upgrades should be performed. AWS Transform custom then applies these patterns consistently across large, multi-repository codebases, automatically identifying similar structures and making the required changes at scale. Developers can then review and fine-tune the output, which the agent adapts and operationalizes, allowing it to continually refine its accuracy, the company said.

Generic is no longer good enough

Tyagi said that the custom code modernization approach taken by AWS is better than most generic modernization tools, which rely solely on pre-packaged rules. “Generic modernization tools no longer cut it. Every day we come across enterprises complaining that the legacy systems are now so intertwined that pre-built transformation rules are bound to fail,” he said.

Pareekh Jain, principal analyst at Pareekh Consulting, said Transform custom’s support for custom SDK modernization will also be a value driver for many enterprises. “SDK mismatch is a major but often hidden source of tech debt. Large enterprises run hundreds of microservices on mismatched SDK versions, creating security, compliance, and stability risks,” Jain said. “Even small SDK changes can break pipelines, permissions, or runtime behavior, and keeping everything updated is one of the most time-consuming engineering tasks,” he said.

Similarly, enterprises will find support for modernization of custom infrastructure-as-code (IaC) particularly valuable, Tyagi said, because it tends to fall out of date quickly as cloud services and security rules evolve. Large organizations, the analyst noted, often delay touching IaC until something breaks, since these files are scattered across teams and full of outdated patterns, making them difficult and error-prone to clean up manually. 
For many enterprises, 20–40% of modernization work is actually refactoring IaC, Jain said.

Not a magic button

However, enterprises shouldn’t see AWS Transform’s new capabilities as a magic button to solve their custom code modernization issues. Its reliability will depend on codebase consistency, the quality of examples, and the complexity of underlying frameworks, said Jain.

But, said Tyagi, real-world code is rarely consistent. “Each individual writes it with their own methods and perceptions or habits. So the tool might get some parts right and struggle with others. That’s why you still need developers to review the changes, and this is where human intervention becomes significant,” Tyagi said.

There is also upfront work, Jain said: Senior engineers must craft examples and review output to ground the code modernization agent and reduce hallucinations.

The new features are now available and can be accessed via AWS Transform’s conversational interface on the web and the command line interface (CLI).
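The pattern-driven rewriting described above, learning an upgrade rule once and then applying it across many files, can be illustrated in miniature. The sketch below is not AWS Transform’s implementation; the `RewritePattern` class, the `old_sdk`/`new_sdk` names, and the regex rule are all hypothetical stand-ins for whatever pattern an agent might learn from example snippets.

```python
import re
from dataclasses import dataclass

@dataclass
class RewritePattern:
    """One learned upgrade rule: a description plus a before/after regex pair."""
    description: str
    before: str  # regex matching the legacy construct
    after: str   # replacement template using backreferences

def apply_patterns(source: str, patterns: list[RewritePattern]) -> tuple[str, int]:
    """Apply every pattern to one file's text; return the new text and a change count."""
    total = 0
    for p in patterns:
        source, n = re.subn(p.before, p.after, source)
        total += n
    return source, total

# Hypothetical learned rule: migrate a deprecated SDK call to its replacement.
patterns = [
    RewritePattern(
        description="old_sdk.fetch(x) -> new_sdk.get(x, timeout=30)",
        before=r"old_sdk\.fetch\((\w+)\)",
        after=r"new_sdk.get(\1, timeout=30)",
    )
]

migrated, changes = apply_patterns("data = old_sdk.fetch(url)", patterns)
# migrated == "data = new_sdk.get(url, timeout=30)", changes == 1
```

A production codemod would operate on syntax trees rather than regexes, and, as the analysts note, every change would still need to pass through developer review.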

AWS unveils Frontier AI agents for software development

Amazon Web Services has unveiled a new class of AI agents, called frontier agents, which the company said can work for hours or days without intervention. The first three agents are focused on software development tasks.

The three agents announced December 2 are the Kiro autonomous agent, the AWS Security Agent, and the AWS DevOps Agent, each focused on a different aspect of the software development life cycle. AWS said these agents represent a step-function change in what can be done with agents, moving from assisting with individual tasks to completing complex projects autonomously, like a member of the user’s team.

The Kiro autonomous agent is a virtual developer that maintains context and learns over time while working independently, so users can focus on their biggest priorities. The AWS Security Agent serves as a virtual security engineer that helps build secure applications, acting as a security consultant for app design, code reviews, and penetration testing. And the AWS DevOps Agent is a virtual operations team member that helps resolve and proactively prevent incidents while continuously improving an application’s reliability and performance, AWS said. All three agents are available in preview.

The Kiro agent is a shared resource working alongside the entire team, building a collective understanding of the user’s codebase, products, and standards. It connects to a team’s repos, pipelines, and tools such as Jira and GitHub to maintain context as work progresses. Kiro previously was positioned as an agentic AI-driven IDE.

The AWS Security Agent, meanwhile, helps build applications that are secure from the start across AWS, multi-cloud, and hybrid environments. The AWS DevOps Agent is on call when incidents happen, instantly responding to issues and using its knowledge of an application and the relationships between its components to find the root cause when an application goes down, according to AWS. 
AWS said the frontier agents were the result of examining its own development teams building services at Amazon scale and uncovering three critical insights. First, by learning what agents were and were not good at, the team could switch from babysitting every small task to directing agents toward broad, goal-driven outcomes. Second, the velocity of teams was tied to how many agentic tasks could be run at the same time. Third, the longer agents could operate on their own, the better. The AWS team realized it needed the same capabilities across every aspect of the software development life cycle, such as security and operations, or it would risk creating new bottlenecks.

The ripple effects of a VPN ban

Michigan and Wisconsin are considering proposals that would ban the use of virtual private networks (VPNs) by requiring internet providers to block these encrypted connections. The stated rationale is to control how users access certain online materials, but such a ban would upend the technical foundation of modern work, learning, and communication far beyond any single issue.

VPNs are not simply niche tools or workarounds. They’re the invisible infrastructure that underpins the security, productivity, and connectivity of countless institutions and individuals worldwide. If states implement a broad VPN ban, the day-to-day operations of businesses, schools, and residents would be severely affected.

The wide reliance on VPNs

Nearly every organization, from large multinational tech companies to small accounting firms, relies on VPNs to protect sensitive operations. In a world of distributed teams, cloud-based applications, and bring-your-own-device workplaces, the only way to keep sensitive company data secure as it moves across public networks is through encrypted VPN connections.

Cloud computing forms the foundation of most business activities. Whether employees are accessing files, databases, or proprietary applications, they often do so through the cloud. Remote workers, traveling employees, or anyone logging in from outside the office requires a VPN to establish a secure connection and protect their activity and the company’s sensitive assets from cyberthreats.

Removing VPNs cuts the essential link between remote users and their digital workspace. The consequences would be immediate and serious: Companies would need to recall staff to physical offices, risking the loss of talent and drops in productivity, or shift entire operations to more tech-friendly locations. For smaller businesses without the resources to handle these sudden challenges, the impact could be existential.

VPNs are as essential to educational institutions as they are to businesses. 
Universities, colleges, and even K-12 districts use VPNs to allow students and faculty to access research databases, library archives, and administrative systems from anywhere in the world. The University of Michigan’s own VPN is a crucial tool that enables students and staff to connect securely even when using non-university internet providers.

A ban would prevent students from doing coursework remotely, block faculty from accessing grading portals or academic data anywhere off campus, and make it extremely difficult for school IT teams to maintain security. Academic collaboration, both with colleagues at other institutions within the state and with international peers, would be hindered, isolating campuses at a time when global connectivity has never been more important.

Losing critical privacy and access

For regular internet users, VPNs are a fundamental privacy and security tool, as basic as locking your mailbox. They prevent third parties from tracking your activity, profiling your location, or creating a detailed record of your browsing history. Public Wi-Fi at coffee shops, airports, or hotels remains a top target for attackers. VPNs mitigate many of these risks, providing users with an important layer of protection.

Users traveling across states or countries rely on VPNs to securely access their home services, bank accounts, and private communications. Freelancers, consultants, medical professionals, and legal experts, anyone who frequently moves between client sites, would be unable to securely connect to their own files or confidential portals.

From a purely technical perspective, attempts to restrict VPNs create problems that are much bigger than the ones they claim to fix. Websites cannot reliably tell whether a VPN connection is coming from a particular state or even another country. If just a few states ban VPNs, sites that face legal risks are likely to block all VPN access globally to avoid accidental violations. 
This means VPN users everywhere could lose access to vital sites and services simply because of a law in one state. Such broad effects show how a technical policy, made without understanding operational realities, can cause widespread disruption across the internet.

Productivity and security at risk

The unintended consequences of a VPN ban reach well beyond state borders and far beyond the original lawmaking intentions. Without VPNs:

Businesses lose the option of remote work, and with it the flexibility and efficiency today’s economy requires.

Educational institutions and students are cut off from essential resources and collaboration tools.

Everyday users are exposed to cyberthreats, tracking, and data breaches when using public networks.

Vulnerable populations, such as journalists, advocates, and individuals relying on privacy for their safety, are deprived of vital digital protections.

Additionally, VPNs are the foundation of many compliance systems, including those overseeing financial data, health records, and legal documents. A ban could lead to legal and regulatory issues for companies trying to stay in good standing.

Informed policy and practical solutions

The debates in Michigan and Wisconsin over VPN access aren’t just about a single technology. They grapple with how societies balance security, productivity, privacy, and economic competitiveness in the digital age. Instead of limiting key security tools, states should focus on promoting cybersecurity education, strengthening tech infrastructure, and implementing smart digital policies that acknowledge the vital role VPNs play in modern life. The digital world requires thoughtful legislation that helps people and organizations thrive online rather than broad bans that make the internet less useful, secure, and productive for everyone. 
If Wisconsin and Michigan truly aim to attract business, research, and innovation, maintaining secure, private, and open access to essential technologies like VPNs is a key step.

How to ensure your enterprise data is ‘AI ready’

Many organizations are experimenting with AI agents to determine which job roles to focus on, when to automate actions, and what steps require a human in the middle. AI agents connect the power of large language models with APIs, enabling them to take action and integrate seamlessly into employee workflows and customer experiences in a variety of domains:

Field operations AI agents can help outline the steps to address a service call.

HR agents partner with job recruiters to schedule interviews for top applicants.

Finance AI agents help respond to daily challenges in managing supply chain, procurement, and accounts receivable.

Coding agents are integrated into AI-assisted development platforms that facilitate vibe coding and accelerate application development.

AI agents are integrating into the workplace, where they participate in meetings, summarize discussions, create follow-up tasks, and schedule the next meetings.

World-class IT organizations are adapting their strategies and practices to develop AI agents while mitigating the risks associated with rapid deployments. “Building a world-class IT team means leading the conversation on risk,” says Rani Johnson, CIO of Workday. “We work closely with our legal, privacy, and security teams to set a clear adoption risk tolerance that aligns with our overall strategy.”

A key question for every technology, data, and business leader is whether the underlying data that AI agents tap into is “AI-ready.” According to Ocient’s Beyond Big Data report, 97% of leaders report notable increases in data processing due to AI, but only 33% have fully prepared for the escalating scale and complexity of the AI-driven workplace. Establishing data’s AI readiness is critical, as most AI agents leverage enterprise data to provide business-, industry-, and role-specific responses and recommendations. 
I asked business and technology leaders how they were evaluating AI agents for data readiness in domains such as sales, HR, finance, and IT operations. Seven critical practices emerged.

Centralize data and intelligence

IT departments have invested significantly in centralizing data into data warehouses and data lakes, and in connecting resources with data fabrics. However, data is not equivalent to intelligence, as much of the data science and computational work occurs downstream in a sprawl of SaaS tools, data analytics platforms, and other citizen data science tools. Worse, numerous spreadsheets, presentations, and other unstructured documents are often poorly categorized and lack unified search capabilities.

“Instead of endlessly moving and transforming data, we need to bring intelligence directly to where the data lives, creating a journey to enterprise-ready data with context, trust, and quality built in at the source,” says Sushant Tripathi, VP and North America transformation lead at TCS. “This connected organizational intelligence weaves into the fabric of an enterprise, transforming fragmented information into trusted and unified assets so that AI agents can act with the speed and context of your best people, at enterprise scale.”

Even as IT looks to centralize data and intelligence, a backlog of data debt creates risks when using it in AI agents. “AI-ready data must go beyond volume and accuracy and be unified, trusted, and governed to foster reliable AI,” says Dan Yu, CMO of SAP data and analytics. “With the right business data fabric architecture, organizations can preserve context, mitigate bias, and embed accountability into every layer of AI. This foundation ensures accurate, auditable decisions and enables AI to scale and adapt on semantically rich, governed data products, delivering durable business value.”

Recommendation: Most organizations will have a continuous backlog of dataops and data debt to address. 
Product-based IT organizations should manage data resources as products and develop roadmaps aligned with their AI priorities.

Ensure compliance with regulations and security standards

When it comes to data security, Jack Berkowitz, chief data officer at Securiti, advises starting by answering who should have access to any given piece of information flowing in or out of the genAI application, whether sensitive information is included in the content, and how this data and information are being processed or queried. He says, “As we move to agentic AI, which is actively able to do processing and take decisions, putting static or flat guardrails in place will fail.” Guardrails are needed to help prevent rogue AI agents and to avoid using data in areas where the risks outweigh the benefits.

“Most enterprises have a respectable security base with a secure SDLC, encryption at rest and in transit, role-based access control, data loss prevention, and adherence to regulations such as GDPR, HIPAA, and CCPA,” says Joanne Friedman, CEO of ReilAI. “That’s sufficient for traditional IT, but insufficient for AI, where data mutates quickly, usage patterns are emergent, and model behavior must be governed, not guessed.”

Recommendation: Friedman recommends establishing the following four pillars of AI risk-ready data:

Define an AI bill of materials.

Use a risk management framework such as NIST AI RMF or ISO 42001.

Treat genAI prompts as data and protect against prompt injection, data leakage, and related abuses.

Document AI with model cards and datasheets for datasets, including intended use, limitations, and other qualifications.

Define contextual metadata and annotations

AI language models can be fed multiple documents and data sources with conflicting information. When an employee’s prompt results in an erroneous response or hallucinations, they can respond with clarifications to close the gap. 
However, with AI agents integrated into employee workflows and customer journeys, the stakes of poor recommendations and incorrect actions are significantly higher. An AI agent’s accuracy improves when documents and data sources include rich metadata and annotations, signaling how to use the underlying information responsibly.

“The AI needs to be able to understand the meaning behind the data by adding a semantic layer, which is like a universal dictionary for your data,” says Andreas Blumauer, SVP growth and marketing at Graphwise. “This layer uses consistent labels, metadata, and annotations to tell the AI what each piece of data represents, linking it directly to your business concepts and questions. This is also where you include specific industry knowledge, or domain knowledge models, so the AI understands the context of your business.”

Recommendation: Leverage industry-specific taxonomies and categorization standards, then apply a metadata standard such as Dublin Core, Schema.org, PROV-O, or XMP.

Review the statistical significance of unbiased data

Surveys are a primary tool of market research. Researchers design survey questions and answers according to best practices that minimize the exposure of biases to the respondent. For example, asking employees who use the service desk, “How satisfied are you with our excellent help desk team’s quick response times?” is biased because the words excellent and quick imply a subjective standard.

Another challenge for researchers is ensuring a significant sample size for all respondent segments. For example, it would be misleading to report on executive response to the service desk survey if only a handful of people in that segment responded.

When reviewing data for use in AI, it is even more important to consider statistical significance and data biases, especially when the data in question underpins an AI agent’s decision-making. 
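Bias checks of this kind can be automated before a dataset ever reaches an agent. As a minimal sketch (the function name and the toy records are illustrative, not from any named product), the widely used demographic parity gap measures how far apart positive-outcome rates are across segments:

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """records: iterable of (segment, outcome) pairs with outcome in {0, 1}.
    Returns the largest spread in positive-outcome rate across segments."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for segment, outcome in records:
        totals[segment] += 1
        positives[segment] += outcome
    rates = {s: positives[s] / totals[s] for s in totals}
    return max(rates.values()) - min(rates.values())

# Segment "a" sees 2/3 positive outcomes; segment "b" sees 1/3.
records = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
gap = demographic_parity_gap(records)  # 1/3
```

The same pass also yields per-segment counts, so undersized segments, like the handful of executives in the survey example above, can be flagged as statistically insignificant rather than reported as if representative.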
“AI-ready data requires more than conventional quality frameworks, demanding statistical rigor that encompasses comprehensive bias audits with equalized odds, distributional stability testing, and causal identifiability frameworks that enable counterfactual reasoning,” says Shanti Greene, head of data science at AnswerRocket and adjunct professor at Washington University. “Organizations pursuing transformational outcomes through sophisticated generative models paradoxically remain constrained by data infrastructures exhibiting insufficient volume for edge-case coverage. AI systems remain bounded by statistical foundations, proving that models trained on deficient data can generate confident hallucinations that masquerade as authoritative intelligence.”

Recommendation: Understanding and documenting data biases should be a data governance non-negotiable. Common fairness metrics include demographic parity and equalized odds, while p-value testing is used for statistical significance.

Benchmark and review data quality metrics

Data quality metrics focus on a dataset’s accuracy, completeness, consistency, timeliness, uniqueness, and validity. JG Chirapurath, president of DataPelago, recommends tracking the following:

Data completeness: Fewer than 5% of entries for any critical field may be blank or missing for the data to be considered complete.

Statistical drift: If any key statistic changes by more than 2% compared to expected values, the data is flagged for human review.

Bias ratios: If a group or segment experiences outcomes that are more than 20% different from those of another group or segment, the data is flagged for human review.

Golden data sets: AI outputs must achieve greater than 90% agreement with human-verified ground truth on sample subsets. 
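Chirapurath's four thresholds translate directly into automated checks. A minimal sketch, with illustrative function names and the thresholds taken from the figures above:

```python
def completeness_ok(values, max_missing=0.05):
    """Pass if fewer than 5% of a critical field's entries are blank or missing."""
    missing = sum(v is None for v in values)
    return missing / len(values) < max_missing

def drift_flagged(observed, expected, tolerance=0.02):
    """Flag for human review if a key statistic moves more than 2% from expected."""
    return abs(observed - expected) / abs(expected) > tolerance

def bias_flagged(rate_a, rate_b, tolerance=0.20):
    """Flag if one segment's outcome rate differs from another's by more than 20%."""
    return abs(rate_a - rate_b) / max(rate_a, rate_b) > tolerance

def golden_set_ok(ai_outputs, ground_truth, min_agreement=0.90):
    """AI outputs must agree with human-verified labels on more than 90% of samples."""
    matches = sum(a == g for a, g in zip(ai_outputs, ground_truth))
    return matches / len(ground_truth) > min_agreement
```

Wiring checks like these into a pipeline makes each threshold an explicit, reviewable policy rather than an analyst's judgment call.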
Rajeev Butani, chairman and CEO of MediaMint, adds, “Organizations can measure readiness with metrics like null and duplicate rates, schema and taxonomy consistency, freshness against SLAs, and reconciliation variance between booked, delivered, and invoiced records. Bias and risk can be tested through consent coverage, PII exposure scores, and retention or deletion checks.”

Recommendation: Selecting data quality metrics and calculating a composite data health score is a common feature of data catalogs that helps build trust in using datasets for AI and decision-making. Data governance leaders should communicate target benchmarks and establish a review process for datasets that fall below data quality standards.

Establish data classification, lineage, and provenance

Looking beyond data quality, key data governance practices include classifying data for IP and privacy, and establishing data’s lineage and provenance. “The future is about governing AI agents as non-human identities that are registered, accountable, and subject to the same discipline as people in an identity system,” says Matt Carroll, founder and CEO of Immuta. “This requires classifying information into risk tiers, building in checkpoints for when human oversight is essential, and allowing low-risk interactions to flow freely.”

Geoff Webb, VP of product and portfolio marketing at Conga, shares two key metrics that must be carefully evaluated before trusting the results of any agentic workflow:

Data provenance refers to the origin of the data. Can the source be trusted, and how did that data become part of the dataset you are using?

Data chronology refers to how old the data is. Avoid training models on data that is no longer relevant to the objectives, or that may reflect outdated working practices, non-compliant processes, or simply poor business practices from the past.

Recommendation: Regulated industries have a long history of maturing data governance practices. 
For companies lagging in these disciplines, data classification is an essential starting point.

Create human-in-the-middle feedback loops

As organizations use more datasets in AI, it is essential to have subject matter experts and other end users continually validate the accuracy of AI language models and agents. Dataops should extend feedback on AI to the underlying data sources to help prioritize improvements and identify areas to be enriched with new datasets.

“In our call centers, we’re not just listening to customer interactions, we’re also feeding that qualitative data back into engineering teams to reshape how experiences are designed,” says Ryan Downing, VP and CIO of enterprise business solutions at Principal Financial Group. “We measure how people interact with AI-infused solutions and how those interactions correlate with downstream behaviors, for example, whether someone still needed to call us after using the mobile app.”

Recommendation: Unstructured datasets and those capturing people’s opinions and sentiments are most prone to variance that statistical methods may not easily validate. When people report odd responses from AI models built on this data, it’s essential to trace back to the root causes in the data, especially since many AI models are not fully explainable.

Automate a data readiness checklist

Guy Adams, CTO of DataOps.live, says, “AI-ready data isn’t just good data; it’s good data that’s been productized, governed, and delivered with the correct context so it can be trusted by AI systems today, and reused for the AI use cases we haven’t even imagined yet.”

Organizations that heavily invest in AI agents and other AI capabilities will first ensure their data is ready and then automate a checklist for ongoing validation. The bar should be raised for any dataset’s AI readiness when that data is used for more mission-critical workflows and revenue-impacting customer experiences at greater scales.
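An automated readiness checklist of the kind described can be as simple as a named set of predicates run against each dataset. In the sketch below, the three checks are illustrative placeholders for real lookups against a data catalog, freshness SLA, and ownership registry:

```python
from typing import Callable

def run_readiness_checklist(dataset: dict, checks: dict[str, Callable]) -> dict:
    """Run every check against a dataset and report pass/fail per item.
    The dataset counts as AI-ready only when all checks pass."""
    results = {name: bool(check(dataset)) for name, check in checks.items()}
    results["ai_ready"] = all(results.values())
    return results

# Illustrative checks; real ones would query a catalog, scanner, or lineage store.
checks = {
    "no_critical_nulls": lambda d: all(v is not None for v in d["values"]),
    "fresh_within_sla": lambda d: d["age_days"] <= 7,
    "owner_assigned": lambda d: bool(d.get("owner")),
}

dataset = {"values": [1, 2, 3], "age_days": 2, "owner": "data-platform"}
report = run_readiness_checklist(dataset, checks)
# report["ai_ready"] is True; any failing check would flip it to False
```

Raising the bar for mission-critical workflows then means swapping in a stricter `checks` dictionary rather than rewriting the pipeline.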