Month: October 2025

13 posts

Will JavaFX return to Java?

Just as a proposal to return JavaFX to the Java Development Kit has drawn interest in the OpenJDK community, Oracle too says it wants to make the Java-based rich client application platform more approachable within the JDK. JavaFX was removed from the JDK with Java 11 more than seven years ago.

An October 29 post by Bruce Haddon on an OpenJDK discussion list argues that the reasons for the separation of JavaFX from the JDK—namely, that JavaFX contributed greatly to the bloat of the JDK, that the separation allowed the JDK and JavaFX to evolve separately, and that the development and maintenance of JavaFX had moved from Oracle to Gluon—are much less applicable today. Haddon notes that JDK bloat has been addressed by modularization, that the JDK and JavaFX releases have kept in lockstep, and that both Java and JavaFX are developed in open source (OpenJDK and OpenJFX), so integrating the releases would still permit community involvement and innovation.

“Further, it would be of great convenience to developers not to have to make two installations and then configure their IDEs to access both libraries (not really easy in almost all IDEs, requiring understanding of many otherwise ignorable options of each IDE),” Haddon wrote. “It is both my belief and my recommendation that the time has come for the re-integration of JavaFX (as the preferred GUI feature) with the rest of the JDK.”

In response to an InfoWorld inquiry, Oracle on October 30 released the following statement from Donald Smith, Oracle vice president of Java product management: “Oracle continues to lead and be active in the OpenJFX Project. While we don’t have specific announcements or plans currently, we are investigating options for improving the approachability of JavaFX with the JDK.”

JavaFX was launched in 2007 by Sun Microsystems. It is now billed as an open source, next-generation client application platform for desktop, mobile, and embedded systems built on Java.
JavaFX releases for Linux, macOS, and Windows can be downloaded from Gluon.
Read More

OpenAI launches Aardvark to detect and patch hidden bugs in code

OpenAI has unveiled Aardvark, a GPT-5-powered autonomous agent designed to act like a human security researcher, capable of scanning, understanding, and patching code with the reasoning skills of a professional vulnerability analyst. Announced on Thursday and currently available in private beta, Aardvark is being positioned as a major leap toward AI-driven software security.

Unlike conventional scanners that mechanically flag suspicious code, Aardvark attempts to analyze how and why code behaves in a particular way. “OpenAI Aardvark is different as it mimics a human security researcher,” said Pareekh Jain, CEO at EIIRTrend. “It uses LLM-powered reasoning to understand code semantics and behavior, reading and analyzing code the way a human security researcher would.” By embedding itself directly into the development pipeline, Aardvark aims to turn security from a post-development concern into a continuous safeguard that evolves with the software itself, Jain added.

From code semantics to validated patches

What makes Aardvark unique, OpenAI noted, is its combination of reasoning, automation, and verification. Rather than simply highlighting potential vulnerabilities, the agent promises multi-stage analysis, starting by mapping an entire repository and building a contextual threat model around it. From there, it continuously monitors new commits, checking whether each change introduces risk or violates existing security patterns. Upon identifying a potential issue, Aardvark attempts to validate the exploitability of the finding in a sandboxed environment before flagging it.

This validation step could prove transformative. Traditional static analysis tools often overwhelm developers with false alarms: issues that may look risky but aren’t truly exploitable. “The biggest advantage is that it will reduce false positives significantly,” noted Jain. “It’s helpful in open source codes and as part of the development pipeline.”

Once a vulnerability is confirmed, Aardvark integrates with Codex to propose a patch, then re-analyzes the fix to ensure it doesn’t introduce new problems. OpenAI claims that in benchmark tests, the system identified 92 percent of known and synthetically introduced vulnerabilities across test repositories, a promising indication that AI may soon shoulder part of the burden of modern code auditing.

Securing open source and shifting security left

Aardvark’s role extends beyond enterprise environments. OpenAI has already deployed it across open-source repositories, where it claims to have discovered multiple real-world vulnerabilities, ten of which have received official CVE identifiers. The LLM giant said it plans to provide pro bono scanning for selected non-commercial open-source projects, under a coordinated disclosure framework that gives maintainers time to address the flaws before public reporting. This approach aligns with a growing recognition that software security isn’t just a private-sector problem but a shared ecosystem responsibility. “As security is becoming increasingly important and sophisticated, these autonomous security agents will be helpful to both big and small enterprises,” Jain added.

OpenAI’s announcement also reflects a broader industry concept known as “shifting security left”: embedding security checks directly into development rather than treating them as end-of-cycle testing. With over 40,000 CVE-listed vulnerabilities reported annually and the global software supply chain under constant attack, integrating AI into the developer workflow could help balance velocity with vigilance, the company added.
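The validate-before-flag step described above can be sketched in outline. This is a hypothetical illustration, not OpenAI's actual API: the Finding type, sandbox_validate, and triage are invented names, and the "sandbox" here is only simulated.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    description: str
    exploit_poc: str  # candidate proof-of-concept input; empty if none found

def sandbox_validate(finding: Finding) -> bool:
    """Stand-in for confirming exploitability in an isolated sandbox.

    Simulated here: a finding counts as confirmed only if the agent
    produced a working proof-of-concept input."""
    return bool(finding.exploit_poc.strip())

def triage(findings: list[Finding]) -> list[Finding]:
    """Report only validated findings, mirroring the validate-before-flag
    step that cuts down on false positives."""
    return [f for f in findings if sandbox_validate(f)]

candidates = [
    Finding("auth.py", "possible SQL injection", "' OR '1'='1"),
    Finding("util.py", "pattern match only, no working exploit", ""),
]
confirmed = triage(candidates)  # only the validated auth.py finding remains
```

The point of the design is that the noisy pattern-match result never reaches a developer; only findings that survive the validation gate get flagged.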
Read More

Learning from the AWS outage: Actions and resources

It has become cliché to say that the cloud is the backbone of digital transformation, but cloud outages like the recent AWS incident make enterprise dependence on the cloud painfully clear. Last week’s AWS outage impacted thousands of businesses worldwide, from SaaS providers to e-commerce companies. Revenue streams paused or evaporated, customer experiences soured, and brand reputations were at stake. For enterprises that suffer direct financial losses from any outage, the frustration runs deep.

As someone who has advised organizations on cloud architecture for decades, I often hear the same question after these events: What can we do to recover our losses and prevent devastating disruptions in the future?

The first step for any enterprise is to gather the facts about the outage and its impact. Cloud providers like AWS are quick to produce incident reports and public updates that usually detail what went wrong, how long it took to resolve, and which services were affected. It’s easy to get distracted by blame, but understanding the technical and contractual realities gives you your best shot at effective recourse. For enterprises, the key information to collect is: What services or workloads were impacted, and for how long? What were the direct business consequences, such as missed transactions, customer attrition, or downstream costs? What does your service-level agreement (SLA) actually guarantee, and did the outage breach those guarantees?

It’s not enough to know that “the cloud was down.” The specifics—duration, affected zones, the criticality of business functionality—will determine your next steps.

Cloud SLAs and compensation

Here’s one of the harsh realities I’ve encountered: Most enterprises overestimate what their public cloud agreements guarantee. AWS, Azure, and Google Cloud (along with other hyperscalers) offer clear-cut SLAs, but the compensation for outages is almost always limited and rarely covers your actual business losses.
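To put these guarantees in perspective, here is a quick sketch of the downtime a given uptime percentage actually permits, with hypothetical credit tiers (real tier schedules vary by provider and service):

```python
def monthly_downtime_budget_minutes(sla_percent: float, days: int = 30) -> float:
    """Minutes of downtime in a month that still stay within the stated SLA."""
    return days * 24 * 60 * (1 - sla_percent / 100)

def service_credit_percent(actual_uptime_percent: float) -> int:
    """Hypothetical credit tiers; real schedules differ by provider and service."""
    if actual_uptime_percent >= 99.99:
        return 0
    if actual_uptime_percent >= 99.0:
        return 10
    if actual_uptime_percent >= 95.0:
        return 25
    return 100

# A 99.99% SLA permits only ~4.3 minutes of downtime in a 30-day month.
budget = monthly_downtime_budget_minutes(99.99)

# A two-hour outage in a 30-day month drops uptime to ~99.72%, which
# under these illustrative tiers yields only a 10% service credit.
uptime_after_outage = 100 * (1 - 120 / (30 * 24 * 60))
credit = service_credit_percent(uptime_after_outage)
```

The gap between a 10% credit on one month's bill and a six-figure revenue loss is exactly the shortfall discussed below.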
Typically, SLAs offer service credits based on a percentage of your affected monthly usage. For example, if your web application is unavailable for two hours and the SLA states “99.99% uptime,” you might receive a percentage credit toward future usage. These credits are better than nothing, but for enterprises facing six-figure losses from a major outage, they are a mere drop in the bucket.

It’s important to recognize that compensation usually requires you to file a claim, often within a limited timeframe, and depends on your ability to demonstrate direct impact. Providers will not cover consequential or indirect damage such as lost sales, contractual penalties from your own clients, or damage to your brand. These are your problems, not theirs. Although this is difficult to accept, understanding it up front is better than being caught off guard.

Limits of legal recourse

Could you go further and pursue legal action? The answer is rarely satisfying. The standard cloud contract, designed by swarms of well-paid lawyers, strongly limits the provider’s liability. Most terms of service explicitly exclude responsibility for consequential and indirect losses and cap direct damages at the amount you paid in the previous month. Unless the provider acted in bad faith or with gross negligence—which is very hard to prove—courts tend to uphold these contracts.

Occasionally, an outage with broader impacts, such as one affecting a widely used financial platform that prompts regulatory scrutiny, may lead to a high-profile case. But for most companies, the only realistic recourse is the SLA credit process. Pursuing a lawsuit not only incurs substantial legal costs but is rarely worth your time compared to the minor damages you might recover.

Assess your business continuity strategy

The next step is to evaluate your organization’s risk profile and cloud architecture. In the tech world, the saying “Don’t put all your eggs in one basket” matters as much for computing as for investments.
While cloud engineering teams often believe in the robust, distributed nature of the public cloud, outages expose uncomfortable truths: Single-region deployments, insufficient failover mechanisms, and a lack of multicloud or hybrid strategies often leave businesses vulnerable.

It is critical to conduct an honest post-mortem. Which systems failed, and why? Did you rely solely on a single cloud provider or region without proper replication or fallback? Did your own resilience measures, such as automated failover, work in practice as well as in planning? Many organizations realize too late that their cloud backup was misconfigured, that critical systems lacked redundant design, or that their disaster recovery playbooks were outdated or untested. These gaps turn a provider’s outage into a companywide crisis.

Three steps to true resilience

In the aftermath of a public cloud outage, enterprises must eventually move beyond seeking compensation and develop meaningful protection strategies. Drawing on lessons from this and previous incidents, here are three essential steps every organization should take.

First, review your architecture and deploy real redundancy. Leverage multiple availability zones within your primary cloud provider, and seriously consider multiregion and even multicloud resilience for your most critical workloads. If your business cannot tolerate extended downtime, these investments are no longer optional.

Second, review and update your incident response and disaster recovery plans. Theoretical processes aren’t enough. Regularly test and simulate outages at both the technical and business-process levels. Ensure that playbooks are accurate, roles and responsibilities are clear, and every team knows how to execute under stress. Fast, coordinated responses can make the difference between a brief disruption and a full-scale catastrophe.

Third, understand your cloud contracts and SLAs, and negotiate better terms if possible.
Speak with your providers about custom agreements if your scale can justify them. Document outages carefully and file claims promptly. More importantly, factor the actual risks—not just the “guaranteed” uptime—into your business and customer SLAs.

Cloud outages are no longer rare, and as enterprises deepen their reliance on the cloud, the risks rise. The most resilient businesses will treat each outage as a crucial learning opportunity to strengthen both technical defenses and contractual agreements before the next problem occurs. As always, the best offense is a strong defense.
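Real redundancy with automated failover, at its simplest, amounts to a health-checked fallback across regions. The sketch below is a hypothetical illustration (the endpoints and the first_healthy helper are invented); production systems would usually push this into health-checked DNS or a load balancer rather than application code, but the same failover logic applies, and injecting the health check makes outage drills easy to run:

```python
ENDPOINTS = [
    "https://api.us-east-1.example.com/health",  # primary region (hypothetical)
    "https://api.eu-west-1.example.com/health",  # secondary region (hypothetical)
]

def first_healthy(endpoints, fetch):
    """Return the first endpoint whose health check succeeds.

    `fetch` is injected so that outage drills can simulate a regional
    failure without touching the network."""
    for url in endpoints:
        try:
            if fetch(url):
                return url
        except OSError:
            continue  # region unreachable: fail over to the next one
    raise RuntimeError("all regions down: invoke the disaster-recovery runbook")

# Outage drill: simulate the primary region being down.
def drill_fetch(url):
    if "us-east-1" in url:
        raise OSError("simulated regional outage")
    return True

active = first_healthy(ENDPOINTS, drill_fetch)  # fails over to eu-west-1
```

Running drills like this regularly, rather than only on paper, is what turns a disaster recovery playbook from theory into practice.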
Read More