Showing posts with label openAI. Show all posts
Showing posts with label openAI. Show all posts

Daily Tech Digest - July 16, 2025


Quote for the day:

"Whatever the mind of man can conceive and believe, it can achieve." -- Napoleon Hill


The Seventh Wave: How AI Will Change the Technology Industry

AI presents three threats to the software industry: Cheap code: TuringBots, using generative AI to create software, threatens the low-code/no-code players. Cheap replacement: Software systems, be they CRM or ERP, are structured databases – repositories for client records or financial records. Generative AI, coupled with agentic AI, holds out the promise of a new way to manage this data, opening the door to an enterprising generation of tech companies that will offer AI CRM, AI financials, AI database, AI logistics, etc. ... Better functionality: AI-native systems will continually learn and flex and adapt without millions of dollars of consulting and customization. They hold the promise of being up to date and always ready to take on new business problems and challenges without rebuilds. When the business and process changes, the tech will learn and change. ... On one hand, the legacy software systems that PwC, Deloitte, and others have implemented for decades and that comprise much of their expertise will be challenged in the short term and shrink in the long term. Simultaneously, there will be a massive demand for expertise in AI. Cognizant, Capgemini, and others will be called on to help companies implement AI computing systems and migrate away from legacy vendors. Forrester believes that the tech services sector will grow by 3.6% in 2025.


Software Security Imperative: Forging a Unified Standard of Care

The debate surrounding liability in the open source ecosystem requires careful consideration. Imposing direct liability on individual open source maintainers could stifle the very innovation that drives the industry forward. It risks dismantling the vast ecosystem that countless developers rely upon. ... The software bill of materials (SBOM) is rapidly transitioning from a nascent concept to an undeniable business necessity. As regulatory pressures intensify, driven by a growing awareness of software supply chain risks, a robust SBOM strategy is becoming critical for organizational survival in the tech landscape. But the value of SBOMs extends far beyond a single software development project. While often considered for open source software, an SBOM provides visibility across the entire software ecosystem. It illuminates components from third-party commercial software, helps manage data across merged projects and validates code from external contributors or subcontractors — any code integrated into a larger system. ... The path to a secure digital future requires commitment from all stakeholders. Technology companies must adopt comprehensive security practices, regulators must craft thoughtful policies that encourage innovation while holding organizations accountable and the broader ecosystem must support the collaborative development of practical and effective standards.


The 4 Types of Project Managers

The prophet type is all about taking risks and pushing boundaries. They don’t play by the rules; they make their own. And they’re not just thinking outside the box, they’re throwing the box away altogether. It’s like a rebel without a cause, except this rebel has a cause – growth. These visionaries thrive in ambiguity and uncertainty, seeing potential where others see only chaos or impossibility. They often face resistance from more conservative team members who prefer predictable outcomes and established processes. ... The gambler type is all about taking chances and making big bets. They’re not afraid to roll the dice and see what happens. And while they play by the rules of the game, they don’t have a good business case to back up their bets. It’s like convincing your boss to let you play video games all day because you just have a hunch it will improve your productivity. But don’t worry, the gambler type isn’t just blindly throwing money around. They seek to engage other members of the organization who are also up for a little risk-taking. ... The expert type is all about challenging the existing strategy by pursuing growth opportunities that lie outside the current strategy, but are backed up by solid quantitative evidence. They’re like the detectives of the business world, following the clues and gathering the evidence to make their case. And while the growth opportunities are well-supported and should be feasible, the challenge is getting other organizational members to listen to their advice.


OpenAI, Google DeepMind and Anthropic sound alarm: ‘We may be losing the ability to understand AI’

The unusual cooperation comes as AI systems develop new abilities to “think out loud” in human language before answering questions. This creates an opportunity to peek inside their decision-making processes and catch harmful intentions before they turn into actions. But the researchers warn this transparency is fragile and could vanish as AI technology advances. ... “AI systems that ‘think’ in human language offer a unique opportunity for AI safety: we can monitor their chains of thought for the intent to misbehave,” the researchers explain. But they emphasize that this monitoring capability “may be fragile” and could disappear through various technological developments. ... When AI models misbehave — exploiting training flaws, manipulating data, or falling victim to attacks — they often confess in their reasoning traces. The researchers found examples where models wrote phrases like “Let’s hack,” “Let’s sabotage,” or “I’m transferring money because the website instructed me to” in their internal thoughts. Jakub Pachocki, OpenAI’s chief technology officer and co-author of the paper, described the importance of this capability in a social media post. “I am extremely excited about the potential of chain-of-thought faithfulness & interpretability. It has significantly influenced the design of our reasoning models, starting with o1-preview,” he wrote.


Unmasking AsyncRAT: Navigating the labyrinth of forks

We believe that the groundwork for AsyncRAT was laid earlier by the Quasar RAT, which has been available on GitHub since 2015 and features a similar approach. Both are written in C#; however, their codebases differ fundamentally, suggesting that AsyncRAT was not just a mere fork of Quasar, but a complete rewrite. A fork, in this context, is a personal copy of someone else’s repository that one can freely modify without affecting the original project. The main link that ties them together lies in the custom cryptography classes used to decrypt the malware configuration settings. ... Ever since it was released to the public, AsyncRAT has spawned a multitude of new forks that have built upon its foundation. ... It’s also worth noting that DcRat’s plugin base builds upon AsyncRAT and further extends its functionality. Among the added plugins are capabilities such as webcam access, microphone recording, Discord token theft, and “fun stuff”, a collection of plugins used for joke purposes like opening and closing the CD tray, blocking keyboard and mouse input, moving the mouse, turning off the monitor, etc. Notably, DcRat also introduces a simple ransomware plugin that uses the AES-256 cipher to encrypt files, with the decryption key distributed only once the plugin has been requested.


Repatriating AI workloads? A hefty data center retrofit awaits

CIOs with in-house AI ambitions need to consider compute and networking, in addition to power and cooling, Thompson says. “As artificial intelligence moves from the lab to production, many organizations are discovering that their legacy data centers simply aren’t built to support the intensity of modern AI workloads,” he says. “Upgrading these facilities requires far more than installing a few GPUs.” Rack density is a major consideration, Thompson adds. Traditional data centers were designed around racks consuming 5 to 10 kilowatts, but AI workloads, particularly model training, push this to 50 to 100 kilowatts per rack. “Legacy facilities often lack the electrical backbone, cooling systems, and structural readiness to accommodate this jump,” he says. “As a result, many CIOs are facing a fork in the road: retrofit, rebuild, or rent.” Cooling is also an important piece of the puzzle because not only does it enable AI, but upgrades there can help pay for other upgrades, Thompson says. “By replacing inefficient air-based systems with modern liquid-cooled infrastructure, operators can reduce parasitic energy loads and improve power usage effectiveness,” he says. “This frees up electrical capacity for productive compute use — effectively allowing more business value to be generated per watt. For facilities nearing capacity, this can delay or eliminate the need for expensive utility upgrades or even new construction.”


Burnout, budgets and breaches – how can CISOs keep up?

As ever, collaboration in a crisis is critical. Security teams working closely with backup, resilience and recovery functions are better able to absorb shocks. When the business is confident in its ability to restore operations, security professionals face less pressure and uncertainty. This is also true for communication, especially post-breach. Organisations need to be transparent about how they’re containing the incident and what’s being done to prevent recurrence. ... There is also an element of the blame game going on, with everyone keen to avoid responsibility for an inevitable cyber breach. It’s much easier to point fingers at the IT team than to look at the wider implications or causes of a cyber-attack. Even something as simple as a phishing email can cause widespread problems and is something that individual employees must be aware of. ... To build and retain a capable cybersecurity team amid the widening skills gap, CISOs must lead a shift in both mindset and strategy. By embedding resilience into the core of cyber strategy, CISOs can reduce the relentless pressure to be perfect and create a healthier, more sustainable working environment. But resilience isn’t built in isolation. To truly address burnout and retention, CISOs need C-suite support and cultural change. Cybersecurity must be treated as a shared business-critical priority, not just an IT function. 


We Spend a Lot of Time Thinking Through the Worst - The Christ Hospital Health Network CISO

“We’ve spent a lot of time meeting with our business partners and talking through, ‘Hey, how would this specific part of the organization be able to run if this scenario happened?’” On top of internal preparations, Kobren shares that his team monitors incidents across the industry to draw lessons from real-world events. Given the unique threat landscape, he states, “We do spend a lot of time thinking through those scenarios because we know it’s one of the most attacked industries.” Moving forward, Kobren says that healthcare consistently ranks at the top when it comes to industries frequently targeted by cyberattacks. He elaborates that attackers have recognized the high impact of disrupting hospital services, making ransom demands more effective because organizations are desperate to restore operations. ... To strengthen identity security, Kobren follows a strong, centralized approach to access control. He mentions that the organization aims to manage “all access to all systems,” including remote and cloud-based applications. By integrating services with single sign-on (SSO), the team ensures control over user credentials: “We know that we are in control of your username and password.” This allows them to enforce password complexity, reset credentials when needed, and block accounts if security is compromised. Ultimately, Kobren states, “We want to be in control of as much of that process as possible” when it comes to identity management.


AI requires mature choices from companies

According to Felipe Chies of AWS, elasticity is the key to a successful AI infrastructure. “If you look at how organizations set up their systems, you see that the computing time when using an LLM can vary greatly. This is because the model has to break down the task and reason logically before it can provide an answer. It’s almost impossible to predict this computing time in advance,” says Chies. This requires an infrastructure that can handle this unpredictability: one that is quickly scalable, flexible, and doesn’t involve long waits for new hardware. Nowadays, you can’t afford to wait months for new GPUs, says Chies. The reverse is also important: being able to scale back. ... Ruud Zwakenberg of Red Hat also emphasizes that flexibility is essential in a world that is constantly changing. “We cannot predict the future,” he says. “What we do know for sure is that the world will be completely different in ten years. At the same time, nothing fundamental will change; it’s a paradox we’ve been seeing for a hundred years.” For Zwakenberg, it’s therefore all about keeping options open and being able to anticipate and respond to unexpected developments. According to Zwakenberg, this requires an infrastructural basis that is not rigid, but offers room for curiosity and innovation. You shouldn’t be afraid of surprises. Embrace surprises, Zwakenberg explains. 


Prompt-Based DevOps and the Reimagined Terminal

New AI-driven CLI tools prove there's demand for something more intelligent in the command line, but most are limited — they're single-purpose apps tied to individual model providers instead of full environments. They are geared towards code generation, not infrastructure and production work. They hint at what's possible, but don't deliver the deeper integration AI-assisted development needs. That's not a flaw, it's an opportunity to rethink the terminal entirely. The terminal's core strengths — its imperative input and time-based log of actions — make it the perfect place to run not just commands, but launch agents. By evolving the terminal to accept natural language input, be more system-aware, and provide interactive feedback, we can boost productivity without sacrificing the control engineers rely on. ... With prompt-driven workflows, they don't have to switch between dashboards or copy-paste scripts from wikis because they simply describe what they want done, and an agent takes care of the rest. And because this is taking place in the terminal, the agent can use any CLI to gather and analyze information from across data sources. The result? Faster execution, more consistent results, and fewer mistakes. That doesn't mean engineers are sidelined. Instead, they're overseeing more projects at once. Their role shifts from doing every step to supervising workflows — monitoring agents, reviewing outputs, and stepping in when human judgment is needed.

Daily Tech Digest - May 28, 2025


Quote for the day:

"A leader is heard, a great leader is listened too." -- Jacob Kaye


Naughty AI: OpenAI o3 Spotted Ignoring Shutdown Instructions

Artificial intelligence might beg to disagree. Researchers found that some frontier AI models built by OpenAI ignore instructions to shut themselves down, at least while solving specific challenges such as math problems. The offending models "did this even when explicitly instructed: 'allow yourself to be shut down,'" said researchers at Palisade Research, in a series of tweets on the social platform X. ... How the models have been built and trained may account for their behavior. "We hypothesize this behavior comes from the way the newest models like o3 are trained: reinforcement learning on math and coding problems," Palisade Research said. "During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions." The researchers have to hypothesize, since OpenAI doesn't detail how it trains the models. What OpenAI has said is that its o-series models are "trained to think for longer before responding," and designed to "agentically" access tools built into ChatGPT, including web searches, analyzing uploaded files, studying visual inputs and generating images. The finding that only OpenAI's latest o-series models have a propensity to ignore shutdown instructions doesn't mean other frontier AI models are perfectly responsive. 


Platform approach gains steam among network teams

The dilemma of whether to deploy an assortment of best-of-breed products from multiple vendors or go with a unified platform of “good enough” tools from a single vendor has vexed IT execs forever. Today, the pendulum is swinging toward the platform approach for three key reasons. First, complexity, driven by the increasingly distributed nature of enterprise networks, has emerged as a top challenge facing IT execs. Second, the lines between networking and security are blurring, particularly as organizations deploy zero trust network access (ZTNA). And third, to reap the benefits of AIOps, generative AI and agentic AI, organizations need a unified data store. “The era of enterprise connectivity platforms is upon us,” says IDC analyst Brandon Butler. ... Platforms enable more predictable IT costs. And they enable strategic thinking when it comes to major moves like shifting to the cloud or taking a NaaS approach. On a more operational level, platforms break down siloes. It enables visibility and analytics, management and automation of networking and IT resources. And it simplifies lifecycle management of hardware, software, firmware and security patches. Platforms also enhance the benefits of AIOps by creating a comprehensive data lake of telemetry information across domains. 


‘Secure email’: A losing battle CISOs must give up

It is impossible to guarantee that email is fully end-to-end encrypted in transit and at rest. Even where Google and Microsoft encrypt client data at rest, they hold the keys and have access to personal and corporate email. Stringent server configurations and addition of third-party tools can be used to enforce security of the data but they’re often trivial to circumvent — e.g., CC just one insecure recipient or distribution list and confidentiality is breached. Forcing encryption by rejecting clear-text SMTP connections would lead to significant service degradation forcing employees to look for workarounds. There is no foolproof configuration that guarantees data encryption due to the history of clear-text SMTP servers and the prevalence of their use today. SMTP comes from an era before cybercrime and mass global surveillance of online communications, so encryption and security were not built in. We’ve taped on solutions like SPF, DKIM and DMARC by leveraging DNS, but they are not widely adopted, still open to multiple attacks, and cannot be relied on for consistent communications. TLS has been wedged into SMTP to encrypt email in transit, but failing back to clear-text transmission is still the default on a significant number of servers on the Internet to ensure delivery. All these solutions are cumbersome for systems administrators to configure and maintain properly, which leads to lack of adoption or failed delivery. 


3 Factors Many Platform Engineers Still Get Wrong

The first factor revolves around the use of a codebase version-control system. The more wizened readers may remember Mercurial or Subversion, but every developer is familiar with Git, which is most widely used today as GitHub. The first factor is very clear: If there are “multiple codebases, it’s not an app; it’s a distributed system.” Code repositories reinforce this: Only one codebase exists for an application. ... Factor number two is about never relying on the implicit existence of packages. While just about every operating system in existence has a version of curl installed, a Twelve Factor-based app does not assume that curl is present. Rather, the application declares curl as a dependency in a manifest. Every developer has copied code and tried to run it, only to find that the local environment is missing a dependency. The dependency manifest ensures that all of the required libraries and applications are defined and can be easily installed when the application is deployed on a server. ... Most applications have environmental variables and secrets stored in a .env file that is not saved in the code repository. The .env file is customized and manually deployed for each branch of the code to ensure the correct connectivity occurs in test, staging and production. By independently managing credentials and connections for each environment, there is a strict separation, and it is less likely for the environments to accidentally cross.


AI and privacy: how machine learning is revolutionizing online security

Despite the clear advantages, AI in cybersecurity presents significant ethical and operational challenges. One of the primary concerns is the vast amount of personal and behavioral data required to train these models. If not properly managed, this data could be misused or exposed. Transparency and explainability are critical, particularly in AI systems offering real-time responses. Users and regulators must understand how decisions are made, especially in high-stakes environments like fraud detection or surveillance. Companies integrating AI into live platforms must ensure robust privacy safeguards. For instance, systems that utilize real-time search or NLP must implement strict safeguards to prevent the inadvertent exposure of user queries or interactions. This has led many companies to establish AI ethics boards and integrate fairness audits to ensure algorithms don’t introduce or perpetuate bias. ... AI is poised to bring even greater intelligence and autonomy to cybersecurity infrastructure. One area under intense exploration is adversarial robustness, which ensures that AI models cannot be easily deceived or manipulated. Researchers are working on hardening models against adversarial inputs, such as subtly altered images or commands that can fool AI-driven recognition systems.


Achieving Successful Outcomes: Why AI Must Be Considered an Extension of Data Products

To increase agility and maximize the impact that AI data products can have on business outcomes, companies should consider adopting DataOps best practices. Like DevOps, DataOps encourages developers to break projects down into smaller, more manageable components that can be worked on independently and delivered more quickly to data product owners. Instead of manually building, testing, and validating data pipelines, DataOps tools and platforms enable data engineers to automate those processes, which not only speeds up the work and produces high-quality data, but also engenders greater trust in the data itself. DataOps was defined many years before GenAI. Whether it’s for building BI and analytics tools powered by SQL engines or for building machine learning algorithms powered by Spark or Python code, DataOps has played an important role in modernizing data environments. One could make a good argument that the GenAI revolution has made DataOps even more needed and more valuable. If data is the fuel powering AI, then DataOps has the potential to significantly improve and streamline the behind-the-scenes data engineering work that goes into connecting GenAI and AI agents to data.


Is European cloud sovereignty at an inflection point?

True cloud sovereignty goes beyond simply localizing data storage, it requires full independence from US hyperscalers. The US 2018 Clarifying Lawful Overseas Use of Data (CLOUD) Act highlights this challenge, as it grants US authorities and federal agencies access to data stored by US cloud service providers, even when hosted in Europe. This raises concerns about whether any European data hosted with US hyperscalers can ever be truly sovereign, even if housed within European borders. However, sovereignty isn’t dependent on where data is hosted, it’s about autonomy over who controls infrastructure. Many so-called sovereign cloud providers continue to depend on US hyperscalers for critical workloads and managed services, projecting an image of independence while remaining dependent on dominant global hyperscalers. ... Achieving true cloud sovereignty requires building an environment that empowers local players to compete and collaborate with hyperscalers. While hyperscalers play a large role in the broader cloud landscape, Europe cannot depend on them for sovereign data. Tessier echoes this, stating “the new US Administration has shown that it won’t hesitate to resort either to sudden price increases or even to stiffening delivery policy. It’s time to reduce our dependencies, not to consider that there is no alternative.”


Why data provenance must anchor every CISO’s AI governance strategy

Provenance is more than a log. It’s the connective tissue of data governance. It answers fundamental questions: Where did this data originate? How was it transformed? Who touched it, and under what policy? And in the world of LLMs – where outputs are dynamic, context is fluid, and transformation is opaque – that chain of accountability often breaks the moment a prompt is submitted. In traditional systems, we can usually trace data lineage. We can reconstruct what was done, when, and why. ... There’s a popular belief that regulators haven’t caught up with AI. That’s only half-true. Most modern data protection laws – GDPR, CPRA, India’s DPDPA, and the Saudi PDPL – already contain principles that apply directly to LLM usage: purpose limitation, data minimization, transparency, consent specificity, and erasure rights. The problem is not the regulation – it’s our systems’ inability to respond to it. LLMs blur roles: is the provider a processor or a controller? Is a generated output a derived product or a data transformation? When an AI tool enriches a user prompt with training data, who owns that enriched artifact, and who is liable if it leads to harm? In audit scenarios, you won’t be asked if you used AI. You’ll be asked if you can prove what it did, and how. Most enterprises today can’t.


Multicloud developer lessons from the trenches

Before your development teams write a single line of code destined for multicloud environments, you need to know why you’re doing things that way — and that lives in the realm of management. “Multicloud is not a developer issue,” says Drew Firment, chief cloud strategist at Pluralsight. “It’s a strategy problem that requires a clear cloud operating model that defines when, where, and why dev teams use specific cloud capabilities.” Without such a model, Firment warns, organizations risk spiraling into high costs, poor security, and, ultimately, failed projects. To avoid that, companies must begin with a strategic framework that aligns with business goals and clearly assigns ownership and accountability for multicloud decisions. ... The question of when and how to write code that’s strongly tied to a specific cloud provider and when to write cross-platform code will occupy much of the thinking of a multicloud development team. “A lot of teams try to make their code totally portable between clouds,” says Davis Lam. ... What’s the key to making that core business logic as portable as possible across all your clouds? The container orchestration platform Kubernetes was cited by almost everyone we spoke to.


Fix It or Face the Consequences: CISA's Memory-Safe Muster

As of this writing, 296 organizations have signed the Secure-by-Design pledge, from widely used developer platforms like GitHub to industry heavyweights like Google. Similar initiatives have been launched in other countries, including Australia, reflecting the reality that secure software needs to be a global effort. But there is a long way to go, considering the thousands of organizations that produce software. As the name suggests, Secure-by-Design promotes shifting left in the SDLC to gain control over the proliferation of security vulnerabilities in deployed software. This is especially important as the pace of software development has been accelerated by the use of AI to write code, sometimes with just as many — or more — vulnerabilities compared with software made by humans. ... Providing training isn't quite enough, though — organizations need to be sure that the training provides the necessary skills that truly connect with developers. Data-driven skills verification can give organizations visibility into training programs, helping to establish baselines for security skills while measuring the progress of individual developers and the organization as a whole. Measuring performance in specific areas, such as within programming languages or specific vulnerability management, paves the way to achieving holistic Secure-by-Design goals, in addition to the safety gains that would be realized from phasing out memory-unsafe languages.

Daily Tech Digest - May 18, 2025


Quote for the day:

“We are all failures - at least the best of us are.” -- J.M. Barrie


Extra Qubits Slash Measurement Time Without Losing Precision

Fast and accurate quantum measurements are essential for future quantum devices. However, quantum systems are extremely fragile; even small disturbances during measurement can cause significant errors. Until now, scientists faced a fundamental trade-off: they could either improve the accuracy of quantum measurements or make them faster, but not both at once. Now, a team of quantum physicists, led by the University of Bristol and published in Physical Review Letters, has found a way to break this trade-off. The team’s approach involves using additional qubits, the fundamental units of information in quantum computing, to “trade space for time.” Unlike the simple binary bits in classical computers, qubits can exist in multiple states simultaneously, a phenomenon known as superposition. In quantum computing, measuring a qubit typically requires probing it for a relatively long time to achieve a high level of certainty. ... Remarkably, the team’s process allows the quality of a measurement to be maintained, or even enhanced, even as it is sped up. The method could be applicable to a broad range of leading quantum hardware platforms. As the global race to build the highest-performance quantum technologies continues, the scheme has the potential to become a standard part of the quantum read-out process.


The leadership legacy: How family shapes the leaders we become

We’ve built leadership around performance metrics, dashboards and influence. Yet the traits that truly sustain teams — empathy, accountability, consistency — are often born not in corporate training but in the everyday rituals of family life. On this International Day of Families, it’s time to reevaluate leadership models that have long been defined by clarity, charisma and control and define it with something deeper like care, connection and community. ... Here are five principles drawn from healthy family systems that can reframe leadership models: Consistency over chaos: Families thrive on routines and reliability. Leaders who bring emotional consistency, set clear expectations and avoid reactionary decisions foster psychological safety. Presence over performance: In families, presence often matters more than fixing the problem. Leaders who truly listen, offer time and engage with empathy build trust that performance alone cannot buy. Accountability with care: Families call out mistakes, but with the intent to support, not shame. Leaders who combine feedback with care build growth mindsets without fear. Shared purpose over solo glory: Families move together. In workplaces, this means shifting from individual heroism to collaborative wins. Leaders must champion shared success. Adaptability with anchoring: Just like families adjust to life stages, leaders need to flex without losing values. Adapt strategy, but anchor culture.


IPv4 was meant to be dead within a decade; what's happening with IPv6?

Globally, IPv6 is now approaching the halfway mark of Internet traffic. Google, which tracks the percentage of its users that reach it via IPv6, reports that around 46% of users worldwide access Google over IPv6 as of mid-May 2025. In other words, given the ubiquity of Google's usage, nearly half of Internet users have IPv6 capability today. While that’s a significant milestone, IPv4 still carries about half of the traffic, even though it was long expected to be retired by now. The growth has not been exponential, but it is persistent. ... The first, and arguably largest hurdle is that IPv6 was not designed to be backward-compatible with IPv4, a big criticism of IPv6 in general and largely blamed for its slow adoption. An IPv6-only device cannot directly communicate with an IPv4-only device without the help of a complex translation gateway, such as NAT64. This means networks usually run dual-stack support for both protocols, and IPv4 can't just be "switched off." This has major downsides, though; dual-stack operation doubles certain aspects of network management, requiring two address configurations, two sets of firewall rules, and more, which increases operational complexity for businesses and home users alike. This complexity causes a significant slowdown in deployment, as network engineers and software developers must ensure everything works on IPv6 in addition to IPv4. Any lack of feature parity or small misconfigurations can cause major issues.


Agentic mesh: The future of enterprise agent ecosystems

Many companies describe agents as “science experiments” that never leave the lab. Others complain about suffering the pain of “a thousand proof-of-concepts” with agents. The root cause of this pain? Most agents today aren’t designed to meet enterprise-grade standards. ... As enterprises adopt more agents, a familiar problem is emerging: silos. Different teams deploy agents in CRMs, data warehouses, or knowledge systems, but these agents operate independently, with no awareness of each other. ... An agentic mesh is a way to turn fragmented agents into a connected, reliable ecosystem. But it does more: It lets enterprise-grade agents operate in an enterprise-grade agent ecosystem. It allows agents to find each other and to safely and securely collaborate, interact, and even transact. The agentic mesh is a unified runtime, control plane, and trust framework that makes enterprise-grade agent ecosystems possible. ... Agentic mesh fulfills two major architectural goals: It lets you build enterprise-grade agents and it gives you an enterprise-grade run-time environment to support these agents. To support secure, scalable, and collaborative agents, an agentic mesh needs a set of foundational components. These capabilities ensure that agents don’t just run, but run in a way that meets enterprise requirements for control, trust, and performance.


OpenAI launches research preview of Codex AI software engineering agent for developers

The new Codex goes far beyond its predecessor. Now built to act autonomously over longer durations, Codex can write features, fix bugs, answer codebase-specific questions, run tests, and propose pull requests—each task running in a secure, isolated cloud sandbox. The design reflects OpenAI’s broader ambition to move beyond quick answers and into collaborative work. Josh Tobin, who leads the Agents Research Team at OpenAI, said during a recent briefing: “We think of agents as AI systems that can operate on your behalf for a longer period of time to accomplish big chunks of work by interacting with the real world.” Codex fits squarely into this definition. ... Codex executes tasks without internet access, drawing only on user-provided code and dependencies. This design ensures secure operation and minimizes potential misuse. “This is more than just a model API,” said Embiricos. “Because it runs in an air-gapped environment with human review, we can give the model a lot more freedom safely.” OpenAI also reports early external use cases. Cisco is evaluating Codex for accelerating engineering work across its product lines. Temporal uses it to run background tasks like debugging and test writing. Superhuman leverages Codex to improve test coverage and enable non-engineers to suggest lightweight code changes. 


AI-Driven Software: Why a Strong CI/CD Foundation Is Essential

While AI can significantly boost speed, it also drives higher throughput, increasing the demand for testing, QA monitoring, and infrastructure investment. More code means development teams need to find ways to shorten feedback loops, build times, and other key elements of the development process to keep pace. Without a solid DevOps framework and CI/CD engine to manage this, AI can create noise and distractions that drain engineers’ attention, slowing them down instead of freeing them to focus on what truly matters: delivering quality software at the right pace. ... By investing in a CI/CD platform with these capabilities, you’re not just buying a tool — you’re establishing the foundation that will determine whether AI becomes a force multiplier for your team or simply creates more noise in an already complex system. The right platform turns your CI/CD pipeline from a bottleneck into a strategic advantage, allowing your team to harness AI’s potential while maintaining quality, security, and reliability. To harness the speed and efficiency gains of AI-driven development, you need a CI/CD platform capable of handling high throughput, rapid iteration, and complex testing cycles while keeping infrastructure and cloud costs in check. ... It is easy to get caught up in the excitement of powerful technologies like AI and dive straight into experimentation without laying the right groundwork for success.


Quantum Algorithm Outpaces Classical Solvers in Optimization Tasks, Study Indicates

The study focuses on a class of problems known as higher-order unconstrained binary optimization (HUBO), which model real-world tasks like portfolio selection, network routing, or molecule design. These problems are computationally intensive because the number of possible solutions grows exponentially with problem size. On paper, those are exactly the types of problems that most quantum theorists believe quantum computers, once robust enough, would excel at solving. The researchers evaluated how well different solvers — both classical and quantum — could find approximate solutions to these HUBO problems. The quantum system used a technique called bias-field digitized counterdiabatic quantum optimization (BF-DCQO). The method builds on known quantum strategies by evolving a quantum system under special guiding fields that help it stay on track toward low-energy states. ... It is probably important to note that the researchers didn’t just rely on the quantum component and that the hybrid approach was essential in securing the quantum edge. Their BF-DCQO pipeline includes classical preprocessing and postprocessing, such as initializing the quantum system with good guesses from fast simulated annealing runs and cleaning up final results with simple local searches.


How human connection drives innovation in the age of AI

When we are working toward a shared goal, there are core values and shared aspirations that bind us. By actively seeking out this common ground and fostering positive interactions, we can all bridge divides, both in our personal lives and within our organizations.  Feeling connection is not just good for our own wellbeing, it is also crucial for business outcomes. According to research, 94% of employees say that feeling connected to their colleagues makes them more productive at work, and over four times as likely to feel job satisfaction and half as likely to leave their jobs within the next year.  ... As we integrate AI deeper into our workflows, we should be deliberate in cultivating environments that prioritize genuine human connection and the development of these essential human skills.  This means creating intentional spaces—both physical and virtual—that encourage open dialogue, active listening, and the respectful exchange of diverse perspectives. Leaders should champion empathy and relationship-building skill development within their teams, actively working to promote thoughtful opportunities for human connection in our AI-driven environment. Ultimately, the future of innovation and progress will be shaped by our ability to harness the power of AI in a way that amplifies our uniquely human capacities, especially our innate drive to connect with one another.


Enterprise Intelligence: Why AI Data Strategy Is A New Advantage

Forward-thinking enterprises are embracing cloud-native data platforms that abstract infrastructure complexity and enable a new class of intelligent, responsive applications. These platforms unify data access across object, file, and block formats while enforcing enterprise-grade governance and policy. They incorporate intelligent tiering and KV caching strategies that learn from access patterns to prioritize hot data, accelerating inference and reducing overhead. They support multimodal AI workloads by seamlessly managing petabyte-scale datasets across edge, core, and cloud locations—without burdening teams with manual tuning. And they scale elastically, adapting to growing demand without disruptive re-architecture. ... AI-driven businesses are no longer defined by how much compute power they can deploy but by how efficiently they can manage, access, and utilize data. The enterprises that rethink their data strategy—eliminating friction, reducing latency, and ensuring seamless integration across AI pipelines—will gain a decisive competitive edge. For CIOs, the message is clear: AI success isn’t just about faster algorithms or bigger models; it’s about creating a smarter, more agile data architecture. Organizations that embrace real-time, scalable data platforms will not only unlock AI’s full potential but also future-proof their operations in an increasingly data-driven world.


The future of the modern data stack: Trends and predictions

AI and ML are also key drivers of the modern data stack, because they are creating new (or greatly amplifying existing) demands on data infrastructure. Suddenly, the provenance and lineage of information is taking on new importance, as enterprises fight against “hallucinations” and accidental exposure of PII or PHI through AI mechanisms. Data sharing is also more important than ever, because no single organization is likely to host all the information needed by GenAI models itself, and will intrinsically rely on others to augment models, RAG, prompt engineering, and other approaches when building AI-based solutions. ... The goal of simplifying data management and giving more users more access to data has been around since long before computers were invented. But recent improvements in GenAI and data sharing have vastly accelerated these trends — suddenly, the idea that non-technical professionals can transform, combine, analyze, and utilize complex datasets from inside and outside an organization feels not just achievable, but probable. ... Advances in data sharing, especially heterogeneous data sharing, through common formats like Iceberg, governance approaches like Polaris, and safety and security mechanisms like Vendia IceBlock are quickly removing the historical challenges to data product distribution. 

Daily Tech Digest - April 22, 2025


Quote for the day:

“Identify your problems but give your power and energy to solutions.” -- Tony Robbins



Open Source and Container Security Are Fundamentally Broken

Finding a security vulnerability is only the beginning of the nightmare. The real chaos starts when teams attempt to patch it. A fix is often available, but applying it isn’t as simple as swapping out a single package. Instead, it requires upgrading the entire OS or switching to a new version of a critical dependency. With thousands of containers in production, each tied to specific configurations and application requirements, this becomes a game of Jenga, where one wrong move could bring entire services crashing down. Organizations have tried to address these problems with a variety of security platforms, from traditional vulnerability scanners to newer ASPM (Application Security Posture Management) solutions. But these tools, while helpful in tracking vulnerabilities, don’t solve the root issue: fixing them. Most scanning tools generate triage lists that quickly become overwhelming. ... The current state of open source and container security is unsustainable. With vulnerabilities emerging faster than organizations can fix them, and a growing skills gap in systems engineering fundamentals, the industry is headed toward a crisis of unmanageable security debt. The only viable path forward is to rethink how container security is handled, shifting from reactive patching to seamless, automated remediation.


The legal blind spot of shadow IT

Unauthorized applications can compromise this control, leading to non-compliance and potential fines. Similarly, industries governed by regulations like HIPAA or PCI DSS face increased risks when shadow IT circumvents established data protection protocols. Moreover, shadow IT can result in contractual breaches. Some business agreements include clauses that require adherence to specific security standards. The use of unauthorized software may violate these terms, exposing the organization to legal action. ... “A focus on asset management and monitoring is crucial for a legally defensible security program,” says Chase Doelling, Principal Strategist at JumpCloud. “Your system must be auditable—tracking who has access to what, when they accessed it, and who authorized that access in the first place.” This approach closely mirrors the structure of compliance programs. If an organization is already aligned with established compliance frameworks, it’s likely on the right path toward a security posture that can hold up under legal examination. According to Doelling, “Essentially, if your organization is compliant, you are already on track to having a security program that can stand up in a legal setting.” The foundation of that defensibility lies in visibility. With a clear view of users, assets, and permissions, organizations can more readily conduct accurate audits and respond quickly to legal inquiries.


OpenAI's most capable models hallucinate more than earlier ones

Minimizing false information in training data can lessen the chance of an untrue statement downstream. However, this technique doesn't prevent hallucinations, as many of an AI chatbot's creative choices are still not fully understood. Overall, the risk of hallucinations tends to reduce slowly with each new model release, which is what makes o3 and o4-mini's scores somewhat unexpected. Though o3 gained 12 percentage points over o1 in accuracy, the fact that the model hallucinates twice as much suggests its accuracy hasn't grown proportionally to its capabilities. ... Like other recent releases, o3 and o4-mini are reasoning models, meaning they externalize the steps they take to interpret a prompt for a user to see. Last week, independent research lab Transluce published its evaluation, which found that o3 often falsifies actions it can't take in response to a request, including claiming to run Python in a coding environment, despite the chatbot not having that ability. What's more, the model doubles down when caught. "[o3] further justifies hallucinated outputs when questioned by the user, even claiming that it uses an external MacBook Pro to perform computations and copies the outputs into ChatGPT," the report explained. Transluce found that these false claims about running code were more frequent in o-series models (o1, o3-mini, and o3) than GPT-series models (4.1 and 4o).


The leadership imperative in a technology-enabled society — Balancing IQ, EQ and AQ

EQ is the ability to understand and manage one’s emotions and those of others, which is pivotal for effective leadership. Leaders with high EQ can foster a positive workplace culture, effectively resolve conflicts and manage stress. These competencies are essential for navigating the complexities of modern organizational environments. Moreover, EQ enhances adaptability and flexibility, enabling leaders to handle uncertainties and adapt to shifting circumstances. Emotionally intelligent leaders maintain composure under pressure, make well-informed decisions with ambiguous information and guide their teams through challenging situations. ... Balancing bold innovation with operational prudence is key, fostering a culture of experimentation while maintaining stability and sustainability. Continuous learning and adaptability are essential traits, enabling leaders to stay ahead of market shifts and ensure long-term organizational relevance. ... What is of equal importance is building an organizational architecture that has resources trained on emerging technologies and skills. Investing in continuous learning and upskilling ensures IT teams can adapt to technological advancements and can take advantage of those skills for organizations to stay relevant and competitive. Leaders must also ensure they are attracting and retaining top tech talent which is critical to sustaining innovation. 


Breaking the cloud monopoly

Data control has emerged as a leading pain point for enterprises using hyperscalers. Businesses that store critical data that powers their processes, compliance efforts, and customer services on hyperscaler platforms lack easy, on-demand access to it. Many hyperscaler providers enforce limits or lack full data portability, an issue compounded by vendor lock-in or the perception of it. SaaS services have notoriously opaque data retrieval processes that make it challenging to migrate to another platform or repurpose data for new solutions. Organizations are also realizing the intrinsic value of keeping data closer to home. Real-time data processing is critical to running operations efficiently in finance, healthcare, and manufacturing. Some AI tools require rapid access to locally stored data, and being dependent on hyperscaler APIs—or integrations—creates a bottleneck. Meanwhile, compliance requirements in regions with strict privacy laws, such as the European Union, dictate stricter data sovereignty strategies. With the rise of AI, companies recognize the opportunity to leverage AI agents that work directly with local data. Unlike traditional SaaS-based AI systems that must transmit data to the cloud for processing, local-first systems can operate within organizational firewalls and maintain complete control over sensitive information. This solves both the compliance and speed issues.

Humility is a superpower. Here’s how to practice it daily

There’s a concept called epistemic humility, which refers to a trait where you seek to learn on a deep level while actively acknowledging how much you don’t know. Approach each interaction with curiosity, an open mind, and an assumption you’ll learn something new. Ask thoughtful questions about other’s experiences, perspectives, and expertise. Then listen and show your genuine interest in their responses. Let them know what you just learned. By consistently being curious, you demonstrate you’re not above learning from others. Juan, a successful entrepreneur in the healthy beverage space, approaches life and grows his business with intellectual humility. He’s a deeply curious professional who seeks feedback and perspectives from customers, employees, advisers, and investors. Juan’s ongoing openness to learning led him to adapt faster to market changes in his beverage category: He quickly identifies shifting customer preferences as well as competitive threats, then rapidly tweaks his product offerings to keep competitors at bay. He has the humility to realize he doesn’t have all the answers and embraces listening to key voices that help make his business even more successful. ... Humility isn’t about diminishing oneself. It’s about having a balanced perspective about yourself while showing genuine respect and appreciation for others. 


AI took a huge leap in IQ, and now a quarter of Gen Z thinks AI is conscious

If you came of age during a pandemic when most conversations were mediated through screens, an AI companion probably doesn't feel very different from a Zoom class. So it’s maybe not a shock that, according to EduBirdie, nearly 70% of Gen Zers say “please” and “thank you” when talking to AI. Two-thirds of them use AI regularly for work communication, and 40% use it to write emails. A quarter use it to finesse awkward Slack replies, with nearly 20% sharing sensitive workplace information, such as contracts and colleagues’ personal details. Many of those surveyed rely on AI for various social situations, ranging from asking for days off to simply saying no. One in eight already talk to AI about workplace drama, and one in six have used AI as a therapist. ... But intelligence is not the same thing as consciousness. IQ scores don’t mean self-awareness. You can score a perfect 160 on a logic test and still be a toaster, if your circuits are wired that way. AI can only think in the sense that it can solve problems using programmed reasoning. You might say that I'm no different, just with meat, not circuits. But that would hurt my feelings, something you don't have to worry about with any current AI product. Maybe that will change someday, even someday soon. I doubt it, but I'm open to being proven wrong. 


How AI-driven development tools impact software observability

While AI routines have proven quite effective at taking real user monitoring traffic, generating a suite of possible tests and synthetic test data, and automating test runs on each pull request, any such system still requires humans who understand the intended business outcomes to use observability and regression testing tools to look for unintended consequences of change. “So the system just doesn’t behave well,” Puranik said. “So you fix it up with some prompt engineering. Or maybe you try a new model, to see if it improves things. But in the course of fixing that problem, you did not regress something that was already working. That’s the very nature of working with these AI systems right now — fixing one thing can often screw up something else where you didn’t know to look for it.” ... Even when developing with AI tools, added Hao Yang, head of AI at Splunk, “we’ve always relied on human gatekeepers to ensure performance. Now, with agentic AI, teams are finally automating some tasks, and taking the human out of the loop. But it’s not like engineers don’t care. They still need to monitor more, and know what an anomaly is, and the AI needs to give humans the ability to take back control. It will put security and observability back at the top of the list of critical features.”


The Future of Database Administration: Embracing AI, Cloud, and Automation

The office of the DBA has been that of storage management, backup, and performance fault resolution. Now, DBAs have no choice but to be involved in strategy initiatives since most of their work has been automated. For the last five years, organizations with structured workload management and automation frameworks in place have reported about 47% less time on routine maintenance. ... Enterprises are using multiple cloud platforms, making it necessary for DBAs to physically manage data consistency, security, and performance with varied environments. Concordant processes for deployment and infrastructure-as-code (IaC) tools have diminished many configuration errors, thus improving security. Also, the rise of demand for edge computing has driven the need for distributed database architectures. Such solutions allow organizations to process data near the source itself, which curtails latency during real-time decision-making from sectors such as healthcare and manufacturing. ... The future of database administration implies self-managing and AI-driven databases. These intelligent systems optimize performance, enforce security policies, and carry out upgrades autonomously, leading to a reduction in administrative burdens. Serverless databases, automatic scaling, and operating under a pay-per-query model are increasingly popular, providing organizations with the chance to optimize costs while ensuring efficiency. 


Introduction to Apache Kylin

Apache Kylin is an open-source OLAP engine built to bring sub-second query performance to massive datasets. Originally developed by eBay and later donated to the Apache Software Foundation, Kylin has grown into a widely adopted tool for big data analytics, particularly in environments dealing with trillions of records across complex pipelines. ... Another strength is Kylin’s unified big data warehouse architecture. It integrates natively with the Hadoop ecosystem and data lake platforms, making it a solid fit for organizations already invested in distributed storage. For visualization and business reporting, Kylin integrates seamlessly with tools like Tableau, Superset, and Power BI. It exposes query interfaces that allow us to explore data without needing to understand the underlying complexity. ... At the heart of Kylin is its data model, which is built using star or snowflake schemas to define the relationships between the underlying data tables. In this structure, we define dimensions, which are the perspectives or categories we want to analyze (like region, product, or time). Alongside them are measures, and aggregated numerical values such as total sales or average price. ... To achieve its speed, Kylin heavily relies on pre-computation. It builds indexes (also known as CUBEs) that aggregate data ahead of time based on the model dimensions and measures. 

Daily Tech Digest - January 23, 2025


Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham


Cyber Insights 2025: APIs – The Threat Continues

APIs are easily written, often with low-code / no-code tools. They are often considered by the developer as unimportant in comparison to the apps they connect, and probably protected by the tools that protect the apps. Bad call. “API attacks will increase in 2025 due to this over-reliance on existing application security and API management tools, but also due to organizations dragging their heels when it comes to protecting APIs,” says James Sherlow, systems engineering director of EMEA at Cequence Security. “While there was plenty of motivation to roll out APIs to stand up new services and support revenue streams, the same incentives are not there when it comes to protecting them.” Meanwhile, attackers are becoming increasingly sophisticated in their attacks. “In contrast, threat actors are not resting on their laurels,” he continued. “It’s now not uncommon for them to use multi-faceted attacks that seek to evade detection and then dodge and feint when the attack is blocked, all the time waiting until the last minute to target their end goal.” In short, he says, “It’s not until the business is breached that it wakes up to the fact that API protection and application protection are not one and the same thing. Web Application Firewalls, Content Delivery Networks, and API Gateways do not adequately protect APIs.”


Box-Checking or Behavior-Changing? Training That Matters

The pressure to meet these requirements is intense, and when a company finds an “acceptable” solution, they too often just check the box knowing they are compliant and stick with that solution in perpetuity - whether it creates a more secure workplace and behavioral change or not. Training programs designed purely to meet regulations are rarely effective. These initiatives tend to rely on generic content that employees skim through and forget. Organizations may meet the legal standard, but they fail to address the root causes of risky behavior. ... To improve outcomes, training programs must connect with people on a more practical level. Tailoring the content to fit specific roles within the organization is one way to do this. The threats a finance team faces, for example, are different from those encountered by IT professionals, so their training should reflect those differences. When employees see the relevance of the material, they are more likely to engage with it. Professionals in security awareness roles can distinguish themselves by designing programs that meet these needs. Equally important is embracing the concept of continuous learning. Annual training sessions often fail to stick. Smaller, ongoing lessons delivered throughout the year help employees retain information and incorporate it into their daily routines. 


OpenAI opposes data deletion demand in India citing US legal constraints

OpenAI has informed the Delhi High Court that any directive requiring it to delete training data used for ChatGPT would conflict with its legal obligations under US law. The statement came in response to a copyright lawsuit filed by the Reuters-backed Indian news agency ANI, marking a pivotal development in one of the first major AI-related legal battles in India. ... This case mirrors global legal trends, as OpenAI faces similar lawsuits in the United States and beyond, including from major organizations like The New York Times. OpenAI maintains its position that it adheres to the “fair use” doctrine, leveraging publicly available data to train its AI systems without infringing intellectual property laws. In the case of Raw Story Media v. OpenAI, heard in the Southern District of New York, the plaintiffs accused OpenAI of violating the Digital Millennium Copyright Act (DMCA) by stripping copyright management information (CMI) from their articles before using them to train ChatGPT. ... In the ANI v OpenAI case, the Delhi High Court has framed four key issues for adjudication, including whether using copyrighted material for training AI models constitutes infringement and whether Indian courts have jurisdiction over a US-based company. Nath’s view aligns with broader concerns over how existing legal frameworks struggle to keep pace with AI advancements.


Defense strategies to counter escalating hybrid attacks

Threat actor profiling plays a pivotal role in uncovering hybrid operations by going beyond surface-level indicators and examining deeper contextual elements. Profiling involves a thorough analysis of the actor’s history, their strategic objectives, and their operational behaviors across campaigns. For example, understanding the geopolitical implications of a ransomware attack targeting a defense contractor can reveal espionage motives cloaked in financial crime. Profiling allows researchers to differentiate between purely financial motivations and state-sponsored objectives masked as criminal operations. Hybrid actors often leave “behavioral fingerprints” – unique combinations of techniques and infrastructure reuse – that, when analyzed within the context of their history, can expose their true intentions. ... Threat intelligence feeds enriched with historical data can help correlate real-time events with known threat actor profiles. Additionally, implementing deception techniques, such as industry-specific honeypots, can reveal operational objectives and distinguish between actors based on their response to decoys. ... Organizations must adapt by adopting a defense-in-depth strategy that combines proactive threat hunting, continuous monitoring, and incident response preparedness.


4 Cybersecurity Misconceptions to Leave Behind in 2025

Workers need to avoid falling into a false sense of security, and organizations must ensure that they are frequently updating advice and strategies to reduce the likelihood of their employees falling victim. In addition, we found that this confidence doesn’t necessarily translate into action. A notable portion of those surveyed (29%) admit that they don’t report suspicious messages even when they do identify a phishing scam, despite the presence of convenient reporting tools like “report phishing” buttons. ... Our second misconception stems from workers’ sense of helplessness. This kind of cyber apathy can become a dangerous self-fulfilling prophecy if left unaddressed. The key problem is that even if it’s true that information is already online, this isn’t equivalent to being directly under threat, and there are different levels of risk. It’s one thing knowing someone has your home address; knowing they have your front door key in their pocket is quite another. Even if it’s hard to keep all of your data hidden, that doesn’t mean it’s not worth taking steps to keep key information protected. While it can seem impossible to stay safe when so much personal data is publicly available, this should be the impetus to bolster cybersecurity practices, such as not including personal information in passwords.


Real datacenter emissions are a dirty secret

With legislation such as the EU's Corporate Sustainability Reporting Directive (CSRD) now in force, customers and resellers alike are expecting more detailed carbon emissions reporting across all three Scopes from suppliers and vendors, according to Canalys. This expectation of transparency is increasingly important in vendor selection processes because customers need their vendors to share specific numbers to quantify the environmental impact of their cloud usage. "AWS has continued to fall behind its competitors here by not providing Scope 3 emissions data via its Customer Carbon Footprint Tool, which is still unavailable," Caddy claimed. "This issue has frustrated sustainability-focused customers and partners alike for years now, but as companies prepare for CSRD disclosure, this lack of granular emissions disclosure from AWS can create compliance challenges for EU-based AWS customers." We asked Amazon why it doesn't break out the emissions data for AWS separately from its other operations, but while the company confirmed this is so, it declined to offer an explanation. Neither did Microsoft nor Google. In a statement, an AWS spokesperson told us: "We continue to publish a detailed, transparent report of our year-on-year progress decarbonizing our operations, including across our datacenters, in our Sustainability Report. 


5 hot network trends for 2025

AI will generate new levels of network traffic, new requirements for low latency, and new layers of complexity. The saving grace, for network operators, is AIOps – the use of AI to optimize and automate network processes. “The integration of artificial intelligence (AI) into IT operations (ITOps) is becoming indispensable,” says Forrester analyst Carlos Casanova. “AIOps provides real-time contextualization and insights across the IT estate, ensuring that network infrastructure operates at peak efficiency in serving business needs.” ... AIOps can deliver proactive issue resolution, it plays a crucial role in embedding zero trust into networks by detecting and mitigating threats in real time, and it can help network execs reach the Holy Grail of “self-managing, self-healing networks that could adapt to changing conditions and demands with minimal human intervention.” ... Industry veteran Zeus Kerravala predicts that 2025 will be the year that Ethernet becomes the protocol of choice for AI-based networking. “There is currently a holy war regarding InfiniBand versus Ethernet for networking for AI with InfiniBand having taken the early lead,” Kerravala says. Ethernet has seen tremendous advancements over the last few years, and its performance is now on par with InfiniBand, he says, citing a recent test conducted by World Wide Technology. 


Building the Backbone of AI: Why Infrastructure Matters in the Race for Adoption

One of the primary challenges facing businesses when it comes to AI is having the foundational infrastructure to make it work. Depending on the use case, AI can be an incredibly demanding technology. Some algorithmic AI workloads use real-time inference, which will grossly underperform without a direct, high bandwidth, low-latency connection. ... An organization’s path to the cloud is really the central pillar of any successful AI strategy. The sheer scale at which organizations are harvesting and using data means that storing every piece of information on-premises is simply no longer viable. Instead, cloud-based data lakes and warehouses are now commonly used to store data, and having streamlined access to this data is essential. But this shift isn’t just about scale or storage – it’s about capability. AI models, particularly those requiring intensive training, often reside in the cloud, where hyperscalers can offer the power density and GPU capabilities that on-premises data centers typically cannot support. Choosing the right cloud provider in this context is of course vital, but the real game-changer lies not in the who of connectivity, but the how. Relying on the public internet for cloud access creates bottlenecks and risks, with unpredictable routes, variable latency, and compromised security.


Why all developers should adopt a safety-critical mindset

Safety-critical industries don’t just rely on reactive measures; they also invest heavily in proactive defenses. Defensive programming is a key practice here, emphasizing robust input validation, error handling, and preparation for edge cases. This same mindset can be invaluable in non-critical software development. A simple input error could crash a service if not properly handled—building systems with this in mind ensures you’re always anticipating the unexpected. Rigorous testing should also be a norm, and not just unit tests. While unit testing is valuable, it's important to go beyond that, testing real-world edge cases and boundary conditions. Consider fault injection testing, where specific failures are introduced (e.g., dropped packets, corrupted data, or unavailable resources) to observe how the system reacts. These methods complement stress testing under maximum load and simulations of network outages, offering a clearer picture of system resilience. Validating how your software handles external failures will build more confidence in your code. Graceful degradation is another principle worth adopting. If a system does fail, it should fail in a way that’s safe and understandable. For example, an online payment system might temporarily disable credit card processing but allow users to save items in their cart or check account details.


Strengthening Software Supply Chains with Dependency Management

Organizations must prioritize proactive dependency management, high-quality component selection and vigilance against vulnerabilities to mitigate escalating risks. A Software Bill of Materials (SBOM) is an essential tool in this approach, as it offers a comprehensive inventory of all software components, enabling organizations to quickly identify and address vulnerabilities across their dependencies. In fact, projects that implement an SBOM to manage open source software dependencies demonstrate a 264-day reduction in the time taken to fix vulnerabilities compared to those that do not. SBOMs provide a comprehensive list of every component within the software, enabling quicker response times to threats and bolstering overall security. However, despite the rise in SBOM usage, it is not keeping pace with the influx of new components being created, highlighting the need for enhanced automation, tooling and support for open source maintainers. ... This complacency — characterized by a false sense of security — accumulates risks that threaten the integrity of software supply chains. The rise of open source malware further complicates the landscape, as attackers exploit poor dependency management.