
Daily Tech Digest - July 27, 2025


Quote for the day:

"The only way to do great work is to love what you do." -- Steve Jobs


Amazon AI coding agent hacked to inject data wiping commands

The hacker gained access to Amazon’s repository after submitting a pull request from a random account, likely due to workflow misconfiguration or inadequate permission management by the project maintainers. ... On July 23, Amazon received reports from security researchers that something was wrong with the extension, and the company started to investigate. The next day, AWS released a clean version, Q 1.85.0, which removed the unapproved code. “AWS is aware of and has addressed an issue in the Amazon Q Developer Extension for Visual Studio Code (VSC). Security researchers reported a potential for unapproved code modification,” reads the security bulletin. “AWS Security subsequently identified a code commit through a deeper forensic analysis in the open-source VSC extension that targeted Q Developer CLI command execution.” “After which, we immediately revoked and replaced the credentials, removed the unapproved code from the codebase, and subsequently released Amazon Q Developer Extension version 1.85.0 to the marketplace.” AWS assured users that there was no risk from the previous release because the malicious code was incorrectly formatted and wouldn’t run on their environments.


How to migrate enterprise databases and data to the cloud

Migrating data is only part of the challenge; database structures, stored procedures, triggers and other code must also be moved. In this part of the process, IT leaders must identify and select migration tools that address the specific needs of the enterprise, especially if they’re moving between different database technologies (heterogeneous migration). Some things they’ll need to consider are: compatibility, transformation requirements and the ability to automate repetitive tasks.  ... During migration, especially for large or critical systems, IT leaders should keep their on-premises and cloud databases synchronized to avoid downtime and data loss. To help facilitate this, select synchronization tools that can handle the data change rates and business requirements. And be sure to test these tools in advance: High rates of change or complex data relationships can overwhelm some solutions, making parallel runs or phased cutovers unfeasible. ... Testing is a safety net. IT leaders should develop comprehensive test plans that cover not just technical functionality, but also performance, data integrity and user acceptance. Leaders should also plan for parallel runs, operating both on-premises and cloud systems in tandem, to validate that everything works as expected before the final cutover. They should engage end users early in the process in order to ensure the migrated environment meets business needs.
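
As a rough illustration of the parallel-run validation described above, the sketch below compares row counts between a source and a target database before cutover. It is a minimal sketch only: the table name and databases are placeholders, and in-memory SQLite stands in for the real on-premises and cloud systems.

import sqlite3

# Parallel-run validation sketch: compare row counts between a source
# ("on-prem") and a target ("cloud") database before cutover. sqlite3 is
# used here only as a self-contained stand-in for the real databases.

def compare_row_counts(source_conn, target_conn, tables):
    """Return {table: (source_count, target_count, match)} for each table."""
    results = {}
    for table in tables:
        src = source_conn.cursor().execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        tgt = target_conn.cursor().execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        results[table] = (src, tgt, src == tgt)
    return results

if __name__ == "__main__":
    source, target = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    for conn in (source, target):
        conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
        conn.executemany("INSERT INTO customers (name) VALUES (?)",
                         [("cust-%d" % i,) for i in range(3)])
    print(compare_row_counts(source, target, ["customers"]))

In practice the same check would be extended with per-column checksums and sampled row comparisons, but even a count comparison run on a schedule during the parallel run catches drift early.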


Researchers build first chip combining electronics, photonics, and quantum light

The new chip integrates quantum light sources and electronic controllers using a standard 45-nanometer semiconductor process. This approach paves the way for scaling up quantum systems in computing, communication, and sensing, fields that have traditionally relied on hand-built devices confined to laboratory settings. "Quantum computing, communication, and sensing are on a decades-long path from concept to reality," said Miloš Popović, associate professor of electrical and computer engineering at Boston University and a senior author of the study. "This is a small step on that path – but an important one, because it shows we can build repeatable, controllable quantum systems in commercial semiconductor foundries." ... "What excites me most is that we embedded the control directly on-chip – stabilizing a quantum process in real time," says Anirudh Ramesh, a PhD student at Northwestern who led the quantum measurements. "That's a critical step toward scalable quantum systems." This focus on stabilization is essential to ensure that each light source performs reliably under varying conditions. Imbert Wang, a doctoral student at Boston University specializing in photonic device design, highlighted the technical complexity.


Product Manager vs. Product Owner: Why Teams Get These Roles Wrong

While PMs work on the strategic plane, Product Owners anchor delivery. The PO is the guardian of the backlog. They translate the product strategy into epics and user stories, groom the backlog, and support the development team during sprints. They don’t just manage the “what” — they deeply understand the “how.” They answer developer questions, clarify scope, and constantly re-evaluate priorities based on real-time feedback. In Agile teams, they play a central role in turning strategic vision into working software. Where PMs answer to the business, POs are embedded with the dev team. They make trade-offs, adjust scope, and ensure the product is built right. ... Some products need to grow fast. That’s where Growth PMs come in. They focus on the entire user lifecycle, often structured using the PIRAT funnel: Problem, Insight, Reach, Activation, and Trust (a modern take on traditional Pirate Metrics, such as Acquisition, Activation, Retention, Referral, and Revenue). This model guides Growth PMs in identifying where user friction occurs and what levers to pull for meaningful impact. They conduct experiments, optimize funnels, and collaborate closely with marketing and data science teams to drive user growth. 


Ransomware payments to be banned – the unanswered questions

With thresholds in place, businesses/organisations may choose to operate differently so that they aren’t covered by the ban, such as lowering turnover or number of employees. All of this said, rules like this could help to get a better picture of what’s going on with ransomware threats in the UK. Arda Büyükkaya, senior cyber threat intelligence analyst at EclecticIQ, explains more: “As attackers evolve their tactics and exploit vulnerabilities across sectors, timely intelligence-sharing becomes critical to mounting an effective defence. Encouraging businesses to report incidents more consistently will help build a stronger national threat intelligence picture, something that’s important as these attacks grow more frequent and sophisticated. To spare any confusion, the government should provide sector-specific guidance on how resources should be implemented, making them clear and accessible. “Many victims still hesitate to come forward due to concerns around reputational damage, legal exposure, or regulatory fallout,” said Büyükkaya. “Without mechanisms that protect and support victims, underreporting will remain a barrier to national cyber resilience.” Especially in the earlier days of the legislation, organisations may still feel pressured to pay in order to keep operations running, even if they’re banned from doing so.


AI Unleashed: Shaping the Future of Cyber Threats

AI optimizes reconnaissance and targeting, giving hackers the tools to scour public sources, leaked and publicly available breach data, and social media to build detailed profiles of potential targets in minutes. This enhanced data gathering lets attackers identify high-value victims and network vulnerabilities with unprecedented speed and accuracy. AI has also supercharged phishing campaigns by automatically crafting phishing emails and messages that mimic an organization’s formatting and reference real projects or colleagues, making them nearly indistinguishable from genuine human-originated communications. ... AI is also being weaponized to write and adapt malicious code. AI-powered malware can autonomously modify itself to slip past signature-based antivirus defenses, probe for weaknesses, select optimal exploits, and manage its own command-and-control decisions. Security experts note that AI accelerates the malware development cycle, reducing the time from concept to deployment. ... AI presents more than external threats. It has exposed a new category of targets and vulnerabilities, as many organizations now rely on AI models for critical functions, such as authentication systems and network monitoring. These AI systems themselves can be manipulated or sabotaged by adversaries if proper safeguards have not been implemented.


Agile and Quality Engineering: Building a Culture of Excellence Through a Holistic Approach

Agile development relies on rapid iteration and frequent delivery, and this rhythm demands fast, accurate feedback on code quality, functionality, and performance. With continuous testing integrated into automated pipelines, teams receive near real-time feedback on every code commit. This immediacy empowers developers to make informed decisions quickly, reducing delays caused by waiting for manual test cycles or late-stage QA validations. Quality engineering also enhances collaboration between developers and testers. In a traditional setup, QA and development operate in silos, often leading to communication gaps, delays, and conflicting priorities. In contrast, QE promotes a culture of shared ownership, where developers write unit tests, testers contribute to automation frameworks, and both parties work together during planning, development, and retrospectives. This collaboration strengthens mutual accountability and leads to better alignment on requirements, acceptance criteria, and customer expectations. Early and continuous risk mitigation is another cornerstone benefit. By incorporating practices like shift-left testing, test-driven development (TDD), and continuous integration (CI), potential issues are identified and resolved long before they escalate. 
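
A minimal sketch of the shift-left/TDD loop described above, assuming pytest as the test runner in the CI pipeline; the discount function and its acceptance criteria are illustrative, not taken from the article.

# Shift-left/TDD sketch: tests are written against the acceptance criterion
# first, and the implementation is refined until they pass. Running `pytest`
# on every commit in the CI pipeline provides the near real-time feedback
# described above.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by `percent`, rejecting invalid rates."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_happy_path():
    assert apply_discount(100.0, 15) == 85.0

def test_apply_discount_rejects_bad_input():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)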


Could Metasurfaces be The Next Quantum Information Processors?

Broadly speaking, the work embodies metasurface-based quantum optics which, beyond carving a path toward room-temperature quantum computers and networks, could also benefit quantum sensing or offer “lab-on-a-chip” capabilities for fundamental science. Designing a single metasurface that can finely control properties like brightness, phase, and polarization presented unique challenges because of the mathematical complexity that arises once the number of photons and therefore the number of qubits begins to increase. Every additional photon introduces many new interference pathways, which in a conventional setup would require a rapidly growing number of beam splitters and output ports. To bring order to the complexity, the researchers leaned on a branch of mathematics called graph theory, which uses points and lines to represent connections and relationships. By representing entangled photon states as many connected lines and points, they were able to visually determine how photons interfere with each other, and to predict their effects in experiments. Graph theory is also used in certain types of quantum computing and quantum error correction but is not typically considered in the context of metasurfaces, including their design and operation. The resulting paper was a collaboration with the lab of Marko Loncar, whose team specializes in quantum optics and integrated photonics and provided needed expertise and equipment.


New AI architecture delivers 100x faster reasoning than LLMs with just 1,000 training examples

When faced with a complex problem, current LLMs largely rely on chain-of-thought (CoT) prompting, breaking down problems into intermediate text-based steps, essentially forcing the model to “think out loud” as it works toward a solution. While CoT has improved the reasoning abilities of LLMs, it has fundamental limitations. In their paper, researchers at Sapient Intelligence argue that “CoT for reasoning is a crutch, not a satisfactory solution. It relies on brittle, human-defined decompositions where a single misstep or a misorder of the steps can derail the reasoning process entirely.” ... To move beyond CoT, the researchers explored “latent reasoning,” where instead of generating “thinking tokens,” the model reasons in its internal, abstract representation of the problem. This is more aligned with how humans think; as the paper states, “the brain sustains lengthy, coherent chains of reasoning with remarkable efficiency in a latent space, without constant translation back to language.” However, achieving this level of deep, internal reasoning in AI is challenging. Simply stacking more layers in a deep learning model often leads to a “vanishing gradient” problem, where learning signals weaken across layers, making training ineffective. 
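
The vanishing-gradient point can be made concrete with a toy calculation, illustrative only and not taken from the paper: in a deep chain of sigmoid layers the backpropagated signal is roughly a product of per-layer derivatives, each at most 0.25, so it shrinks geometrically with depth.

import math

# Toy illustration of the vanishing-gradient problem: the sigmoid's
# derivative never exceeds 0.25, so a product of 50 of them is tiny.
def sigmoid_derivative(x: float) -> float:
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

grad = 1.0
for _ in range(50):                      # 50 stacked layers
    grad *= sigmoid_derivative(0.0)      # 0.25 at the sigmoid's steepest point
print(f"surviving gradient after 50 layers: {grad:.3e}")   # ~7.9e-31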


For the love of all things holy, please stop treating RAID storage as a backup

Although RAID is a backup by definition, practically, a backup doesn't look anything like a RAID array. That's because an ideal backup is offsite. It's not on your computer, and ideally, it's not even in the same physical location. Remember, RAID is a warranty, and a backup is insurance. RAID protects you from inevitable failure, while a backup protects you from unforeseen failure. Eventually, your drives will fail, and you'll need to replace disks in your RAID array. This is part of routine maintenance, and if you're operating an array for long enough, you should probably have drive swaps on a schedule of several years to keep everything operating smoothly. A backup will protect you from everything else. Maybe you have multiple drives fail at once. A backup will protect you. Lord forbid you fall victim to a fire, flood, or other natural disaster and your RAID array is lost or damaged in the process. A backup still protects you. It doesn't need to be a fire or flood for you to get use out of a backup. There are small issues that could put your data at risk, such as your PC being infected with malware, or trying to write (and replicate) corrupted data. You can dream up just about any situation where data loss is a risk, and a backup will be able to get your data back in situations where RAID can't. 
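
For readers who want the offsite piece in practice, here is a minimal sketch assuming rsync over SSH is available on both ends; the host, user, and paths are placeholders. Note that --delete mirrors deletions to the destination, so versioned snapshots on the backup target are still worth keeping for protection against accidental deletes and malware.

# Minimal offsite-backup sketch to complement (not replace) a RAID array.
# Host, user, and paths below are placeholders.
import subprocess

SOURCE = "/data/"                                        # local (possibly RAID-backed) volume
DESTINATION = "backupuser@offsite-host:/backups/data/"   # different physical location

def run_offsite_backup() -> None:
    subprocess.run(
        ["rsync", "-az", "--delete", SOURCE, DESTINATION],
        check=True,
    )

if __name__ == "__main__":
    run_offsite_backup()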

Daily Tech Digest - July 26, 2025


Quote for the day:

"Small daily improvements over time lead to stunning results." -- Robin Sharma


Data Engineering in the Age of AI: Skills To Master Now

Streaming requires a new mindset. You must reason about event time compared to processing time, manage watermarking and windowing and guarantee exactly-once semantics even when things change midstream. These design patterns must be built into your pipelines from the beginning. ... Agentic AI stretches the typical data engineer’s streaming data skill set because it is no longer about a single model running in isolation. Today, we see networks of perception agents, reasoning agents and execution agents working together, each handling tasks and passing insights to the next in real time. If you know only how to schedule batch ETL jobs or deploy an inference server, you’re missing a core skill: how to build high-throughput, low-latency pipelines that keep these agents reliable and responsive in production. ... A single slow or broken stream can cause cascading failures in multiagent systems. Use schema registries, enforce data contracts and apply exactly-once semantics to maintain trust in your streaming infrastructure. ... Communication presents another challenge. Data scientists often discuss “precision” as a metric that data engineers must translate into reality. Implement evaluation scores like factual consistency checks, entity precision comparisons and human-in-the-loop review pipelines.
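
As a framework-free illustration of those concepts, the sketch below groups out-of-order events into one-minute tumbling windows by event time and only closes a window once the watermark (here, the maximum event time seen minus an allowed lateness) passes its end. The window sizes, lateness, and events are illustrative; no streaming framework is assumed.

# Plain-Python sketch of event-time tumbling windows with a watermark.
from collections import defaultdict

WINDOW_SECONDS = 60
ALLOWED_LATENESS = 10

windows = defaultdict(list)   # window_start -> buffered events
watermark = 0

def on_event(event_time: int, payload: str):
    global watermark
    window_start = (event_time // WINDOW_SECONDS) * WINDOW_SECONDS
    windows[window_start].append(payload)
    watermark = max(watermark, event_time - ALLOWED_LATENESS)
    # Emit (and drop) every window whose end the watermark has passed.
    for start in [s for s in windows if s + WINDOW_SECONDS <= watermark]:
        print(f"window [{start}, {start + WINDOW_SECONDS}) -> {windows.pop(start)}")

# Events may arrive out of order; they are still grouped by event time.
for t, p in [(5, "a"), (62, "b"), (40, "c"), (130, "d"), (200, "e")]:
    on_event(t, p)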


Anthropic unveils ‘auditing agents’ to test for AI misalignment

Anthropic said the first environment it developed tests an agent’s ability to complete an alignment audit for an intentionally misaligned model. For this test, the researchers brought the investigator agent to the environment and equipped it with tools similar to those human teams use. The researchers instructed the agent to identify the root cause of the misalignment and determine the specific behaviors that were misaligned. According to the researchers, the investigator agent successfully found the root cause of the issues 10-13% of the time. Still, this performance improved to 42% “when using a super-agent approach that aggregates findings across multiple investigations.” ... The next step in most alignment audits is typically a behavioral evaluation, which determines whether the model consistently misbehaves in various settings. To test its evaluation agent, the researchers designed 10 models “given system prompt instructions to exhibit a specific behavior, such as excessive deference to the user.” They ran the agent five times per model and found that it correctly identified and flagged at least one quirk of the model. However, the agent sometimes failed to identify specific quirks consistently. It had trouble evaluating subtle quirks, such as self-promotion and research-sandbagging, as well as quirks that are difficult to elicit, like the Hardcode Test Cases quirk.


The agentic experience: Is MCP the right tool for your AI future?

As enterprises race to operationalize AI, the challenge isn't only about building and deploying large language models (LLMs); it's also about integrating them seamlessly into existing API ecosystems while maintaining enterprise-level security, governance, and compliance. Apigee is committed to leading you on this journey. Apigee streamlines the integration of gen AI agents into applications by bolstering their security, scalability, and governance. While the Model Context Protocol (MCP) has emerged as a de facto method of integrating discrete APIs as tools, the journey of turning your APIs into these agentic tools is broader than a single protocol. This post highlights the critical role of your existing API programs in this evolution and how ... Leveraging MCP services across a network requires specific security constraints. Perhaps you would like to add authentication to your MCP server itself. Once you’ve authenticated calls to the MCP server, you may want to authorize access to certain tools depending on the consuming application. You may want to provide first-class observability information to track which tools are being used and by whom. Finally, you may want to ensure that whatever downstream APIs your MCP server is supplying tools for also have the minimum security guarantees already outlined above.
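
Those constraints can be sketched independently of any particular gateway or SDK. The decorator below is a hypothetical illustration, not the MCP SDK's or Apigee's API: it authenticates the calling application by token, authorizes it per tool, and logs which tool was used and by whom. Token values, tool names, and the registry layout are invented for the example.

# Hypothetical per-tool authentication/authorization and observability
# wrapper for an MCP-style tool server. Not a real SDK API.
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO)

API_TOKENS = {"demo-token-123": "billing-app"}     # token -> calling app
TOOL_ACLS = {"get_invoice": {"billing-app"}}       # tool -> allowed apps

def secured_tool(tool_name: str):
    def decorator(func):
        @wraps(func)
        def wrapper(token: str, *args, **kwargs):
            app = API_TOKENS.get(token)
            if app is None:
                raise PermissionError("unauthenticated caller")
            if app not in TOOL_ACLS.get(tool_name, set()):
                raise PermissionError(f"{app} may not call {tool_name}")
            logging.info("tool=%s caller=%s", tool_name, app)  # observability
            return func(*args, **kwargs)
        return wrapper
    return decorator

@secured_tool("get_invoice")
def get_invoice(invoice_id: str) -> dict:
    return {"id": invoice_id, "amount": 42.0}

print(get_invoice("demo-token-123", "INV-1"))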


AI Innovation: 4 Steps For Enterprises To Gain Competitive Advantage

A skill is a single ability, such as the ability to write a message or analyze a spreadsheet and trigger actions from that analysis. An agent independently handles complex, multi-step processes to produce a measurable outcome. We recently announced an expanded network of Joule Agents to help foster autonomous collaboration across systems and lines of business. This includes out-of-the-box agents for HR, finance, supply chain, and other functions that companies can deploy quickly to help automate critical workflows. AI front-runners, such as Ericsson, Team Liquid, and Cirque du Soleil, also create customized agents that can tackle specific opportunities for process improvement. Now you can build them with Joule Studio, which provides a low-code workspace to help design, orchestrate, and manage custom agents using pre-defined skills, models, and data connections. This can give you the power to extend and tailor your agent network to your exact needs and business context. ... Another way to become an AI front-runner is to tackle fragmented tools and solutions by putting in place an open, interoperable ecosystem. After all, what good is an innovative AI tool if it runs into blockers when it encounters your other first- and third-party solutions? 


Hard lessons from a chaotic transformation

The most difficult part of this transformation wasn’t the technology but getting people to collaborate in new ways, which required a greater focus on stakeholder alignment and change management. So my colleague first established a strong governance structure. A steering committee with leaders from key functions like IT, operations, finance, and merchandising met biweekly to review progress and resolve conflicts. This wasn’t a token committee, but a body with authority. If there were any issues with data exchange between marketing and supply chain, they were addressed and resolved during the meetings. By bringing all stakeholders together, we were also able to identify discrepancies early on. For example, when we discovered a new feature in the inventory system could slow down employee workflows, the operations manager reported it, and we immediately adjusted the rollout plan. Previously, such issues might not have been identified until after the full rollout and subsequent finger-pointing between IT and business departments. The next step was to focus on communication and culture. From previous failed projects, we knew that sending a few emails wasn’t enough, so we tried a more personal approach. We identified influential employees in each department and recruited them as change champions.


Benchmarks for AI in Software Engineering

HumanEval and SWE-bench have taken hold in the ML community, and yet, as indicated above, neither is necessarily reflective of LLMs’ competence in everyday software engineering tasks. I conjecture one of the reasons is the differences in points of view of the two communities! The ML community prefers large-scale, automatically scored benchmarks, as long as there is a “hill climbing” signal to improve LLMs. The business imperative for LLM makers to compete on popular leaderboards can relegate the broader user experience to a secondary concern. On the other hand, the software engineering community needs benchmarks that capture specific product experiences closely. Because curation is expensive, the scale of these benchmarks is sufficient only to get a reasonable offline signal for the decision at hand (A/B testing is always carried out before a launch). Such benchmarks may also require a complex setup to run, and sometimes are not automated in scoring; but these shortcomings can be acceptable considering a smaller scale. For exactly these reasons, these are not useful to the ML community. Much is lost due to these different points of view. It is an interesting question as to how these communities could collaborate to bridge the gap between scale and meaningfulness and create evals that work well for both communities.


Scientists Use Cryptography To Unlock Secrets of Quantum Advantage

When a quantum computer successfully handles a task that would be practically impossible for current computers, this achievement is referred to as quantum advantage. However, this advantage does not apply to all types of problems, which has led scientists to explore the precise conditions under which it can actually be achieved. While earlier research has outlined several conditions that might allow for quantum advantage, it has remained unclear whether those conditions are truly essential. To help clarify this, researchers at Kyoto University launched a study aimed at identifying both the necessary and sufficient conditions for achieving quantum advantage. Their method draws on tools from both quantum computing and cryptography, creating a bridge between two fields that are often viewed separately. ... “We were able to identify the necessary and sufficient conditions for quantum advantage by proving an equivalence between the existence of quantum advantage and the security of certain quantum cryptographic primitives,” says corresponding author Yuki Shirakawa. The results imply that when quantum advantage does not exist, then the security of almost all cryptographic primitives — previously believed to be secure — is broken. Importantly, these primitives are not limited to quantum cryptography but also include widely-used conventional cryptographic primitives as well as post-quantum ones that are rapidly evolving.


It’s time to stop letting our carbon fear kill tech progress

With increasing social and regulatory pressure, reluctance by a company to reveal emissions is ill-received. For example, in Europe the Corporate Sustainability Reporting Directive (CSRD) currently requires large businesses to publish their emissions and other sustainability datapoints. Opaque sustainability reporting undermines environmental commitments and distorts the reference points necessary for net zero progress. How can organisations work toward a low-carbon future when their measurement tools are incomplete or unreliable? The issue is particularly acute regarding Scope 3 emissions. Scope 3 emissions often account for the largest share of a company’s carbon footprint and are those generated indirectly along the supply chain by a company’s vendors, including emissions from technology infrastructure like data centres. ... It sounds grim, but there is some cause for optimism. Most companies are in a better position than they were five years ago and acknowledge that their measurement capabilities have improved. We need to accelerate the momentum of this progress to ensure real action. Earth Overshoot Day is a reminder that climate reporting for the sake of accountability and compliance only covers the basics. The next step is to use emissions data as benchmarks for real-world progress.


Why Supply Chain Resilience Starts with a Common Data Language

Building resilience isn’t just about buying more tech, it’s about making data more trustworthy, shareable, and actionable. That’s where global data standards play a critical role. The most agile supply chains are built on a shared framework for identifying, capturing, and sharing data. When organizations use consistent product and location identifiers, such as GTINs (Global Trade Item Numbers) and GLNs (Global Location Numbers) respectively, they reduce ambiguity, improve traceability, and eliminate the need for manual data reconciliation. With a common data language in place, businesses can cut through the noise of siloed systems and make faster, more confident decisions. ... Companies further along in their digital transformation can also explore advanced data-sharing standards like EPCIS (Electronic Product Code Information Services) or RFID (radio frequency identification) tagging, particularly in high-volume or high-risk environments. These technologies offer even greater visibility at the item level, enhancing traceability and automation. And the benefits of this kind of visibility extend far beyond trade compliance. Companies that adopt global data standards are significantly more agile. In fact, 58% of companies with full standards adoption say they manage supply chain agility “very well” compared to just 14% among those with no plans to adopt standards, studies show.
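
One concrete reason identifiers like GTINs and GLNs remove ambiguity is that they carry a GS1 check digit, so a corrupted or mistyped code is caught before it propagates through partner systems. A short validation sketch follows; the sample number is a commonly cited EAN-13/GTIN-13 example.

# GS1 check-digit sketch for the identifiers mentioned above. GTIN-8/12/13/14
# and 13-digit GLNs share the same algorithm: weight the data digits 3,1,3,...
# from the right, and the check digit brings the total to a multiple of 10.
def gs1_check_digit(data_digits: str) -> int:
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(data_digits)))
    return (10 - total % 10) % 10

def is_valid_gtin(gtin: str) -> bool:
    return gtin.isdigit() and gs1_check_digit(gtin[:-1]) == int(gtin[-1])

print(is_valid_gtin("4006381333931"))  # True: a well-known GTIN-13 example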


Opinion: The AI bias problem hasn’t gone away you know

When we build autonomous systems and allow them to make decisions for us, we enter a strange world of ethical limbo. A self-driving car forced to make a similar decision to protect the driver or a pedestrian in a case of a potentially fatal crash will have much more time than a human to make its choice. But what factors influence that choice? ... It’s not just the AI systems shaping the narrative, raising some voices while quieting others. Organisations made up of ordinary flesh-and-blood people are doing it too. Irish cognitive scientist Abeba Birhane, a highly-regarded researcher of human behaviour, social systems and responsible and ethical artificial intelligence was asked to give a keynote recently for the AI for Good Global Summit. According to her own reports on Bluesky, a meeting was requested just hours before presenting her keynote: “I went through an intense negotiation with the organisers (for over an hour) where we went through my slides and had to remove anything that mentions ‘Palestine’ ‘Israel’ and replace ‘genocide’ with ‘war crimes’…and a slide that explains illegal data torrenting by Meta, I also had to remove. In the end, it was either remove everything that names names (Big Tech particularly) and remove logos, or cancel my talk.” 

Daily Tech Digest - July 05, 2025


Quote for the day:

“Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through.” -- Jarod Kintz


The Hidden Data Cost: Why Developer Soft Skills Matter More Than You Think

The logic is simple but under-discussed: developers who struggle to communicate with product owners, translate goals into architecture, or anticipate system-wide tradeoffs are more likely to build the wrong thing, need more rework, or get stuck in cycles of iteration that waste time and resources. These are not theoretical risks, they’re quantifiable cost drivers. According to Lumenalta’s findings, organizations that invest in well-rounded senior developers, including soft skill development, see fewer errors, faster time to delivery, and stronger alignment between technical execution and business value. ... The irony? Most organizations already have technically proficient talent in-house. What they lack is the environment to develop those skills that drive high-impact outcomes. Senior developers who think like “chess masters”—a term Lumenalta uses for those who anticipate several moves ahead—can drastically reduce a project’s TCO by mentoring junior talent, catching architecture risks early, and building systems that adapt rather than break under pressure. ... As AI reshapes every layer of tech, developers who can bridge business goals and algorithmic capabilities will become increasingly valuable. It’s not just about knowing how to fine-tune a model, it’s about knowing when not to.


Why AV is an overlooked cybersecurity risk

As cyber attackers become more sophisticated, they’re shifting their attention to overlooked entry points like AV infrastructure. A good example is YouTuber Jim Browning’s infiltration of a scam call center, where he used unsecured CCTV systems to monitor and expose criminals in real time. This highlights the potential for AV vulnerabilities to be exploited for intelligence gathering. To counter these risks, organizations must adopt a more proactive approach. Simulated social engineering and phishing attacks can help assess user awareness and expose vulnerabilities in behavior. These simulations should be backed by ongoing training that equips staff to recognize manipulation tactics and understand the value of security hygiene. ... To mitigate the risks posed by vulnerable AV systems, organizations should take a proactive and layered approach to security. This includes regularly updating device firmware and underlying software packages, which are often left outdated even when new versions are available. Strong password policies should be enforced, particularly on devices running webservers, with security practices aligned to standards like the OWASP Top 10. Physical access to AV infrastructure must also be tightly controlled to prevent unauthorized LAN connections. 


EU Presses for Quantum-Safe Encryption by 2030 as Risks Grow

The push comes amid growing concern about the long-term viability of conventional encryption techniques. Current security protocols rely on complex mathematical problems — such as factoring large numbers — that would take today’s classical computers thousands of years to solve. But quantum computers could potentially crack these systems in a fraction of the time, opening the door to what cybersecurity experts refer to as “store now, decrypt later” attacks. In these attacks, hackers collect encrypted data today with the intention of breaking the encryption once quantum technology matures. Germany’s Federal Office for Information Security (BSI) estimates that conventional encryption could remain secure for another 10 to 20 years in the absence of sudden breakthroughs, The Munich Eye reports. Europol has echoed that forecast, suggesting a 15-year window before current systems might be compromised. While the timeline is uncertain, European authorities agree that proactive planning is essential. PQC is designed to resist attacks from both classical and quantum computers by using algorithms based on different kinds of hard mathematical problems. These newer algorithms are more complex and require different computational strategies than those used in today’s standards like RSA and ECC. 


MongoDB Doubles Down on India's Database Boom

Chawla says MongoDB is helping Indian enterprises move beyond legacy systems through two distinct approaches. "The first one is when customers decide to build a completely new modern application, gradually sunsetting the old legacy application," he explains. "We work closely with them to build these modern systems." ... Despite this fast-paced growth, Chawla points out several lingering myths in India. "A lot of customers still haven't realised that if you want to build a modern application, especially one that's AI-driven, you can't build it on a relational structure," he explains. "Most of the data today is unstructured and messy. So you need a database that can scale, can handle different types of data, and support modern workloads." ... Even those trying to move away from traditional databases often fall into the trap of viewing PostgreSQL as a modern alternative. "PostgreSQL is still relational in nature. It has the same row-and-column limitations and scalability issues." He also adds that if companies want to build a future-proof application, especially one that infuses AI capabilities, they need something that can handle all data types and offers native support for features like full-text search, hybrid search, and vector search. Other NoSQL players such as Redis and Apache Cassandra also have significant traction in India.
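
As a hedged illustration of the vector-search capability Chawla mentions, the snippet below uses MongoDB Atlas's $vectorSearch aggregation stage via PyMongo. The connection string, database, collection, index name, and embedding values are placeholders, and a vector search index on the embedding field must already exist on the Atlas cluster.

# Sketch of an Atlas vector search query (placeholders throughout).
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")
collection = client["shop"]["products"]

query_embedding = [0.12, -0.03, 0.57]  # produced by your embedding model

pipeline = [
    {
        "$vectorSearch": {
            "index": "product_embeddings",
            "path": "embedding",
            "queryVector": query_embedding,
            "numCandidates": 100,
            "limit": 5,
        }
    },
    {"$project": {"name": 1, "_id": 0}},
]

for doc in collection.aggregate(pipeline):
    print(doc)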


AI only works if the infrastructure is right

The successful implementation of artificial intelligence is therefore closely linked to the underlying infrastructure. But how you define that AI infrastructure is open to debate. An AI infrastructure always consists of different components, which is clearly reflected in the diverse backgrounds of the participating parties. As a customer, how can you best assess such an AI infrastructure? ... For companies looking to get started with AI infrastructure, a phased approach is crucial. Start small with a pilot, clearly define what you want to achieve, and expand step by step. The infrastructure must grow with the ambitions, not the other way around. A practical approach must be based on the objectives. Then the software, middleware, and hardware will be available. For virtually every use case, you can choose from the necessary and desired components. ... At the same time, the AI landscape requires a high degree of flexibility. Technological developments are rapid, models change, and business requirements can shift from quarter to quarter. It is therefore essential to establish an infrastructure that is not only scalable but also adaptable to new insights or shifting objectives. Consider the possibility of dynamically scaling computing capacity up or down, compressing models where necessary, and deploying tooling that adapts to the requirements of the use case. 


Software abstraction: The missing link in commercially viable quantum computing

Quantum Infrastructure Software delivers this essential abstraction, turning bare-metal QPUs into useful devices, much the way data center providers integrate virtualization software for their conventional systems. Current offerings cover all of the functions typically associated with the classical BIOS up through virtual machine Hypervisors, extending to developer tools at the application level. Software-driven abstraction of quantum complexity away from the end users lets anyone, irrespective of their quantum expertise, leverage quantum computing for the problems that matter most to them. ... With a finely tuned quantum computer accessible, a user must still execute many tasks to extract useful answers from the QPU, in analogy with the need for careful memory management required to gain practical acceleration with GPUs. Most importantly, in executing a real workload, they must convert high-level “assembly-language” logical definitions of quantum applications into hardware-specific “machine-language” instructions that account for the details of the QPU in use, and deploy countermeasures where errors might leak in. These are typically tasks that can only be handled by (expensive!) specialists in quantum-device operation.


Guest Post: Why AI Regulation Won’t Work for Quantum

Artificial intelligence regulation has been in the regulatory spotlight for the past seven to ten years and there is no shortage of governments and global institutions, as well as corporations and think tanks, putting forth regulatory frameworks in response to this widely buzzy tech. AI makes decisions in a “black box,” creating a need for “explainability” in order to fully understand how determinations by these systems affect the public. With the democratization of AI systems, there is the potential for bad actors to create harm in a decentralized ecosystem. ... Because quantum systems do not learn on their own, evolve over time, or make decisions based on training data, they do not pose the same kind of existential or social threats that AI does. Whereas the implications of quantum breakthroughs will no doubt be profound, especially in cryptography, defense, drug development, and material science, the core risks are tied to who controls the technology and for what purpose. Regulating who controls technology and ensuring bad actors are disincentivized from using technology in harmful ways is the stuff of traditional regulation across many sectors, so regulating quantum should prove somewhat less challenging than current AI regulatory debates would suggest.


Validation is an Increasingly Critical Element of Cloud Security

Security engineers simply don’t have the time or resources to familiarize themselves with the vast number of cloud services available today. In the past, security engineers primarily needed to understand Windows and Linux internals, Active Directory (AD) domain basics, networks and some databases and storage solutions. Today, they need to be familiar with hundreds of cloud services, from virtual machines (VMs) to serverless functions and containers at different levels of abstraction. ... It’s also important to note that cloud environments are particularly susceptible to misconfigurations. Security teams often primarily focus on assessing the performance of their preventative security controls, searching for weaknesses in their ability to detect attack activity. But this overlooks the danger posed by misconfigurations, which are not caused by bad code, software bugs, or malicious activity. That means they don’t fall within the definition of “vulnerabilities” that organizations typically test for—but they still pose a significant danger.  ... Securing the cloud isn’t just about having the right solutions in place — it’s about determining whether they are functioning correctly. But it’s also about making sure attackers don’t have other, less obvious ways into your network.


Build and Deploy Scalable Technical Architecture a Bit Easier

A critical challenge when transforming proof-of-concept systems into production-ready architecture is balancing rapid development with future scalability. At one organization, I inherited a monolithic Python application that was initially built as a lead distribution system. The prototype performed adequately in controlled environments but struggled when processing real-world address data, which, by their nature, contain inconsistencies and edge cases. ... Database performance often becomes the primary bottleneck in scaling systems. Domain-Driven Design (DDD) has proven particularly valuable for creating loosely coupled microservices, with its strategic phase ensuring that the design architecture properly encapsulates business capabilities, and the tactical phase allowing the creation of domain models using effective design patterns. ... For systems with data retention policies, table partitioning proved particularly effective, turning one table into several while maintaining the appearance of a single table to the application. This allowed us to implement retention simply by dropping entire partition tables rather than performing targeted deletions, which prevented database bloat. These optimizations reduced average query times from seconds to milliseconds, enabling support for much higher user loads on the same infrastructure.
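
A sketch of that partition-based retention pattern, assuming PostgreSQL declarative range partitioning accessed through psycopg2; the DSN, table, and date ranges are placeholders. Expiring a month then becomes a single DROP of the partition rather than a bulk DELETE that bloats the database.

# Partition-based retention sketch (PostgreSQL + psycopg2; names are placeholders).
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS leads (
    id         bigserial,
    payload    jsonb,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

CREATE TABLE IF NOT EXISTS leads_2025_07 PARTITION OF leads
    FOR VALUES FROM ('2025-07-01') TO ('2025-08-01');
"""

RETENTION = "DROP TABLE IF EXISTS leads_2025_01;"  # expire January in one cheap operation

with psycopg2.connect("dbname=crm user=app") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
        cur.execute(RETENTION)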


What AI Policy Can Learn From Cyber: Design for Threats, Not in Spite of Them

The narrative that constraints kill innovation is both lazy and false. In cybersecurity, we’ve seen the opposite. Federal mandates like the Federal Information Security Modernization Act (FISMA), which forced agencies to map their systems, rate data risks, and monitor security continuously, and state-level laws like California’s data breach notification statute created the pressure and incentives that moved security from afterthought to design priority.  ... The irony is that the people who build AI, like their cybersecurity peers, are more than capable of innovating within meaningful boundaries. We’ve both worked alongside engineers and product leaders in government and industry who rise to meet constraints as creative challenges. They want clear rules, not endless ambiguity. They want the chance to build secure, equitable, high-performing systems — not just fast ones. The real risk isn’t that smart policy will stifle the next breakthrough. The real risk is that our failure to govern in real time will lock in systems that are flawed by design and unfit for purpose. Cybersecurity found its footing by designing for uncertainty and codifying best practices into adaptable standards. AI can do the same if we stop pretending that the absence of rules is a virtue.

Daily Tech Digest - July 03, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti


The Goldilocks Theory – preparing for Q-Day ‘just right’

When it comes to quantum readiness, businesses currently have two options: Quantum key distribution (QKD) and post quantum cryptography (PQC). Of these, PQC reigns supreme. Here’s why. On the one hand, you have QKD which leverages principles of quantum physics, such as superposition, to securely distribute encryption keys. Although great in theory, it needs extensive new infrastructure, including bespoke networks and highly specialised hardware. More importantly, it also lacks authentication capabilities, severely limiting its practical utility. PQC, on the other hand, comprises classical cryptographic algorithms specifically designed to withstand quantum attacks. It can be integrated into existing digital infrastructures with minimal disruption. ... Imagine installing new quantum-safe algorithms prematurely, only to discover later they’re vulnerable, incompatible with emerging standards, or impractical at scale. This could have the opposite effect and could inadvertently increase attack surface and bring severe operational headaches, ironically becoming less secure. But delaying migration for too long also poses serious risks. Malicious actors could be already harvesting encrypted data, planning to decrypt it when quantum technology matures – so businesses protecting sensitive data such as financial records, personal details, intellectual property cannot afford indefinite delays.


Sovereign by Design: Data Control in a Borderless World

The regulatory framework for digital sovereignty is a national priority. The EU has set the pace with GDPR and GAIA-X. It prioritizes data residency and local infrastructure. China's cybersecurity law and personal information protection law enforce strict data localization. India's DPDP Act mandates local storage for sensitive data, aligning with its digital self-reliance vision through platforms such as Aadhaar. Russia's federal law No. 242-FZ requires citizen data to stay within the country for the sake of national security. Australia's privacy act focuses on data privacy, especially for health records, and Canada's PIPEDA encourages local storage for government data. Saudi Arabia's personal data protection law enforces localization for sensitive sectors, and Indonesia's personal data protection law covers all citizen-centric data. Singapore's PDPA balances privacy with global data flows, and Brazil's LGPD, mirroring the EU's GDPR, mandates the protection of privacy and fundamental rights of its citizens. ... Tech companies have little option but to comply with the growing demands of digital sovereignty. For example, Amazon Web Services has a digital sovereignty pledge, committing to "a comprehensive set of sovereignty controls and features in the cloud" without compromising performance.


Agentic AI Governance and Data Quality Management in Modern Solutions

Agentic AI governance is a framework that ensures artificial intelligence systems operate within defined ethical, legal, and technical boundaries. This governance is crucial for maintaining trust, compliance, and operational efficiency, especially in industries such as Banking, Financial Services, Insurance, and Capital Markets. In tandem with robust data quality management, Agentic AI governance can substantially enhance the reliability and effectiveness of AI-driven solutions. ... In industries such as Banking, Financial Services, Insurance, and Capital Markets, the importance of Agentic AI governance cannot be overstated. These sectors deal with vast amounts of sensitive data and require high levels of accuracy, security, and compliance. Here’s why Agentic AI governance is essential: Enhanced Trust: Proper governance fosters trust among stakeholders by ensuring AI systems are transparent, fair, and reliable. Regulatory Compliance: Adherence to legal and regulatory requirements helps avoid penalties and safeguard against legal risks. Operational Efficiency: By mitigating risks and ensuring accuracy, AI governance enhances overall operational efficiency and decision-making. Protection of Sensitive Data: Robust governance frameworks protect sensitive financial data from breaches and misuse, ensuring privacy and security. 


Fundamentals of Dimensional Data Modeling

Keeping the dimensions separate from facts makes it easier for analysts to slice-and-dice and filter data to align with the relevant context underlying a business problem. Data modelers organize these facts and descriptive dimensions into separate tables within the data warehouse, aligning them with the different subject areas and business processes. ... Dimensional modeling provides a basis for meaningful analytics gathered from a data warehouse for many reasons. Its processes standardize dimensions by presenting the data blueprint intuitively. Additionally, dimensional data modeling proves to be flexible as business needs evolve: the data warehouse accommodates change through the concept of slowly changing dimensions (SCD) as new business contexts emerge. ... Alignment in the design requires these processes, and data governance plays an integral role in getting there. Once the organization is on the same page about the dimensional model’s design, it chooses the best kind of implementation. Implementation choices include the star or snowflake schema around a fact. When organizations have multiple facts and dimensions, they use a cube. A dimensional model defines how a data warehouse architecture, or one of its components, should be built with good design and implementation.
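
A minimal star-schema sketch of that fact/dimension separation, using an in-memory SQLite database; table and column names are illustrative.

# Star schema in miniature: two dimension tables and one fact table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, calendar_date TEXT, month TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
-- The fact table holds measures plus foreign keys into the dimensions.
CREATE TABLE fact_sales (
    date_key     INTEGER REFERENCES dim_date(date_key),
    product_key  INTEGER REFERENCES dim_product(product_key),
    units_sold   INTEGER,
    revenue      REAL
);
INSERT INTO dim_date    VALUES (20250701, '2025-07-01', '2025-07');
INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware');
INSERT INTO fact_sales  VALUES (20250701, 1, 3, 29.97);
""")

# Slice-and-dice: filter by a dimension attribute, aggregate the fact measure.
for row in conn.execute("""
    SELECT p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.category
"""):
    print(row)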


IDE Extensions Pose Hidden Risks to Software Supply Chain

The latest research, published this week by application security vendor OX Security, reveals the hidden dangers of verified IDE extensions. While IDEs provide an array of development tools and features, there are a variety of third-party extensions that offer additional capabilities and are available in both official marketplaces and external websites. ... But OX researchers realized they could add functionality to verified extensions after the fact and still maintain the checkmark icon. After analyzing traffic for Visual Studio Code, the researchers found a server request to the marketplace that determines whether the extension is verified; they discovered they could modify the values featured in the server request and maintain the verification status even after creating malicious versions of the approved extensions. ... Using this attack technique, a threat actor could inject malicious code into verified and seemingly safe extensions that would maintain their verified status. "This can result in arbitrary code execution on developers' workstations without their knowledge, as the extension appears trusted," Siman-Tov Bustan and Zadok wrote. "Therefore, relying solely on the verified symbol of extensions is inadvisable." ... "It only takes one developer to download one of these extensions," he says. "And we're not talking about lateral movement. ..."


Business Case for Agentic AI SOC Analysts

A key driver behind the business case for agentic AI in the SOC is the acute shortage of skilled security analysts. The global cybersecurity workforce gap is now estimated at 4 million professionals, but the real bottleneck for most organizations is the scarcity of experienced analysts with the expertise to triage, investigate, and respond to modern threats. One ISC2 survey report from 2024 shows that 60% of organizations worldwide reported staff shortages significantly impacting their ability to secure the organizations, with another report from the World Economic Forum showing that just 15% of organizations believe they have the right people with the right skills to properly respond to a cybersecurity incident. Existing teams are stretched thin, often forced to prioritize which alerts to investigate and which to leave unaddressed. As previously mentioned, the flood of false positives in most SOCs means that even the most experienced analysts are too distracted by noise, increasing exposure to business-impacting incidents. Given these realities, simply adding more headcount is neither feasible nor sustainable. Instead, organizations must focus on maximizing the impact of their existing skilled staff. The AI SOC Analyst addresses this by automating routine Tier 1 tasks, filtering out noise, and surfacing the alerts that truly require human judgment. 


Microservice Madness: Debunking Myths and Exposing Pitfalls

Microservices will reduce dependencies, because they force you to serialize your types into generic graph objects (read: JSON, XML, or something similar). This implies that you can just transform your classes into a generic graph object at the interface edges and accomplish the exact same thing. ... There are valid arguments for using message brokers, and there are valid arguments for decoupling dependencies. There are even valid points of scaling out horizontally by segregating functionality onto different servers. But if your argument in favor of using microservices is "because it eliminates dependencies," you're either crazy, corrupt through to the bone, or you have absolutely no idea what you're talking about (make your pick!) Because you can easily achieve the same amount of decoupling using Active Events and Slots, combined with a generic graph object, in-process, and it will execute 2 billion times faster in production than your "microservice solution" ... "Microservice Architecture" and "Service Oriented Architecture" (SOA) have probably caused more harm to our industry than the financial crisis in 2008 caused to our economy. And the funny thing is, the damage is ongoing because of people repeating mindless superstitious belief systems as if they were the truth.
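
The in-process alternative the author alludes to ("Active Events and Slots" in his own framework) can be approximated with a plain registry of handlers exchanging a generic, JSON-like payload. The sketch below is an approximation of that pattern in Python, not the framework's API; slot names and payload shapes are invented for the example.

# In-process decoupling via named "slots" exchanging a generic payload.
import json

SLOTS = {}  # slot name -> handler

def slot(name):
    def register(handler):
        SLOTS[name] = handler
        return handler
    return register

def signal(name, payload: dict) -> dict:
    """Invoke a slot by name with a generic graph object (a plain dict)."""
    # Copying through JSON mimics the serialization boundary, in-process.
    return SLOTS[name](json.loads(json.dumps(payload)))

@slot("orders.create")
def create_order(args: dict) -> dict:
    return {"order_id": 42, "item": args["item"]}

# The caller knows only the slot's name and payload shape, not the callee's types.
print(signal("orders.create", {"item": "book"}))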


Sustainability and social responsibility

Direct-to-chip liquid cooling delivers impressive efficiency but doesn’t manage the entire thermal load. That’s why hybrid systems that combine liquid and traditional air cooling are increasingly popular. These systems offer the ability to fine-tune energy use, reduce reliance on mechanical cooling, and optimize server performance. HiRef offers advanced cooling distribution units (CDUs) that integrate liquid-cooled servers with heat exchangers and support infrastructure like dry coolers and dedicated high-temperature chillers. This integration ensures seamless heat management regardless of local climate or load fluctuations. ... With liquid cooling systems capable of operating at higher temperatures, facilities can increasingly rely on external conditions for passive cooling. This shift not only reduces electricity usage, but also allows for significant operational cost savings over time. But this sustainable future also depends on regulatory compliance, particularly in light of the recently updated F-Gas Regulation, which took effect in March 2024. The EU regulation aims to reduce emissions of fluorinated greenhouse gases to net-zero by 2050 by phasing out harmful high-GWP refrigerants like HFCs. “The F-Gas regulation isn’t directly tailored to the data center sector,” explains Poletto.


Infrastructure Operators Leaving Control Systems Exposed

Threat intelligence firm Censys has scanned the internet twice a month for the last six months, looking for a representative sample composed of four widely used types of ICS devices publicly exposed to the internet. Overall exposure slightly increased from January through June, the firm said Monday. One of the device types Censys scanned for is programmable logic controllers made by Israel-based Unitronics. The firm's Vision-series devices get used in numerous industries, including the water and wastewater sector. Researchers also counted publicly exposed devices built by Israel-based Orpak - a subsidiary of Gilbarco Veeder-Root - that run SiteOmat fuel station automation software. It also looked for devices made by Red Lion that are widely deployed for factory and process automation, as well as in oil and gas environments. It additionally probed for instances of a facilities automation software framework known as Niagara, made by Tridium. ... Report author Emily Austin, principal security researcher at Censys, said some fluctuation over time isn't unusual, given how "services on the internet are often ephemeral by nature." The greatest number of publicly exposed systems were in the United States, except for Unitronics, which are also widely used in Australia.


Healthcare CISOs must secure more than what’s regulated

Security must be embedded early and consistently throughout the development lifecycle, and that requires cross-functional alignment and leadership support. Without an understanding of how regulations translate into practical, actionable security controls, CISOs can struggle to achieve traction within fast-paced development environments. ... Security objectives should be mapped to these respective cycles—addressing tactical issues like vulnerability remediation during sprints, while using PI planning cycles to address larger technical and security debt. It’s also critical to position security as an enabler of business continuity and trust, rather than a blocker. Embedding security into existing workflows rather than bolting it on later builds goodwill and ensures more sustainable adoption. ... The key is intentional consolidation. We prioritize tools that serve multiple use cases and are extensible across both DevOps and security functions. For example, choosing solutions that can support infrastructure-as-code security scanning, cloud posture management, and application vulnerability detection within the same ecosystem. Standardizing tools across development and operations not only reduces overhead but also makes it easier to train teams, integrate workflows, and gain unified visibility into risk.

Daily Tech Digest - July 02, 2025


Quote for the day:

"Success is not the absence of failure; it's the persistence through failure." -- Aisha Tyle


How cybersecurity leaders can defend against the spur of AI-driven NHI

Many companies don’t have lifecycle management for all their machine identities, and security teams may be reluctant to shut down old accounts because doing so might break critical business processes. ... Access-management systems that provide one-time-use credentials to be used exactly when they are needed are cumbersome to set up. And some systems come with default logins like “admin” that are never changed. ... AI agents are the next step in the evolution of generative AI. Unlike chatbots, which only work with company data when provided by a user or an augmented prompt, agents are typically more autonomous, and can go out and find needed information on their own. This means that they need access to enterprise systems, at a level that would allow them to carry out all their assigned tasks. “The thing I’m worried about first is misconfiguration,” says Yageo’s Taylor. If an AI agent’s permissions are set incorrectly, “it opens up the door to a lot of bad things to happen.” Because of their ability to plan, reason, act, and learn, AI agents can exhibit unpredictable and emergent behaviors. An AI agent that’s been instructed to accomplish a particular goal might find a way to do it in an unanticipated way, and with unanticipated consequences. This risk is magnified even further with agentic AI systems that use multiple AI agents working together to complete bigger tasks, or even automate entire business processes.


The silent backbone of 5G & beyond: How network APIs are powering the future of connectivity

Network APIs are fueling a transformation by making telecom networks programmable and monetisable platforms that accelerate innovation, improve customer experiences, and open new revenue streams.  ... Contextual intelligence is what makes these new-generation APIs so attractive. Your needs change significantly depending on whether you’re playing a cloud game, streaming a match, or participating in a remote meeting. Programmable networks can now detect these needs and adjust dynamically. Take the example of a user streaming a football match. With network APIs, a telecom operator can offer temporary bandwidth boosts just for the game’s duration. Once it ends, the network automatically reverts to the user’s standard plan—no friction, no intervention. ... Beyond consumer applications, programmable networks are expected to have their greatest impact in Industry 4.0. ... 5G combined with IoT and network APIs enables industrial systems to become truly connected and intelligent. Remote monitoring of manufacturing equipment allows for real-time maintenance schedule adjustments based on machine behavior. Over a programmable, secure network, an API-triggered alert can coordinate a remote diagnostic session and even start remedial actions if a fault is found.
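
As a rough illustration of the bandwidth-boost scenario, the sketch below requests a temporary quality-of-service session for a device and lets it expire on its own. The endpoint, payload fields, token handling, and profile name are illustrative assumptions, not any specific operator's API contract.

```python
import requests

# Hypothetical network-API sketch: request a temporary bandwidth boost for the
# duration of a live stream, then let the session expire automatically.
# The base URL, resource path, and payload fields are illustrative placeholders.
API_BASE = "https://api.example-telco.com/qod/v1"   # placeholder operator endpoint
TOKEN = "REPLACE_WITH_OAUTH_TOKEN"                   # obtained out of band

def boost_bandwidth(device_ip: str, duration_s: int, profile: str = "STREAMING_HD") -> str:
    """Ask the network for a temporary QoS session; returns the session id."""
    resp = requests.post(
        f"{API_BASE}/sessions",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "device": {"ipv4Address": device_ip},
            "qosProfile": profile,        # a high-throughput streaming profile (assumed name)
            "duration": duration_s,       # seconds; the network reverts automatically afterwards
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["sessionId"]       # assumed response field

if __name__ == "__main__":
    session_id = boost_bandwidth("203.0.113.42", duration_s=2 * 60 * 60)
    print(f"Temporary boost active, session {session_id}")
```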


Quantum Computers Just Reached the Holy Grail – No Assumptions, No Limits

A breakthrough led by Daniel Lidar, a professor of engineering at USC and an expert in quantum error correction, has pushed quantum computing past a key milestone. Working with researchers from USC and Johns Hopkins, Lidar’s team demonstrated a powerful exponential speedup using two of IBM’s 127-qubit Eagle quantum processors — all operated remotely through the cloud. Their results were published in the prestigious journal Physical Review X. “There have previously been demonstrations of more modest types of speedups, like a polynomial speedup,” says Lidar, who is also the cofounder of Quantum Elements, Inc. “But an exponential speedup is the most dramatic type of speedup that we expect to see from quantum computers.” ... What makes a speedup “unconditional,” Lidar explains, is that it doesn’t rely on any unproven assumptions. Prior speedup claims required the assumption that there is no better classical algorithm against which to benchmark the quantum algorithm. Here, the team led by Lidar used an algorithm they modified for the quantum computer to solve a variation of “Simon’s problem,” an early example of quantum algorithms that can, in theory, solve a task exponentially faster than any classical counterpart, unconditionally.
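
For intuition about why the gap is exponential, the purely classical sketch below (not the team's modified quantum algorithm) sets up Simon's problem as a black-box oracle and recovers the hidden mask s by hunting for a collision, which on average takes on the order of 2^(n/2) queries; a quantum computer can solve the same problem with roughly n oracle queries.

```python
import random

def make_simon_oracle(n: int, s: int):
    """Build a random function f on n-bit strings with the Simon promise:
    f(x) == f(y) exactly when y == x XOR s."""
    outputs = {}
    labels = list(range(2 ** n))
    random.shuffle(labels)
    for x in range(2 ** n):
        rep = min(x, x ^ s)              # canonical member of the pair {x, x XOR s}
        if rep not in outputs:
            outputs[rep] = labels.pop()
    return lambda x: outputs[min(x, x ^ s)]

def classical_find_s(f, n: int):
    """Classical attack: query inputs until two of them collide, then XOR them.
    The expected number of queries grows roughly like 2**(n/2) (birthday bound)."""
    seen = {}
    for queries, x in enumerate(random.sample(range(2 ** n), 2 ** n), start=1):
        y = f(x)
        if y in seen:
            return seen[y] ^ x, queries
        seen[y] = x
    return 0, 2 ** n                     # s == 0: f is a permutation, no collision exists

if __name__ == "__main__":
    n, s = 10, 0b1011001110
    f = make_simon_oracle(n, s)
    found, queries = classical_find_s(f, n)
    print(f"hidden s = {s:0{n}b}, recovered = {found:0{n}b} after {queries} queries")
```

Doubling n roughly squares the classical query count, while the quantum algorithm's cost grows only linearly, which is the sense in which the speedup is exponential.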


4 things that make an AI strategy work in the short and long term

Most AI gains came from embedding tools like Microsoft Copilot, GitHub Copilot, and OpenAI APIs into existing workflows. Aviad Almagor, VP of technology innovation at tech company Trimble, also notes that more than 90% of Trimble engineers use GitHub Copilot. The ROI, he says, is evident in shorter development cycles and reduced friction in HR and customer service. Moreover, Trimble has introduced AI into its transportation management system, where AI agents optimize freight procurement by dynamically matching shippers and carriers. ... While analysts often lament the difficulty of showing short-term ROI for AI projects, these four organizations disagree — at least in part. Their secret: flexible thinking and diverse metrics. They view ROI not only as dollars saved or earned, but also as time saved, satisfaction increased, and strategic flexibility gained. London says that Upwave listens for customer signals like positive feedback, contract renewals, and increased engagement with AI-generated content. Given the low cost of implementing prebuilt AI models, even modest wins yield high returns. For example, if a customer cites an AI-generated feature as a reason to renew or expand their contract, that’s taken as a strong ROI indicator. Trimble uses lifecycle metrics in engineering and operations. For instance, one customer used Trimble AI tools to reduce the time it took to perform a tunnel safety analysis from 30 minutes to just three.


How IT Leaders Can Rise to a CIO or Other C-level Position

For any IT professional who aspires to become a CIO, the key is to start thinking like a business leader, not just a technologist, says Antony Marceles, a technology consultant and founder of software staffing firm Pumex. "This means taking every opportunity to understand the why behind the technology, how it impacts revenue, operations, and customer experience," he explained in an email. The most successful tech leaders aren't necessarily great technical experts, but they possess the ability to translate tech speak into business strategy, Marceles says, adding that "Volunteering for cross-functional projects and asking to sit in on executive discussions can give you that perspective." ... CIOs rarely have solo success stories; they're built up by the teams around them, Marceles says. "Colleagues can support a future CIO by giving honest feedback, nominating them for opportunities, and looping them into strategic conversations." Networking also plays a pivotal role in career advancement, not just for exposure, but for learning how other organizations approach IT leadership, he adds. Don't underestimate the power of having an executive sponsor, someone who can speak to your capabilities when you’re not there to speak for yourself, Eidem says. "The combination of delivering value and having someone champion that value -- that's what creates real upward momentum."


SLMs vs. LLMs: Efficiency and adaptability take centre stage

SLMs are becoming central to Agentic AI systems due to their inherent efficiency and adaptability. Agentic AI systems typically involve multiple autonomous agents that collaborate on complex, multi-step tasks and interact with environments. Fine-tuning methods like Reinforcement Learning (RL) effectively imbue SLMs with task-specific knowledge and external tool-use capabilities, which are crucial for agentic operations. This enables SLMs to be efficiently deployed for real-time interactions and adaptive workflow automation, overcoming the prohibitive costs and latency often associated with larger models in agentic contexts. ... Operating entirely on-premises ensures that decisions are made instantly at the data source, eliminating network delays and safeguarding sensitive information. This enables timely interpretation of equipment alerts, detection of inventory issues, and real-time workflow adjustments, supporting faster and more secure enterprise operations. SLMs also enable real-time reasoning and decision-making through advanced fine-tuning, especially Reinforcement Learning. RL allows SLMs to learn from verifiable rewards, teaching them to reason through complex problems, choose optimal paths, and effectively use external tools. 


Quantum’s quandary: racing toward reality or stuck in hyperbole?

One important reason is for researchers to demonstrate their advances and show that they are adding value. Quantum computing research requires significant expenditure, and the return on investment will be substantial if a quantum computer can solve problems previously deemed unsolvable. However, this return is not assured, nor is the timeframe for when a useful quantum computer might be achievable. To continue to receive funding and backing for what ultimately is a gamble, researchers need to show progress — to their bosses, investors, and stakeholders. ... As soon as such announcements are made, scientists and researchers scrutinize them for weaknesses and hyperbole. The benchmarks used for these tests are subject to immense debate, with many critics arguing that the computations are not practical problems or that success in one problem does not imply broader applicability. In Microsoft’s case, a lack of peer-reviewed data means there is uncertainty about whether the Majorana particle even exists beyond theory. The scientific method encourages debate and repetition, with the aim of reaching a consensus on what is true. However, in quantum computing, marketing hype and the need to demonstrate advancement take priority over the verification of claims, making it difficult to place these announcements in the context of the bigger picture.


Ethical AI for Product Owners and Product Managers

As the product and customer information steward, the PO/PM must lead the process of protecting sensitive data. The Product Backlog often contains confidential customer feedback, competitive analysis, and strategic plans that cannot be exposed. This guardrail requires establishing clear protocols for what data can be shared with AI tools. A practical first step is to lead the team in a data classification exercise, categorizing information as Public, Internal, or Restricted. Any data classified for internal use, such as direct customer quotes, must be anonymized before being used in an AI prompt. ... AI is proficient at generating text but possesses no real-world experience, empathy, or strategic insight. This guardrail involves proactively defining the unique, high-value work that AI can assist but never replace. Product leaders should clearly delineate between AI-optimal tasks (creating first drafts of technical user stories, summarizing feedback themes, or checking for consistency across Product Backlog items) and PO/PM-essential areas. These human-centric responsibilities include building genuine empathy through stakeholder interviews, making difficult strategic prioritization trade-offs, negotiating scope, resolving conflicting stakeholder needs, and communicating the product vision. By modeling this partnership and using AI as an assistant to prepare for strategic work, the PO/PM reinforces that their core value lies in strategy, relationships, and empathy.
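
A minimal sketch of the anonymization step described above, assuming a simple regex-based scrub before a quote is pasted into an AI prompt; the patterns and replacement tags are illustrative, and a production workflow would use a vetted PII-detection service aligned with the team's data classification policy.

```python
import re

# Illustrative scrub for Internal/Restricted data before it reaches an AI prompt.
# The patterns are deliberately simple and would miss many real-world identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "NAME":  re.compile(r"\b(Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+\b"),
}

def anonymize(text: str) -> str:
    """Replace likely identifiers with neutral tags before sharing with an AI tool."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

quote = "Dr. Alvarez (alvarez@example.com, +1 415 555 0100) said onboarding is too slow."
print(anonymize(quote))
# -> "[NAME] ([EMAIL], [PHONE]) said onboarding is too slow."
```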


Sharded vs. Distributed: The Math Behind Resilience and High Availability

In probability theory, independent events are events whose outcomes do not affect each other. For example, when throwing four dice, the number displayed on each die is independent of the other three. Similarly, the availability of each server in a six-node application-sharded cluster is independent of the others. This means that each server has an individual probability of being available or unavailable, and the failure of one server is not affected by the failure or otherwise of other servers in the cluster. In reality, there may be shared resources or shared infrastructure that links the availability of one server to another. In mathematical terms, this means that the events are dependent. However, we consider the probability of these types of failures to be low, and therefore we do not take them into account in this analysis.  ... Traditional architectures are limited by single-node failure risk. Application-level sharding compounds this problem because if any node goes down, its shard, and therefore the total system, becomes unavailable. In contrast, distributed databases with quorum-based consensus (like YugabyteDB) provide fault tolerance and scalability, enabling higher resilience and improved availability.
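
The independence assumption makes the comparison easy to quantify. The sketch below, a simplified model assuming independent node failures and a per-node availability p, contrasts a six-node application-sharded cluster (up only if every node is up) with a three-replica quorum group (up while a majority of replicas are up); the exact replication and quorum mechanics of a specific database such as YugabyteDB are not modeled.

```python
from math import comb

def sharded_availability(p: float, nodes: int) -> float:
    """Application-sharded cluster: the system is up only if every node is up."""
    return p ** nodes

def quorum_group_availability(p: float, replicas: int = 3) -> float:
    """A quorum group stays available while a majority of its replicas are up."""
    majority = replicas // 2 + 1
    return sum(comb(replicas, k) * p**k * (1 - p)**(replicas - k)
               for k in range(majority, replicas + 1))

if __name__ == "__main__":
    p = 0.99  # assumed availability of an individual node
    print(f"6-node sharded cluster:        {sharded_availability(p, 6):.6f}")
    print(f"3-replica quorum group (RF=3): {quorum_group_availability(p, 3):.6f}")
```

With p = 0.99, the sharded cluster's availability drops to roughly 0.941, below any single node, while the quorum group's rises to about 0.9997, which is the essence of the resilience argument.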


How FinTechs are turning GRC into a strategic enabler

The misconception that risk management and innovation exist in tension is one that modern FinTechs must move beyond. At its core, cybersecurity – when thoughtfully integrated – serves not as a brake but as an enabler of innovation. The key is to design governance structures that are both intelligent and adaptive (and resilient in their own right). The foundation lies in aligning cybersecurity risk management with the broader business objective: enablement. This means integrating security thinking early in the innovation cycle, using standardized interfaces, expectations, and frameworks that don’t obstruct, but rather channel innovation safely. For instance, when risk statements are defined consistently across teams, decisions can be made faster and with greater confidence. Critically, it starts with the threat model. A well-defined, enterprise-level threat model is the compass that guides risk assessments and controls where they matter most. Yet many companies still operate without a clear articulation of their own threat landscape, leaving their enterprise risk strategies untethered from reality. Without this grounding, risk management becomes either overly cautious or blindly permissive, or a bit of both. We place a strong emphasis on bridging the traditional silos between GRC, IT Security, Red Teaming, and Operational teams.

Daily Tech Digest - June 21, 2025


Quote for the day:

“People are not lazy. They simply have important goals – that is, goals that do not inspire them.” -- Tony Robbins


AI in Disaster Recovery: Mapping Technical Capabilities to Real Business Value

Despite its promise, AI introduces new challenges, including security risks and trust deficits. Threat actors leverage the same AI advancements, targeting systems with more precision and, in some cases, undermining AI-driven defenses. In the Zerto–IDC survey mentioned earlier, for instance, only 41% of respondents felt that AI is “very” or “somewhat” trustworthy; 59% felt that it is “not very” or “not at all” trustworthy. To mitigate these risks, organizations must adopt AI responsibly. For example, combining AI-driven monitoring with robust encryption and frequent model validation ensures that AI systems deliver consistent and secure performance. Furthermore, organizations should emphasize transparency in AI operations to maintain trust among stakeholders. Successful AI deployment in DR/CR requires cross-functional alignment between ITOps and management. Misaligned priorities can delay response times during crises, exacerbating data loss and downtime. Additionally, the ongoing IT skills shortage is still very much underway, with a different recent IDC study predicting that 9 out of 10 organizations will feel an impact by 2026, at a cost of $5.5 trillion in potential delays, quality issues, and revenue loss across the economy. Integrating AI-driven automation can partially mitigate these impacts by optimizing resource allocation and reducing dependency on manual intervention.


The Quantum Supply Chain Risk: How Quantum Computing Will Disrupt Global Commerce

Whether it's APIs, middleware, firmware-embedded devices, or operational technology, they're all built on the same outdated encryption and systems of trust. One of the biggest threats from quantum computing will be to all this unseen machinery that powers global digital trade. These systems handle the backend of everything from routing cargo to scheduling deliveries and clearing large shipments, but they were never designed to withstand the threat of quantum. Attackers will be able to break in quietly — injecting malicious code into control software or ERP systems, or impersonating suppliers to communicate malicious information and hijack digital workflows. Quantum computing won't necessarily affect the industries on its own, but it will corrupt the systems that power the global economy. ... Some of the most dangerous attacks are being staged today, with many nation-states and bad actors storing encrypted data, from procurement orders to shipping records. When quantum computers are finally able to break those encryption schemes, attackers will be able to decrypt that data in what's been coined a Harvest Now, Decrypt Later (HNDL) attack. These attacks, although retroactive in nature, represent one of the biggest threats to the integrity of cross-border commerce. Global trade depends on digital provenance: handling goods and proving where they came from.


Securing OT Systems: The Limits of the Air Gap Approach

Aside from susceptibility to advanced techniques, tactics, and procedures (TTPs) such as thermal manipulation and magnetic fields, more common vulnerabilities associated with air-gapped environments include unpatched systems going unnoticed, lack of visibility into network traffic, potentially malicious devices coming onto the network undetected, and removable media being physically connected within the network. Once an attack is inside OT systems, the consequences can be disastrous regardless of whether there is an air gap or not. However, it is worth considering how the existence of the air gap can affect the time-to-triage and remediation in the case of an incident. ... This incident reveals that even if a sensitive OT system has complete digital isolation, a robust air gap still cannot fully eliminate one of the greatest vulnerabilities of any system—human error. The risk of human error would still hold even if an organization went to the extreme of building a Faraday cage to eliminate electromagnetic radiation. Air-gapped systems are still vulnerable to social engineering, which exploits human vulnerabilities, as seen in the tactics that Dragonfly and Energetic Bear used to trick suppliers, who then walked the infection right through the front door. Ideally, a technology would be able to identify an attack regardless of whether it is caused by a compromised supplier, radio signal, or electromagnetic emission.


How to Lock Down the No-Code Supply Chain Attack Surface

A core feature of no-code development, third-party connectors allow applications to interact with cloud services, databases, and enterprise software. While these integrations boost efficiency, they also create new entry points for adversaries. ... Another emerging threat involves dependency confusion attacks, where adversaries exploit naming collisions between internal and public software packages. By publishing malicious packages to public repositories with the same names as internally used components, attackers could trick the platform into downloading and executing unauthorized code during automated workflow executions. This technique allows adversaries to silently insert malicious payloads into enterprise automation pipelines, often bypassing traditional security reviews. ... One of the most challenging elements of securing no-code environments is visibility. Security teams struggle with asset discovery and dependency tracking, particularly in environments where business users can create applications independently without IT oversight. Applications and automations built outside of IT governance may use unapproved connectors and expose sensitive data, since they often integrate with critical business workflows. 
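
One low-effort way to surface this risk is to check whether internally used package names also exist on the public index, since those are the names an attacker can squat. The sketch below does this for Python packages via PyPI's public JSON endpoint; the internal package names are hypothetical placeholders.

```python
import requests

# Auditing sketch for dependency-confusion exposure: flag internal package names
# that also exist on the public index, where a same-named (and typically
# higher-versioned) package could be substituted into automated builds.
INTERNAL_PACKAGES = ["acme-billing-core", "acme-workflow-utils"]  # hypothetical names

def exists_on_public_index(name: str) -> bool:
    """PyPI returns 200 for published projects and 404 for unknown names."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

for pkg in INTERNAL_PACKAGES:
    if exists_on_public_index(pkg):
        print(f"WARNING: '{pkg}' also exists publicly - possible confusion target")
    else:
        print(f"OK: '{pkg}' not found on the public index")
```

Common mitigations include registering placeholder packages under internal names, restricting installs to a single trusted index, and pinning dependency versions and hashes.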


Securing Your AI Model Supply Chain

Supply-chain Levels for Software Artifacts (SLSA) is a comprehensive framework designed to protect the integrity of software artifacts, including AI models. SLSA provides a set of standards and practices to secure the software supply chain from source to deployment. By implementing SLSA, organizations can ensure that their AI models are built and maintained with the highest levels of security, reducing the risk of tampering and ensuring the authenticity of their outputs. ... Sigstore is an open-source project that aims to improve the security and integrity of software supply chains by providing a transparent and secure way to sign and verify software artifacts. Using cryptographic signatures, Sigstore ensures that AI models and other software components are authentic and have not been tampered with. This system allows developers and organizations to trace the provenance of their AI models, ensuring that they originate from trusted sources. ... The most valuable takeaway for ensuring model authenticity is the implementation of robust verification mechanisms. By utilizing frameworks like SLSA and tools like Sigstore, organizations can create a transparent and secure supply chain that guarantees the integrity of their AI models. This approach helps build trust with stakeholders and ensures that the models deployed in production are reliable and free from malicious alterations.
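
As a simplified illustration of the verification idea, assuming a hand-rolled JSON manifest that records each artifact's SHA-256 digest, the sketch below recomputes a model file's digest before deployment and refuses a mismatch; in practice the digest and its signature would come from Sigstore/cosign tooling or an SLSA-compliant build pipeline rather than this ad hoc format.

```python
import hashlib
import json
from pathlib import Path

# Simplified integrity check: compare a model artifact's SHA-256 digest against
# a manifest produced at build time. The manifest layout below is an assumption
# for illustration, not a Sigstore or SLSA format.
def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(model_path: str, manifest_path: str) -> bool:
    manifest = json.loads(Path(manifest_path).read_text())
    expected = manifest["artifacts"][Path(model_path).name]["sha256"]
    return sha256_of(Path(model_path)) == expected

if __name__ == "__main__":
    ok = verify_model("model.onnx", "release-manifest.json")
    print("model digest matches manifest" if ok else "MISMATCH - do not deploy")
```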


Data center retrofit strategies for AI workloads

AI accelerators are highly sensitive to power quality. Sub-cycle power fluctuations can cause bit errors, data corruption, or system instability. Older uninterruptible power supply (UPS) systems may struggle to handle the dynamic loads AI can produce, which often involve sub-cycle swings of 3 MW or more. Updating the electrical distribution system (EDS) is an opportunity that includes replacing dated UPS technology, which often cannot handle the dynamic AI load profile, redesigning power distribution for redundancy, and ensuring that power supply configurations meet the demands of high-density computing. ... With the high cost of AI downtime, risk mitigation becomes paramount. Energy and power management systems (EPMS) are capable of high-resolution waveform capture, which allows operators to trace and address electrical anomalies quickly. These systems are essential for identifying the root cause of power quality issues and coordinating fast response mechanisms. ... No two mission-critical facilities are the same regarding space, power, and cooling. Add the variables of each AI deployment, and what works for one facility may not be the best fit for another. That said, there are some universal truths about retrofitting for AI. You will need engineers who are well-versed in various equipment configurations, including cooling and electrical systems connected to the network.


Is it time for a 'cloud reset'? New study claims public and private cloud balance is now a major consideration for companies across the world

Enterprises often still have some kind of a cloud-first policy, he outlined, but they have realized they need some form of private cloud too, typically because public cloud does not meet the needs of some workloads, mainly around cost, complexity and compliance. However, because public cloud has taken priority, infrastructure has not grown in the right way - so increasingly, Broadcom's conversations are with customers realizing they need to focus on both public and private cloud, and some on-prem, Baguley says, as they conclude, "we need to make sure we do it right, we're doing it in a cost-effective way, and we do it in a way that's actually going to be strategically sensible for us going forward." "In essence - they've realised they need to build something on-prem that can not only compete with public cloud, but actually be better in various categories, including cost, compliance and complexity." ... In order to help with these concerns, Broadcom has released VMware Cloud Foundation (VCF) 9.0, the latest edition of its platform to help customers get the most out of private cloud. Described by Baguley as "the culmination of 25 years' work at VMware", VCF 9.0 offers users a single platform with one SKU - giving them improved visibility while supporting all applications with a consistent experience across the private cloud environment.


Cloud in the age of AI: Six things to consider

This is an issue impacting many multinational organizations, driving the growth of regional and even industry-specific clouds. These offer tailored compliance, security, and performance options. As organizations try to architect infrastructure that supports their future states, with a blend of cloud and on-prem, data sovereignty is an increasingly large issue. I hear a lot from IT leaders about how they must consider local and regional regulations, which adds a consideration to the simple concept of migration to the cloud. ... Sustainability was always the hidden cost of connected computing. Hosting data in the cloud consumes a lot of energy. Financial cost is most top of mind when IT leaders talk about driving efficiency through the cloud right now. It's also at the root of a lot of talk about moving to the edge and using AI-infused end-user devices. But expect sustainability to become an increasingly important factor in cloud: geopolitical instability, the cost of energy, and the increasing demands of AI will see to that. ... The AI PC pitch from hardware vendors is that organizations will be able to build small 'clouds' of end-user devices. Specific functions and roles will work on AI PCs and do their computing at the edge. The argument is compelling: better security and efficient modular scalability. Not every user or function needs all capabilities and access to all data.


Creating a Communications Framework for Platform Engineering

When platform teams focus exclusively on technical excellence while neglecting a communication strategy, they create an invisible barrier between the platform’s capability and its business impact. Users can’t adopt what they don’t understand, and leadership won’t invest in what they can’t measure. ... To overcome engineers’ skepticism of new tools that may introduce complexity, your communication should clearly articulate how the platform simplifies their work. Highlight its ability to reduce cognitive load, minimize context switching, enhance access to documentation and accelerate development cycles. Present these advantages as concrete improvements to daily workflows, rather than abstract concepts. ... Tap into the influence of respected technical colleagues who have contributed to the platform’s development or were early adopters. Their endorsements are more impactful than any official messaging. Facilitate opportunities for these champions to demonstrate the platform’s capabilities through lightning talks, recorded demos or pair programming sessions. These peer-to-peer interactions allow potential users to observe practical applications firsthand and ask candid questions in a low-pressure environment.


Why data sovereignty is an enabler of Europe’s digital future

Data sovereignty has broad-reaching implications, with potential impact on many areas of a business extending beyond the IT department. One of the most obvious examples is the legal and finance departments, where GDPR and similar legislation require granular control over how data is stored and handled. The harsh reality is that any gaps in compliance could result in legal action, substantial fines and subsequent damage to longer-term reputation. Alongside this, providing clarity on data governance increasingly factors into trust and competitive advantage, with customers and partners keen to eliminate grey areas around data sovereignty. ... One way that many companies are seeking to gain more control and visibility of their data is by repatriating specific data sets from public cloud environments to on-premise storage or private clouds. This is not about reversing cloud adoption; instead, repatriation is a sound way of achieving compliance with local legislation and ensuring there is no scope for questions over exactly where data resides. In some instances, repatriating data can improve performance and reduce cloud costs, and it can also provide assurance that data is protected from foreign government access. Additionally, on-premise or private cloud setups can offer the highest levels of security from third-party risks for the most sensitive or proprietary data.