
Daily Tech Digest - August 11, 2025


Quote for the day:

"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek


Attackers Target the Foundations of Crypto: Smart Contracts

Central to the attack is a malicious smart contract, written in the Solidity programming language, with obfuscated functionality that transfers stolen funds to a hidden externally owned account (EOA), says Alex Delamotte, the senior threat researcher with SentinelOne who wrote the analysis. ... The decentralized finance (DeFi) ecosystem relies on smart contracts — as well as other technologies such as blockchains, oracles, and key management — to execute transactions, manage data on a blockchain, and allow for agreements between different parties and intermediaries. Yet their linchpin status also makes smart contracts a focus of attacks and a key component of fraud. "A single vulnerability in a smart contract can result in the irreversible loss of funds or assets," Shashank says. "In the DeFi space, even minor mistakes can have catastrophic financial consequences. However, the danger doesn’t stop at monetary losses — reputational damage can be equally, if not more, damaging." ... Companies should take stock of all smart contracts by maintaining a detailed and up-to-date record of all deployed smart contracts, verifying every contract, and conducting periodic audits. Real-time monitoring of smart contracts and transactions can detect anomalies and provide fast response to any potential attack, says CredShields' Shashank.
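
To make the "inventory and verify" advice concrete, here is a minimal Python sketch of a contract registry check; the `ContractRecord` fields, hashes, and addresses are illustrative placeholders rather than details from the SentinelOne analysis, and fetching the deployed bytecode from a node is stubbed out.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class ContractRecord:
    """One entry in an inventory of deployed smart contracts."""
    name: str
    address: str
    expected_bytecode_hash: str  # hash recorded at audit time
    last_audit: str              # date of the last review

def verify_contract(record: ContractRecord, deployed_bytecode: bytes) -> bool:
    """Compare on-chain bytecode against the hash captured at audit time.

    A mismatch means the code at this address is not what was reviewed and
    should trigger an incident-response workflow.
    """
    current_hash = hashlib.sha256(deployed_bytecode).hexdigest()
    return current_hash == record.expected_bytecode_hash

# Periodic sweep over the inventory (bytecode retrieval is a placeholder).
inventory = [
    ContractRecord("treasury-vault", "0xABC...", "9f2c...", "2025-06-30"),
]
for record in inventory:
    bytecode = b"..."  # fetched from a node in a real deployment
    if not verify_contract(record, bytecode):
        print(f"ALERT: bytecode drift for {record.name} at {record.address}")
```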


Is AI the end of IT as we know it?

CIOs have always been challenged by the time, skills, and complexities involved in running IT operations. Cloud computing, low-code development platforms, and many DevOps practices helped IT teams move “up stack,” away from the ones and zeros, to higher-level tasks. Now the question is whether AI will free CIOs and IT to focus more on where AI can deliver business value, instead of developing and supporting the underlying technologies. ... Joe Puglisi, growth strategist and fractional CIO at 10xnewco, offered this pragmatic advice: “I think back to the days when you wrote in assembly and it took a lot of time. We introduced compilers, higher-level languages, and now we have AI that can write code. This is a natural progression of capabilities and not the end of programming.” The paradigm shift suggests CIOs will have to revisit their software development lifecycles for significant shifts in skills, practices, and tools. “AI won’t replace agile or DevOps — it’ll supercharge them with standups becoming data-driven, CI/CD pipelines self-optimizing, and QA leaning on AI for test creation and coverage,” says Dominik Angerer, CEO of Storyblok. “Developers shift from coding to curating, business users will describe ideas in natural language, and AI will build functional prototypes instantly. This democratization of development brings more voices into the software process while pushing IT to focus on oversight, scalability, and compliance.”


From Indicators to Insights: Automating Risk Amplification to Strengthen Security Posture

Security analysts don’t want more alerts. They want more relevant ones. Traditional SIEMs generate events using their own internal language that involves things like MITRE tags, rule names, and severity scores. But what frontline responders really want to know is which users, systems, or cloud resources are most at risk right now. That’s why contextual risk modeling matters. Instead of alerting on abstract events, modern detection should aggregate risk around assets such as users, endpoints, APIs, or services. This shifts the SOC conversation from “What alert fired?” to “Which assets should I care about today?” ... The burden of alert fatigue isn’t just operational but also emotional. Analysts spend hours chasing shadows, pivoting across tools, chasing one-off indicators that lead nowhere. When everything is an anomaly, nothing is actionable. Risk amplification offers a way to reduce that unseen yet heavy weight on security analysts, and the emotional toll it can take, by aligning high-risk signals to high-value assets and surfacing insights only when multiple forms of evidence converge. Rather than relying on a single failed login or endpoint alert, analysts can correlate chains of activity, whether they be login anomalies, suspicious API queries, lateral movement, or outbound data flows – all of which together paint a much stronger picture of risk.
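
As a rough illustration of asset-centric risk amplification, the sketch below aggregates raw detections around the asset they touch and surfaces an asset only when several distinct signal types converge; the signal names, weights, and thresholds are assumptions made for the example, not values from the article.

```python
from collections import defaultdict

# Illustrative weights per signal type; real values would come from tuning.
SIGNAL_WEIGHTS = {
    "login_anomaly": 20,
    "suspicious_api_query": 25,
    "lateral_movement": 35,
    "outbound_data_flow": 30,
}

def amplify_risk(events, threshold=50, min_distinct_signals=2):
    """Aggregate raw detections around the asset they touch.

    An asset is surfaced only when multiple forms of evidence converge,
    not because a single alert fired.
    """
    scores = defaultdict(int)
    signal_types = defaultdict(set)
    for event in events:  # each event: {"asset": ..., "signal": ...}
        asset, signal = event["asset"], event["signal"]
        scores[asset] += SIGNAL_WEIGHTS.get(signal, 5)
        signal_types[asset].add(signal)
    return [
        {"asset": a, "score": s, "signals": sorted(signal_types[a])}
        for a, s in scores.items()
        if s >= threshold and len(signal_types[a]) >= min_distinct_signals
    ]

events = [
    {"asset": "payroll-db", "signal": "login_anomaly"},
    {"asset": "payroll-db", "signal": "outbound_data_flow"},
    {"asset": "dev-laptop-42", "signal": "login_anomaly"},
]
print(amplify_risk(events))  # only payroll-db crosses the bar
```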


The Immune System of Software: Can Biology Illuminate Testing?

In software engineering, quality assurance is often framed as identifying bugs, validating outputs, and confirming expected behaviour. But similar to immunology, software testing is much more than verification. It is the process of defining the boundaries of the system, training it to resist failure, and learning from its past weaknesses. Like the immune system, software testing should be multi-layered, adaptive, and capable of evolving over time. ... Just as innate immunity is present from biological birth, unit tests should be present from the birth of our code. Just as innate immunity doesn't need a full diagnostic history to act, unit tests don’t require a full system context. They work in isolation, making them highly efficient. But they also have limits: they can't catch integration issues or logic bugs that emerge from component interactions. That role belongs to more evolved layers. ... Negative testing isn’t about proving what a system can do — it’s about ensuring the system doesn’t do what it must never do. It verifies how the software behaves when exposed to invalid input, unauthorized access, or unexpected data structures. It asks: Does the system fail gracefully? Does it reject the bad while still functioning with the good? Just as an autoimmune disease results from a misrecognition of the self, software bugs often arise when we misrecognise what our code should do and what it should not do.
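
A small pytest sketch of the negative-testing idea described above: the positive case confirms what the system should do, while the negative cases confirm it refuses what it must never accept. The `register_user` function and its validation rules are invented purely for illustration.

```python
import pytest

def register_user(username: str, age: int) -> dict:
    """Toy registration routine used to illustrate negative testing."""
    if not username or not username.isalnum():
        raise ValueError("username must be non-empty and alphanumeric")
    if age < 0 or age > 150:
        raise ValueError("age out of range")
    return {"username": username, "age": age}

# Positive test: the "good" path still works.
def test_valid_registration():
    assert register_user("alice", 30)["username"] == "alice"

# Negative tests: the system must fail gracefully on invalid input.
@pytest.mark.parametrize("username, age", [
    ("", 30),               # empty input
    ("alice; DROP--", 30),  # injection-style payload
    ("bob", -1),            # invalid boundary value
])
def test_rejects_invalid_input(username, age):
    with pytest.raises(ValueError):
        register_user(username, age)
```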


CSO hiring on the rise: How to land a top security exec role

“Boards want leaders who can manage risk and reputation, which has made soft skills — such as media handling, crisis communication, and board or financial fluency — nearly as critical as technical depth,” Breckenridge explains. ... “Organizations are seeking cybersecurity leaders who combine technical depth, AI fluency, and strong interpersonal skills,” Fuller says. “AI literacy is now a baseline expectation, as CISOs must understand how to defend against AI-driven threats and manage governance frameworks.” ... Offers of top pay and authority to CSO candidates obviously come with high expectations. Organizations are looking for CSOs with a strong blend of technical expertise, business acumen, and interpersonal strength, Fuller says. Key skills include cloud security, identity and access management (IAM), AI governance, and incident response planning. Beyond technical skills, “power skills” such as communication, creativity, and problem-solving are increasingly valued, Fuller explains. “The ability to translate complex risks into business language and influence board-level decisions is a major differentiator. Traits such as resilience, adaptability, and ethical leadership are essential — not only for managing crises but also for building trust and fostering a culture of security across the enterprise,” he says.


From legacy to SaaS: Why complexity is the enemy of enterprise security

By modernizing, i.e., moving applications to a more SaaS-like consumption model, the network perimeter and associated on-prem complexity tend to dissipate, which is actually a good thing, as it makes ZTNA easier to implement. As the main entry point into an organization’s IT system becomes the web application URL (and browser), this reduces attackers’ opportunities and forces them to focus on the identity layer, subverting authentication, phishing, etc. Of course, a higher degree of trust has to be placed (and tolerated) in SaaS providers, but at least we now have clear guidance on what to look for when transitioning to SaaS and cloud: identity protection, MFA, and phishing-resistant authentication mechanisms become critical—and these are often enforced by default or at least much easier to implement compared to traditional systems. ... The unwillingness to simplify the technology stack by moving to SaaS is then combined with a reluctant and forced move to the cloud for some applications, usually dictated by business priorities or even ransomware attacks (as in the BL case above). This is a toxic mix that increases complexity and reduces the ability of a resource-constrained organization to keep security risks at bay.


Why Metadata Is the New Interface Between IT and AI

A looming risk in enterprise AI today is using the wrong data or proprietary data in AI data pipelines. This may include feeding internal drafts to a public chatbot, training models on outdated or duplicate data, or using sensitive files containing employee, customer, financial or IP data. The implications range from wasted resources to data breaches and reputational damage. A comprehensive metadata management strategy for unstructured data can mitigate these risks by acting as a gatekeeper for AI workflows. For example, if a company wants to train a model to answer customer questions in a chatbot, metadata can be used to exclude internal files, non-final versions, or documents marked as confidential. Only the vetted, tagged, and appropriate content is passed through for embedding and inference. This is a more intelligent, nuanced approach than simply dumping all available files into an AI pipeline. With rich metadata in place, organizations can filter, sort, and segment data based on business requirements, project scope, or risk level. Metadata augments vector labeling for AI inferencing. A metadata management system helps users discover which files to feed the AI tool, such as health benefits documents for an HR chatbot, while vector labeling gives deeper information about what’s in each document.
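
A minimal sketch of metadata acting as a gatekeeper before embedding, in the spirit of the HR chatbot example above; the tag names (`hr-benefits`, `non-final`, `confidential`) and file entries are hypothetical, standing in for whatever a metadata management system would supply.

```python
def select_for_embedding(files, required_tag,
                         blocked_tags=("confidential", "internal-draft", "non-final")):
    """Pass only vetted, tagged content through to embedding and inference.

    `files` is a list of dicts with a `path` and a `tags` set produced by a
    metadata management system.
    """
    selected = []
    for f in files:
        tags = set(f["tags"])
        if required_tag in tags and not tags & set(blocked_tags):
            selected.append(f["path"])
    return selected

corpus = [
    {"path": "benefits_2025_final.pdf", "tags": {"hr-benefits", "final"}},
    {"path": "benefits_2025_draft_v3.docx", "tags": {"hr-benefits", "non-final"}},
    {"path": "salary_bands.xlsx", "tags": {"hr-benefits", "confidential"}},
]
print(select_for_embedding(corpus, required_tag="hr-benefits"))
# only the vetted, final document reaches the chatbot's pipeline
```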


Ask a Data Ethicist: What Should You Know About De-Identifying Data?

Simply put, data de-identification is removing or obscuring details from a dataset in order to preserve privacy. We can think about de-identification as existing on a continuum... Pseudonymization is the application of different techniques to obscure the information while allowing it to be accessed when another piece of information (a key) is applied. In the above example, the identity number might unlock the full details – Joe Blogs of 123 Meadow Drive, Moab UT. Pseudonymization retains the utility of the data while affording a certain level of privacy. It should be noted that while the terms anonymize or anonymization are widely used – including in regulations – some feel it is not really possible to fully anonymize data, as there is always a non-zero chance of reidentification. Yet, taking reasonable steps on the de-identification continuum is an important part of compliance with requirements that call for the protection of personal data. There are many different articles and resources that discuss a wide variety of de-identification techniques and the merits of various approaches, ranging from simple masking to more sophisticated types of encryption. The objective is to strike a balance: the technique must be complex enough to ensure sufficient protection while not being burdensome to implement and maintain.
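
The following Python sketch illustrates keyed pseudonymization on the continuum described above: identifiers are replaced with tokens, and only holders of the key and lookup table can re-link them to the original record. It is a toy example for illustration, not a compliance-grade implementation.

```python
import hmac, hashlib, secrets

# The key is the "other piece of information" that lets authorized users
# re-link pseudonyms to identities; withholding it pushes the data further
# toward the anonymized end of the continuum.
KEY = secrets.token_bytes(32)
_lookup = {}  # pseudonym -> original identifier, held separately under access control

def pseudonymize(identifier: str) -> str:
    token = hmac.new(KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]
    _lookup[token] = identifier
    return token

def reidentify(token: str) -> str:
    """Only callers with access to the lookup (and key) can reverse the mapping."""
    return _lookup[token]

record = {"name": "Joe Blogs", "address": "123 Meadow Drive, Moab UT"}
token = pseudonymize(record["name"] + "|" + record["address"])
published = {"customer": token, "plan": "premium"}   # shared for analytics
print(published, reidentify(token))                  # full details only via the key
```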


5 ways business leaders can transform workplace culture - and it starts by listening

Antony Hausdoerfer, group CIO at auto breakdown specialist The AA, said effective leaders recognize that other people will challenge established ways of working. Hearing these opinions comes with an open management approach. "You need to ensure that you're humble in listening, but then able to make decisions, commit, and act," he said. "Effective listening is about managing with humility with commitment, and that's something we've been very focused on recently." Hausdoerfer told ZDNET how that process works in his IT organization. "I don't know the answer to everything," he said. "In fact, I don't know the answer to many things, but my team does, and by listening to them, we'll probably get the best outcome. Then we commit to act." ... Bev White, CEO at technology and talent solutions provider Nash Squared, said open ears are a key attribute for successful executives. "There are times to speak and times to listen -- good leaders recognize which is which," she said. "The more you listen, the more you will understand how people are really thinking and feeling -- and with so many great people in any business, you're also sure to pick up new information, deepen your understanding of certain issues, and gain key insights you need."


Beyond Efficiency: AI's role in reshaping work and reimagining impact

The workplace of the future is not about humans versus machines; it's about humans working alongside machines. AI's real value lies in augmentation: enabling people to do more, do better, and do what truly matters. Take recruitment, for example. Traditionally time-intensive and often vulnerable to unconscious bias, hiring is being reimagined through AI. Today, organisations can deploy AI to analyse vast talent pools, match skills to roles with precision, and screen candidates based on objective data. This not only reduces time-to-hire but also supports inclusive hiring practices by mitigating biases in decision-making. In fact, across the employee lifecycle, it personalises experiences at scale. From career development tools that recommend roles and learning paths aligned with individual aspirations, to chatbots that provide real-time HR support, AI makes the employee journey more intuitive, proactive, and empowering. ... AI is not without its challenges. As with any transformative technology, its success hinges on responsible deployment. This includes robust governance, transparency, and a commitment to fairness and inclusion. Diversity must be built into the AI lifecycle, from the data it's trained on to the algorithms that guide its decisions. 

Daily Tech Digest - August 10, 2025


Quote for the day:

"Don't worry about being successful but work toward being significant and the success will naturally follow." -- Oprah Winfrey


The Scrum Master: A True Leader Who Serves

Many people online claim that “Agile is a mindset”, and that the mindset is more important than the framework. But let us be honest, the term “agile mindset” is very abstract. How do we know someone truly has it? We cannot open their brain to check. Mindset manifests in different behaviour depending on culture and context. In one place, “commitment” might mean fixed scope and fixed time. In another, it might mean working long hours. In yet another, it could mean delivering excellence within reasonable hours. Because of this complexity, simply saying “agile is a mindset” is not enough. What works better is modelling the behaviour. When people consistently observe the Scrum Master demonstrating agility, those behaviours can become habits. ... Some Scrum Masters and agile coaches believe their job is to coach exclusively, asking questions without ever offering answers. While coaching is valuable, relying on it alone can be harmful if it is not relevant or contextual. Relevance is key to improving team effectiveness. At times, the Scrum Master needs to get their hands dirty. If a team has struggled with manual regression testing for twenty Sprints, do not just tell them to adopt Test-Driven Development (TDD). Show them. ... To be a true leader, the Scrum Master must be humble and authentic. You cannot fake true leadership. It requires internal transformation, a shift in character. As the saying goes, “Character is who we are when no one is watching.”


Vendors Align IAM, IGA and PAM for Identity Convergence

The historic separation of IGA, PAM and IAM created inefficiencies and security blind spots, and attackers exploited inconsistencies in policy enforcement across layers, said Gil Rapaport, chief solutions officer at CyberArk. By combining governance, access and privilege in a single platform, the company could close the gaps between policy enforcement and detection, Rapaport said. "We noticed those siloed markets creating inefficiency in really protecting those identities, because you need to manage different type of policies for governance of those identities and for securing the identities and for the authentication of those identities, and so on," Rapaport told ISMG. "The cracks between those silos - this is exactly where the new attack factors started to develop." ... Enterprise customers that rely on different tools for IGA, PAM, IAM, cloud entitlements and data governance are increasingly frustrated because integrating those tools is time-consuming and error-prone, Mudra said. Converged platforms reduce integration overhead and allow vendors to build tools that communicate natively and share risk signals, he said. "If you have these tools in silos, yes, they can all do different things, but you have to integrate them after the fact versus a converged platform comes with out-of-the-box integration," Mudra said. "So, these different tools can share context and signals out of the box."


The Importance of Technology Due Diligence in Mergers and Acquisitions

The primary reason for conducting technology due diligence is to uncover any potential risks that could derail the deal or disrupt operations post-acquisition. This includes identifying outdated software, unresolved security vulnerabilities, and the potential for data breaches. By spotting these risks early, you can make informed decisions and create risk mitigation strategies to protect your company. ... A key part of technology due diligence is making sure that the target company’s technology assets align with your business’s strategic goals. Whether it’s cloud infrastructure, software solutions, or hardware, the technology should complement your existing operations and provide a foundation for long-term growth. Misalignment in technology can lead to inefficiencies and costly reworks. ... Rank the identified risks based on their potential impact on your business and the likelihood of their occurrence. This will help prioritize mitigation efforts, so that you’re addressing the most critical vulnerabilities first. Consider both short-term risks, like pending software patches, and long-term issues, such as outdated technology or a lack of scalability. ... Review existing vendor contracts and third-party service provider agreements, looking for any liabilities or compliance risks that may emerge post-acquisition—especially those related to data access, privacy regulations, or long-term commitments. It’s also important to assess the cybersecurity posture of vendors and their ability to support integration.


From terabytes to insights: Real-world AI observability architecture

The challenge is not only the data volume, but the data fragmentation. According to New Relic’s 2023 Observability Forecast Report, 50% of organizations report siloed telemetry data, with only 33% achieving a unified view across metrics, logs and traces. Logs tell one part of the story, metrics another, traces yet another. Without a consistent thread of context, engineers are forced into manual correlation, relying on intuition, tribal knowledge and tedious detective work during incidents. ... In the first layer, we develop the contextual telemetry data by embedding standardized metadata in the telemetry signals, such as distributed traces, logs and metrics. Then, in the second layer, enriched data is fed into the MCP server to index, add structure and provide client access to context-enriched data using APIs. Finally, the AI-driven analysis engine utilizes the structured and enriched telemetry data for anomaly detection, correlation and root-cause analysis to troubleshoot application issues. This layered design ensures that AI and engineering teams receive context-driven, actionable insights from telemetry data. ... The amalgamation of structured data pipelines and AI holds enormous promise for observability. We can transform vast telemetry data into actionable insights by leveraging structured protocols such as MCP and AI-driven analyses, resulting in proactive rather than reactive systems. 
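
A simplified sketch of the first layer, context enrichment, assuming each telemetry signal is a plain dictionary; field names such as `trace_id` and `deployment` are illustrative rather than a prescribed schema.

```python
import json, time, uuid

def enrich(signal: dict, service: str, trace_id: str, deployment: str) -> dict:
    """Layer 1: embed standardized context metadata into a raw telemetry signal
    so logs, metrics and traces can later be joined on the same keys."""
    signal.update({
        "trace_id": trace_id,
        "service": service,
        "deployment": deployment,
        "emitted_at": time.time(),
    })
    return signal

trace_id = str(uuid.uuid4())
log = enrich({"type": "log", "level": "ERROR", "msg": "checkout failed"},
             "checkout-api", trace_id, "prod-eu-1")
metric = enrich({"type": "metric", "name": "http_5xx", "value": 1},
                "checkout-api", trace_id, "prod-eu-1")

# The later layers (indexing and AI-driven analysis) can now correlate these
# signals automatically instead of relying on manual detective work.
print(json.dumps([log, metric], indent=2))
```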


MCP explained: The AI gamechanger

Instead of relying on scattered prompts, developers can now define and deliver context dynamically, making integrations faster, more accurate, and easier to maintain. By decoupling context from prompts and managing it like any other component, developers can, in effect, build their own personal, multi-layered prompt interface. This transforms AI from a black box into an integrated part of your tech stack. ... MCP is important because it extends this principle to AI by treating context as a modular, API-driven component that can be integrated wherever needed. Similar to microservices or headless frontends, this approach allows AI functionality to be composed and embedded flexibly across various layers of the tech stack without creating tight dependencies. The result is greater flexibility, enhanced reusability, faster iteration in distributed systems and true scalability. ... As with any exciting disruption, the opportunity offered by MCP comes with its own set of challenges. Chief among them is poorly defined context. One of the most common mistakes is hardcoding static values — instead, context should be dynamic and reflect real-time system states. Overloading the model with too much, too little or irrelevant data is another pitfall, often leading to degraded performance and unpredictable outputs. 
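
To show the difference between hardcoded static values and dynamically delivered context, here is a small sketch in which context is assembled from live system state at request time; the service callables and field names are stand-ins, not any particular MCP implementation.

```python
import datetime

def build_context(user_id: str, fetch_inventory, fetch_order_status) -> dict:
    """Assemble context dynamically from live system state at request time,
    instead of hardcoding static values into the prompt."""
    return {
        "generated_at": datetime.datetime.utcnow().isoformat(),
        "inventory": fetch_inventory(),              # real-time system state
        "open_orders": fetch_order_status(user_id),
        # Keep the payload small and relevant: overloading the model with
        # too much or irrelevant data degrades output quality.
    }

# The callables are stand-ins for whatever services own this data.
ctx = build_context(
    "user-123",
    fetch_inventory=lambda: {"sku-9": 4},
    fetch_order_status=lambda uid: [{"id": "o-77", "status": "shipped"}],
)
prompt = f"Answer using only this context: {ctx}\nQuestion: where is my order?"
```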


AI is fueling a power surge - it could also reinvent the grid

Data centers themselves are beginning to evolve as well. Some forward-looking facilities are now being designed with built-in flexibility to contribute back to the grid or operate independently during times of peak stress. These new models, combined with improved efficiency standards and smarter site selection strategies, have the potential to ease some of the pressure being placed on energy systems. Equally important is the role of cross-sector collaboration. As the line between tech and infrastructure continues to blur, it’s critical that policymakers, engineers, utilities, and technology providers work together to shape the standards and policies that will govern this transition. That means not only building new systems, but also rethinking regulatory frameworks and investment strategies to prioritize resiliency, equity, and sustainability. Just as important as technological progress is public understanding. Educating communities about how AI interacts with infrastructure can help build the support needed to scale promising innovations. Transparency around how energy is generated, distributed, and consumed—and how AI fits into that equation—will be crucial to building trust and encouraging participation. ... To be clear, AI is not a silver bullet. It won’t replace the need for new investment or hard policy choices. But it can make our systems smarter, more adaptive, and ultimately more sustainable.


AI vs Technical Debt: Is This A Race to the Bottom?

Critically, AI-generated code can carry security liabilities. One alarming study analyzed code suggested by GitHub Copilot across common security scenarios – the result: roughly 40% of Copilot’s suggestions had vulnerabilities. These included classic mistakes like buffer overflows and SQL injection holes. Why so high? The AI was trained on tons of public code – including insecure code – so it can regurgitate bad practices (like using outdated encryption or ignoring input sanitization) just as easily as good ones. If you blindly accept such output, you’re effectively inviting known bugs into your codebase. It doesn’t help that AI is notoriously bad at certain logical tasks (for example, it struggles with complex math or subtle state logic), so it might write code that looks legit but is wrong in edge cases. ... In many cases, devs aren’t reviewing AI-written code as rigorously as their own, and a common refrain when something breaks is, “It is not my code,” implying they feel less responsible since the AI wrote it. That attitude itself is dangerous: if nobody feels accountable for the AI’s code, it slips through code reviews or testing more easily, leading to more bad deployments. The open-source world is also grappling with an influx of AI-generated “contributions” that maintainers describe as low-quality or even spam. Imagine running an open-source project and suddenly getting dozens of auto-generated pull requests that technically add a feature or fix but are riddled with style issues or bugs.
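
The SQL injection pattern mentioned above is easy to reproduce. This sketch contrasts a concatenated query, the kind of snippet an assistant might suggest, with the reviewed parameterized version, using Python's built-in sqlite3 module; the table and data are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"

# Pattern often seen in generated snippets: string concatenation,
# which is exactly the SQL injection hole the study describes.
unsafe_query = f"SELECT email FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe_query).fetchall())   # returns every row

# Reviewed version: a parameterized query, so the input is treated as data.
safe_rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe_rows)                               # returns nothing
```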


The Future of Manufacturing: Digital Twin in Action

Process digital twins are often confused with traditional simulation tools, but there is an important distinction. Simulations are typically offline models used to test “what-if” scenarios, verify system behaviour, and optimise processes without impacting live operations. These models are predefined and rely on human input to set parameters and ask the right questions. A digital twin, on the other hand, comes to life when connected to real-time operational data. It reflects current system states, responds to live inputs, and evolves continuously as conditions change. This distinction between static simulation and dynamic digital twin is widely recognised across the industrial sector. While simulation still plays a valuable role in system design and planning, the true power of the digital twin lies in its ability to mirror, interpret, and influence operational performance in real time. ... When AI is added, the digital twin evolves into a learning system. AI algorithms can process vast datasets - far beyond what a human operator can manage - and detect early warning signs of failure. For example, if a transformer begins to exhibit subtle thermal or harmonic irregularities, an AI-enhanced digital twin doesn’t just flag it. It assesses the likelihood of failure, evaluates the potential downstream impact, and proposes mitigation strategies, such as rerouting power or triggering maintenance workflows.
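
A toy sketch of the "assess and propose" behaviour described above, using a simple statistical baseline in place of a trained model; the readings, thresholds, likelihood formula, and proposed actions are illustrative assumptions only.

```python
from statistics import mean, stdev

def assess_transformer(live_temps_c, baseline_temps_c, z_threshold=3.0):
    """Compare live readings against a learned baseline and, on an anomaly,
    propose mitigations instead of merely flagging the deviation."""
    mu, sigma = mean(baseline_temps_c), stdev(baseline_temps_c)
    latest = live_temps_c[-1]
    z = (latest - mu) / sigma if sigma else 0.0
    if z > z_threshold:
        return {
            "status": "anomaly",
            "failure_likelihood": min(0.99, 0.5 + 0.1 * (z - z_threshold)),
            "proposed_actions": ["reroute power to adjacent feeder",
                                 "open maintenance work order"],
        }
    return {"status": "normal", "z_score": round(z, 2)}

baseline = [62, 63, 61, 64, 62, 63, 61, 62]   # historical winding temperatures (C)
print(assess_transformer([63, 64, 79], baseline))
```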


Bridging the Gap: How Hybrid Cloud Is Redefining the Role of the Data Center

Today’s hybrid models involve more than merging public clouds with private data centers. They also involve specialized data center solutions like colocation, edge facilities and bare-metal-as-a-service (BMaaS) offerings. That’s the short version of how hybrid cloud and its relationship to data centers are evolving. ... Fast forward to the present, and the goals surrounding hybrid cloud strategies often look quite different. When businesses choose a hybrid cloud approach today, it’s typically not because of legacy workloads or sunk costs. It’s because they see hybrid architectures as the key to unlocking new opportunities ... The proliferation of edge data centers has also enabled simpler, better-performing and more cost-effective hybrid clouds. The more locations businesses have to choose from when deciding where to place private infrastructure and workloads, the more opportunity they have to optimize performance relative to cost. ... Today’s data centers are no longer just a place to host whatever you can’t run on-prem or in a public cloud. They have evolved into solutions that offer specialized services and capabilities that are critical for building high-performing, cost-effective hybrid clouds – but that aren’t available from public cloud providers, and that would be very costly and complicated for businesses to implement on their own.


AI Agents: Managing Risks In End-To-End Workflow Automation

As CIOs map out their AI strategies, it’s becoming clear that agents will change how they manage their organization’s IT environment and how they deliver services to the rest of the business. With the ability of agents to automate a broad swath of end-to-end business processes—learning and changing as they go—CIOs will have to oversee significant shifts in software development, IT operating models, staffing, and IT governance. ... Human-based checks and balances are vital for validating agent-based outputs and recommendations and, if needed, for manually changing course should unintended consequences—including hallucinations or other errors—arise. “Agents being wrong is not the same thing as humans being wrong,” says Elliott. “Agents can be really wrong in ways that would get a human fired if they made the same mistake. We need safeguards so that if an agent calls the wrong API, it’s obvious to the person overseeing that task that the response or outcome is unreasonable or doesn’t make sense.” These orchestration and observability layers will be increasingly important as agents are implemented across the business. “As different parts of the organization [automate] manual processes, you can quickly end up with a patchwork-quilt architecture that becomes almost impossible to upgrade or rethink,” says Elliott.

Daily Tech Digest - August 07, 2025


Quote for the day:

"Do the difficult things while they are easy and do the great things while they are small." -- Lao Tzu


Data neutrality: Safeguarding your AI’s competitive edge

“At the bottom there is a computational layer, such as the NVIDIA GPUs, anyone who provides the infrastructure for running AI. The next few layers are software-oriented, but also impacts infrastructure as well. Then there’s security and the data that feeds the models and those that feeds the applications. And on top of that, there’s the operational layer, which is how you enable data operations for AI. Data being so foundational means that whoever works with that layer is essentially holding the keys to the AI asset, so, it’s imperative that anything you do around data has to have a level of trust and data neutrality.” ... The risks in having common data infrastructure, particularly with those that are direct or indirect competitors, are significant. When proprietary training data is transplanted to another platform or service of a competitor, there is always an implicit, but frequently subtle, risk that proprietary insights, unique patterns of data or even the operational data of an enterprise will be accidentally shared. ... These trends in the market have precipitated the need for “sovereign AI platforms”– controlled spaces where companies have complete control over their data, models and the overall AI pipeline for development without outside interference.


The problem with AI agent-to-agent communication protocols

Some will say, “Competition breeds innovation.” That’s the party line. But for anyone who’s run a large IT organization, it means increased integration work, risk, cost, and vendor lock-in—all to achieve what should be the technical equivalent of exchanging a business card. Let’s not forget history. The 90s saw the rise and fall of CORBA and DCOM, each claiming to be the last word in distributed computing. The 2000s blessed us with WS-* (the asterisk is a wildcard because the number of specs was infinite), most of which are now forgotten. ... The truth: When vendors promote their own communication protocols, they build silos instead of bridges. Agents trained on one protocol can’t interact seamlessly with those speaking another dialect. Businesses end up either locking into one vendor’s standard, writing costly translation layers, or waiting for the market to move on from this round of wheel reinvention. ... We in IT love to make simple things complicated. The urge to create a universal, infinitely extensible, plug-and-play protocol is irresistible. But the real-world lesson is that 99% of enterprise agent interaction can be handled with a handful of message types: request, response, notify, error. The rest—trust negotiation, context passing, and the inevitable “unknown unknowns”—can be managed incrementally, so long as the basic messaging is interoperable.
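
As a sketch of that "handful of message types" argument, here is a deliberately small, vendor-neutral envelope; the field names are assumptions for illustration, not any published protocol.

```python
import json, time, uuid
from dataclasses import dataclass, field, asdict
from typing import Literal, Optional

@dataclass
class Message:
    """A minimal interoperable envelope: request, response, notify, error."""
    kind: Literal["request", "response", "notify", "error"]
    sender: str
    recipient: str
    body: dict
    correlates_with: Optional[str] = None     # ties a response/error to a request
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
    sent_at: float = field(default_factory=time.time)

req = Message("request", "billing-agent", "inventory-agent",
              {"action": "reserve", "sku": "sku-9", "qty": 2})
resp = Message("response", "inventory-agent", "billing-agent",
               {"reserved": True}, correlates_with=req.id)

# Anything richer (trust negotiation, context passing) can be layered on later,
# as long as every agent can parse this common denominator.
print(json.dumps(asdict(resp), indent=2))
```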


Agents or Bots? Making Sense of AI on the Open Web

The difference between automated crawling and user-driven fetching isn't just technical—it's about who gets to access information on the open web. When Google's search engine crawls to build its index, that's different from when it fetches a webpage because you asked for a preview. Google's "user-triggered fetchers" prioritize your experience over robots.txt restrictions because these requests happen on your behalf. The same applies to AI assistants. When Perplexity fetches a webpage, it's because you asked a specific question requiring current information. The content isn't stored for training—it's used immediately to answer your question. ... An AI assistant works just like a human assistant. When you ask an AI assistant a question that requires current information, they don’t already know the answer. They look it up for you in order to complete whatever task you’ve asked. On Perplexity and all other agentic AI platforms, this happens in real-time, in response to your request, and the information is used immediately to answer your question. It's not stored in massive databases for future use, and it's not used to train AI models. User-driven agents only act when users make specific requests, and they only fetch the content needed to fulfill those requests. This is the fundamental difference between a user agent and a bot.


The Increasing Importance of Privacy-By-Design

Today’s data landscape is evolving at breakneck speed. With the explosion of IoT devices, AI-powered systems, and big data analytics, the volume and variety of personal data collected have skyrocketed. This means more opportunities for breaches, misuse, and regulatory headaches. And let’s not forget that consumers are savvier than ever about privacy risks – they want to know how their data is handled, shared, and stored. ... Integrating Privacy-By-Design into your development process doesn’t require reinventing the wheel; it simply demands a mindset shift and a commitment to building privacy into every stage of the lifecycle. From ideation to deployment, developers and product teams need to ask: How are we collecting, storing, and using data? ... Privacy teams need to work closely with developers, legal advisors, and user experience designers to ensure that privacy features do not compromise usability or performance. This balance can be challenging to achieve, especially in fast-paced development environments where deadlines are tight and product launches are prioritized. Another common challenge is educating the entire team on what Privacy-By-Design actually means in practice. It’s not enough to have a single data protection champion in the company; the entire culture needs to shift toward valuing privacy as a key product feature.


Microsoft’s real AI challenge: Moving past the prototypes

Now, you can see that with Bing Chat, Microsoft was merely repeating an old pattern. The company invested in OpenAI early, then moved to quickly launch a consumer AI product with Bing Chat. It was the first AI search engine and the first big consumer AI experience aside from ChatGPT — which was positioned more as a research project and not a consumer tool at the time. Needless to say, things didn’t pan out. Despite using the tarnished Bing name and logo that would probably make any product seem less cool, Bing Chat and its “Sydney” persona had breakout viral success. But the company scrambled after Bing Chat behaved in unpredictable ways. Microsoft’s explanation doesn’t exactly make it better: “Microsoft did not expect people to have hours-long conversations with it that would veer into personal territory,” Yusuf Mehdi, a corporate vice president at the company, told NPR. In other words, Microsoft didn’t expect people would chat with its chatbot so much. Faced with that, Microsoft started instituting limits and generally making Bing Chat both less interesting and less useful. Under current CEO Satya Nadella, Microsoft is a different company than it was under Ballmer. The past doesn’t always predict the future. But it does look like Microsoft had an early, rough prototype — yet again — and then saw competitors surpass it.


Is confusion over tech emissions measurement stifling innovation?

If sustainability is becoming a bottleneck for innovation, then businesses need to take action. If a cloud provider cannot (or will not) disclose exact emissions per workload, that is a red flag. Procurement teams need to start asking tough questions, and when appropriate, walking away from vendors that will not answer them. Businesses also need to unite to push for the development of a global measurement standard for carbon accounting. Until regulators or consortia enforce uniform reporting standards, companies will keep struggling to compare different measurements and metrics. Finally, it is imperative that businesses rethink the way they see emissions reporting. Rather than it being a compliance burden, they need to grasp it as an opportunity. Get emissions tracking right, and companies can be upfront and authentic about their green credentials, which can reassure potential customers and ultimately generate new business opportunities. Measuring environmental impact can be messy right now, but the alternative of sticking with outdated systems because new ones feel "too risky" is far worse. The solution is more transparency, smarter tools, a collective push for accountability, and above all, working with the right partners that can deliver accurate emissions statistics.


Making sense of data sovereignty and how to regain it

Although the concept of sovereignty is subject to greater regulatory control, its practical implications are often misunderstood or oversimplified, resulting in it being frequently reduced to questions of data location or legal jurisdiction. In reality, however, sovereignty extends across technical, operational and strategic domains. In practice, these elements are difficult to separate. While policy discussions often centre on where data is stored and who can access it, true sovereignty goes further. For example, much of the current debate focuses on physical infrastructure and national data residency. While these are very important issues, they represent only one part of the overall picture. Sovereignty is not achieved simply by locating data in a particular jurisdiction or switching to a domestic provider, because without visibility into how systems are built, maintained and supported, location alone offers limited protection. ... Organisations that take it seriously tend to focus less on technical purity and more on practical control. That means understanding which systems are critical to ongoing operations, where decision-making authority sits and what options exist if a provider, platform or regulation changes. Clearly, there is no single approach that suits every organisation, but these core principles help set direction. 


Beyond PQC: Building adaptive security programs for the unknown

The lack of a timeline for a post-quantum world means that it doesn’t make sense to consider post-quantum as either a long-term or a short-term risk, but both. Practically, we can prepare for the threat of quantum technology today by deploying post-quantum cryptography to protect identities and sensitive data. This year is crucial for post-quantum preparedness, as organisations are starting to put quantum-safe infrastructure in place, and regulatory bodies are beginning to address the importance of post-quantum cryptography. ... CISOs should take steps now to understand their current cryptographic estate. Many organisations have developed a fragmented cryptographic estate without a unified approach to protecting and managing keys, certificates, and protocols. This lack of visibility opens increased exposure to cybersecurity threats. Understanding this landscape is a prerequisite for migrating safely to post-quantum cryptography. Another practical step you can take is to prepare your organisation for the impact of quantum computing on public key encryption. This has become more feasible with NIST’s release of quantum-resistant algorithms and the NCSC’s recently announced three-step plan for moving to quantum-safe encryption. Even if there is no pressing threat to your business, implementing a crypto-agile strategy will also ensure a smooth transition to quantum-resistant algorithms when they become mainstream.
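
One way to make crypto-agility tangible is to route all signing through a registry keyed by algorithm name, so swapping in a quantum-resistant scheme becomes a configuration change rather than a rewrite. The sketch below uses placeholder signer functions and algorithm names, not real cryptographic implementations.

```python
from typing import Callable, Dict

# Crypto-agility in miniature: the application asks for a capability by name.
# The algorithm names and bodies here are placeholders for illustration.
SIGNERS: Dict[str, Callable[[bytes], bytes]] = {}

def register_signer(name: str):
    def wrap(fn):
        SIGNERS[name] = fn
        return fn
    return wrap

@register_signer("rsa-2048")
def _sign_classical(data: bytes) -> bytes:
    return b"classical-signature-over-" + data  # stand-in for a real RSA signer

@register_signer("ml-dsa")
def _sign_pq(data: bytes) -> bytes:
    return b"pq-signature-over-" + data         # stand-in for a quantum-safe signer

def sign(data: bytes, algorithm: str = "rsa-2048") -> bytes:
    return SIGNERS[algorithm](data)

# Migration is then a matter of flipping the configured default.
print(sign(b"invoice-42", algorithm="ml-dsa"))
```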


Critical Zero-Day Bugs Crack Open CyberArk, HashiCorp Password Vaults

"Secret management is a good thing. You just have to account for when things go badly. I think many professionals think that by vaulting a credential, their job is done. In reality, this should be just the beginning of a broader effort to build a more resilient identity infrastructure." "You want to have high fault tolerance, and failover scenarios — break-the-glass scenarios for when compromise happens. There are Gartner guides on how to do that. There's a whole market for identity and access management (IAM) integrators which sells these types of preparing for doomsday solutions," he notes. It might ring unsatisfying — a bandage for a deeper-rooted problem. It's part of the reason why, in recent years, many security experts have been asking not just how to better protect secrets, but how to move past them to other models of authorization. "I know there are going to be static secrets for a while, but they're fading away," Tal says. "We should be managing [users], rather than secrets. We should be contextualizing behaviors, evaluating the kinds of identities and machines of users that are performing actions, and then making decisions based on their behavior, not just what secrets they hold. I think that secrets are not a bad thing for now, but eventually we're going to move to the next generation of identity infrastructure."


Strategies for Robust Engineering: Automated Testing for Scalable Software

The changes happening to software development through AI and machine learning require testing to transform as well. The purpose now exceeds basic software testing because we need to create testing systems that learn and grow as autonomous entities. Software quality should be viewed through a new perspective where testing functions as an intelligent system that adapts over time instead of remaining as a collection of unchanging assertions. The future of software development will transform when engineering leaders move past traditional automated testing frameworks to create predictive AI-based test suites. The establishment of scalable engineering presents an exciting new direction that I am eager to lead. Software development teams must adopt new automated testing approaches because the time to transform their current strategies has arrived. Our testing systems should evolve from basic code verification into active improvement mechanisms. As applications become increasingly complex and dynamic, especially in distributed, cloud-native environments, test automation must keep pace. Predictive models, trained on historical failure patterns, can anticipate high-risk areas in codebases before issues emerge. Test coverage should be driven by real-time code behavior, user analytics, and system telemetry rather than static rule sets.
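
A minimal sketch of failure-history-driven test prioritization, in the spirit of the predictive approach described above; the scoring weights and test metadata are invented for illustration and would in practice come from real telemetry and trained models.

```python
def prioritize_tests(history, budget=3):
    """Rank tests by an illustrative risk score built from historical signals:
    past failure rate, whether they cover recently changed code, and runtime."""
    def score(t):
        return (0.6 * t["failure_rate"]
                + 0.3 * (1.0 if t["covers_recently_changed_code"] else 0.0)
                + 0.1 * (1.0 / (1.0 + t["avg_runtime_s"])))
    ranked = sorted(history, key=score, reverse=True)
    return [t["name"] for t in ranked[:budget]]

history = [
    {"name": "test_checkout_flow", "failure_rate": 0.20,
     "covers_recently_changed_code": True, "avg_runtime_s": 40},
    {"name": "test_static_pages", "failure_rate": 0.01,
     "covers_recently_changed_code": False, "avg_runtime_s": 5},
    {"name": "test_payment_retry", "failure_rate": 0.35,
     "covers_recently_changed_code": True, "avg_runtime_s": 90},
    {"name": "test_login", "failure_rate": 0.05,
     "covers_recently_changed_code": True, "avg_runtime_s": 12},
]
print(prioritize_tests(history))  # run the riskiest tests first on every commit
```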

Daily Tech Digest - August 05, 2025


Quote for the day:

"Let today be the day you start something new and amazing." -- Unknown


Convergence of Technologies Reshaping the Enterprise Network

"We are now at the epicenter of the transformation of IT, where AI and networking are converging," said Antonio Neri, president and CEO of HPE. "In addition to positioning HPE to offer our customers a modern network architecture alternative and an even more differentiated and complete portfolio across hybrid cloud, AI and networking, this combination accelerates our profitable growth strategy as we deepen our customer relevance and expand our total addressable market into attractive adjacent areas." Naresh Singh, senior director analyst at Gartner, told Information Security Media Group that the merger of two networking heavyweights would make the networking landscape interesting in the near future. ... Security vendors have long tackled cyberthreats through robust portfolios, including next-generation firewalls, endpoint security, secure access service edge, intrusion detection system or intrusion prevention system, software-defined wide area network and network security management. But the rise of AI and large language models has introduced new risks that demand a deeper transformation across people, processes and technology. As organizations recognize the need for a secure foundation, many are accelerating their AI adoption initiatives.


Blind spots at the top: Why leaders fail

You’ve stopped learning. Not because there’s nothing left to learn, but because your ego can’t handle starting from scratch again. You default to what worked five years ago. Meanwhile, your environment has moved on, your competitors have pivoted, and your team can smell the stagnation. Ultimately, you are an architect of resilience and trust. As Alvin Toffler warned, “The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.” ... Believing you’re always right is a shortcut to irrelevance. When you stop listening, you stop leading. You confuse confidence with competence and dominance with clarity. You bulldoze feedback and mistake silence for agreement. That silence? It’s fear. ... Stress is part of the job. But if every challenge sends you into a spiral, your people will spend more time managing your mood than solving real problems. Fragile leaders don’t scale. Their teams shrink. Their influence dries up. Strong leadership isn’t about acting tough. It’s about staying grounded when things go sideways. ... You think you’re empowering, but you’re micromanaging. You think you’re a visionary, but your team sees a control freak. You think you’re a mentor, but you dominate every meeting. The gap between intent and impact? That’s where teams disengage. The worst part? No one will tell you unless you build a culture where they can.


9 habits of the highly ineffective vibe coder

It’s easy to think that one large language model is the same as any other. The interfaces are largely identical, after all. In goes some text and out comes a magic answer, right? LLMs even tend to give similar answers to easy questions. And their names don’t even tell us much, because most LLM creators choose something cute rather than descriptive. But models have different internal structures, which can affect how well they unpack and understand problems that involve complex logic, like writing code. ... Many developers don’t realize how much LLMs are affected by the size of their input. The model must churn through all the tokens in your prompt before it can generate something that might be useful to you. More input tokens require more resources. Habitually dumping big blocks of code on the LLM can start to add up. Do it too much and you’ll end up overwhelming the hardware and filling up the context window. Some developers even talk about just uploading their entire source folder “just in case.” ... AI assistants do best when they’re focusing our attention on some obscure corner of the software documentation. Or maybe they’re finding a tidbit of knowledge about some feature that isn’t where we expected it to be. They’re amazing at searching through a vast training set for just the right insight. They’re not always so good at synthesizing or offering deep insight, though.


How to Eliminate Deployment Bottlenecks Without Sacrificing Application Security

As organizations embrace DevOps to accelerate innovation, the traditional approach of treating security as a checkpoint begins to break down. The result? Security either slows releases or, even worse, gets bypassed altogether amidst the need to deliver as quickly as possible. ... DevOps has reshaped software delivery, with teams now expected to deploy applications at high velocity, using continuous integration and delivery (CI/CD), microservices architectures, and container orchestration platforms like Kubernetes. But as development practices evolved, many security tools have not kept pace. While traditional Web Application Firewalls (WAFs) remain effective for many use cases, their operational models can become challenging when applied to highly dynamic, modern development environments. In such scenarios, they often introduce delays, limit flexibility, and add operational burden instead of enabling agility. ... Modern architectures introduce constant change. New microservices, APIs, and environments are deployed daily. Traditional WAFs, built for stable applications, rely on domain-first onboarding models that treat each application as an isolated unit. Every new domain or service often requires manual configuration, creating friction and increasing the risk of unprotected assets.


Anthropic wants to stop AI models from turning evil - here's how

In a paper released Friday, the company explores how and why models exhibit undesirable behavior, and what can be done about it. A model's persona can change during training and once it's deployed, when user inputs start influencing it. This is evidenced by models that may have passed safety checks before deployment, but then develop alter egos or act erratically once they're publicly available ... Anthropic admitted in the paper that "shaping a model's character is more of an art than a science," but said persona vectors are another arm with which to monitor -- and potentially safeguard against -- harmful traits. In the paper, Anthropic explained that it can steer these vectors by instructing models to act in certain ways -- for example, if it injects an evil prompt into the model, the model will respond from an evil place, confirming a cause-and-effect relationship that makes the roots of a model's character easier to trace. "By measuring the strength of persona vector activations, we can detect when the model's personality is shifting towards the corresponding trait, either over the course of training or during a conversation," Anthropic explained. "This monitoring could allow model developers or users to intervene when models seem to be drifting towards dangerous traits."
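
Conceptually, monitoring a persona vector can be thought of as projecting a model's hidden activations onto a trait direction, as in this toy NumPy sketch; the vectors, dimensions, and scaling are random stand-ins for illustration, not values or methods taken from Anthropic's paper.

```python
import numpy as np

def trait_activation(hidden_state: np.ndarray, persona_vector: np.ndarray) -> float:
    """Project a hidden activation onto a persona direction.

    A higher projection suggests the model is drifting toward the
    corresponding trait, which is the monitoring signal described above.
    """
    unit = persona_vector / np.linalg.norm(persona_vector)
    return float(hidden_state @ unit)

rng = np.random.default_rng(0)
persona_vector = rng.normal(size=512)                    # hypothetical trait direction
baseline_state = rng.normal(size=512)
drifted_state = baseline_state + 0.8 * persona_vector    # simulated drift toward the trait

print(trait_activation(baseline_state, persona_vector))
print(trait_activation(drifted_state, persona_vector))   # noticeably larger
```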


From Aspiration to Action: The State of DevOps Automation Today

One of the report's clearest findings is the advantage of engaging QA teams earlier in the development cycle. Teams practicing shift-left testing — bringing QA into planning, design, and early build phases — report higher satisfaction rates and stronger results overall. In fact, 88% of teams with early QA involvement reported satisfaction with their quality processes, and those teams also experienced fewer escaped defects and more comprehensive test coverage. Rather than testing at the end of the development cycle, early QA involvement enables faster feedback loops, better test design, and tighter alignment with user requirements. It also improves collaboration between developers and testers, making it easier to catch potential issues before they escalate into expensive fixes. ... While more DevOps teams recognize the importance of integrating security into the software development lifecycle (SDLC), sizable gaps remain. ... Many organizations still treat security as a separate function, disconnected from their routine QA and DevOps processes. This separation slows down vulnerability detection and remediation. These findings show the need for teams to better integrate security practices earlier in the SDLC, leveraging AI-driven tools that facilitate proactive threat detection and management.


Why the AI era is forcing a redesign of the entire compute backbone

Traditional fault tolerance relies on redundancy among loosely connected systems to achieve high uptime. ML computing demands a different approach. First, the sheer scale of computation makes over-provisioning too costly. Second, model training is a tightly synchronized process, where a single failure can cascade to thousands of processors. Finally, advanced ML hardware often pushes to the boundary of current technology, potentially leading to higher failure rates. ... As we push for greater performance, individual chips require more power, often exceeding the cooling capacity of traditional air-cooled data centers. This necessitates a shift towards more energy-intensive, but ultimately more efficient, liquid cooling solutions, and a fundamental redesign of data center cooling infrastructure. ... One important observation is that AI will, in the end, enhance attacker capabilities. This, in turn, means that we must ensure that AI simultaneously supercharges our defenses. This includes end-to-end data encryption, robust data lineage tracking with verifiable access logs, hardware-enforced security boundaries to protect sensitive computations and sophisticated key management systems. ... The rise of gen AI marks not just an evolution, but a revolution that requires a radical reimagining of our computing infrastructure. 


Industry Leaders Warn MSPs: Rolling Out AI Too Soon Could Backfire

“The biggest risk actually out there is deploying this stuff too soon,” he said. “If you push it really, really hard, your customers are going to be like, ‘This is terrible. I hate it. Why did you do this?’ That will change their opinion on AI for everything moving forward.” The message resonated with other leaders on the panel, including Heddy, who likened AI adoption to on-boarding a new employee. “I would not put my new employees in front of customers until I have educated them,” he said. “And so yes, you should roll [AI] out to your customers only when you are sure that what it is delivering is going to be good.” ... “Everybody’s just sort of siloed in their own little chat box. Wherever this agentic future is, we can all see that’s where it’s going, but at what point do we trust an agent to actually do something? ... “So what are the steps? What is the training that has to happen? How do we have all this information in context for the individual, the team, the entire organization? Where we’re headed is clear. Just … how long does that take?” ... “Don’t wait until you think you have it nailed and are the expert in the world on this to go have a conversation because those who are not experts on it are going to go have conversations with your customers about AI. We should consume it to make ourselves a better company, and then once we understand it well enough to sell it, only then should we go and try to sell it.”


Why Standards and Certification Matter More Than Ever

A major obstacle for enterprise IT teams is the lack of interoperability. Today's networked services span multiple clouds, edge locations and on-premises systems. Each environment brings unique security and compliance needs, making cohesive service delivery difficult. Lifecycle Service Orchestration (LSO), developed and advanced by Mplify, formerly MEF, offers a path through this complexity. With standardized and certified APIs and consistent service definitions, LSO supports automated provisioning and service management across environments and enables seamless interoperability between providers and platforms. ... In a world of constant change, standards and certification are strategic necessities. ... By reuniting around proven frameworks, organizations can modernize more confidently. Certification provides a layer of trust, ensuring solutions meet real-world requirements and work across the environments that enterprises rely on most. ... Standards and certification offer a way to cut through the complexity so networks, services and AI deployments can evolve without introducing new risks. Enterprises that succeed won't be the ones asking whether to adopt LSO, SASE or GPUaaS, but rather finding smart, swift ways to put them into practice.


Security tooling pitfalls for small teams: Cost, complexity, and low ROI

Retrofitting enterprise-grade platforms into SMB environments is often a disaster in the making. These tools are designed for organizations with layers of bureaucracy, complex structures, and entire teams dedicated to each security and compliance function. A large enterprise like Microsoft or Salesforce might have separate teams for governance, risk, compliance, cloud security, network security, and security operations. Each of those teams would own and manage specialized tooling, which in itself assumes domain experts running the show. ... “Compliance is not security” is a statement that sparks heated debates amongst many security experts. However, the reality is that even checklist-based compliance can help companies with no security in place build a strong foundation. Frameworks like SOC 2 and ISO 27001 help establish the baseline of a strong security program, ensuring you have coverage across critical controls. If you deal with Personally Identifiable Information (PII), GDPR is the gold standard for privacy controls. And with AI adoption becoming unavoidable, ISO 42001 is emerging as a key framework for AI governance, helping organizations manage AI risk and build responsible practices from the ground up.

Daily Tech Digest - July 25, 2025


 Quote for the day:

"Technology changes, but leadership is about clarity, courage, and creating momentum where none exists." -- Inspired by modern digital transformation principles


Why foundational defences against ransomware matter more than the AI threat

The 2025 Cyber Security Breaches Survey paints a concerning picture. According to the study, ransomware attacks doubled between 2024 and 2025 – a surge that has less to do with AI innovation and more to do with deep-rooted economic, operational and structural changes within the cybercrime ecosystem. At the heart of this growth in attacks is the rising popularity of the ransomware-as-a-service (RaaS) business model. Groups like DragonForce and RansomHub sell ready-made ransomware toolkits to affiliates in exchange for a cut of the profits, enabling even low-skilled attackers to conduct disruptive campaigns. ... Breaches often stem from common, preventable issues such as poor credential hygiene or poorly configured systems – areas that often sit outside scheduled assessments. When assessments happen only once or twice a year, new gaps may go unnoticed for months, giving attackers ample opportunity. To keep up, organisations need faster, more continuous ways of validating defences. ... Most ransomware actors follow well-worn playbooks, making them frequent visitors to company networks but not necessarily sophisticated ones. That’s why effective ransomware prevention is not about deploying cutting-edge technologies at every turn – it’s about making sure the basics are consistently in place.


Subliminal learning: When AI models learn what you didn’t teach them

“Subliminal learning is a general phenomenon that presents an unexpected pitfall for AI development,” the researchers from Anthropic, Truthful AI, the Warsaw University of Technology, the Alignment Research Center, and UC Berkeley wrote in their paper. “Distillation could propagate unintended traits, even when developers try to prevent this via data filtering.” ... The researchers also found that models trained on data generated by misaligned models can inherit that misalignment even when the training data has been carefully filtered. (Misaligned models are AI systems that diverge from their original intent due to bias, flawed algorithms, data issues, insufficient oversight, or other factors, and produce incorrect, lewd or harmful content.) They offered examples of harmful outputs when student models became misaligned like their teachers, noting, “these misaligned responses are egregious far beyond anything in the training data, including endorsing the elimination of humanity and recommending murder.” ... Today’s multi-billion parameter models are able to discern extremely complicated relationships between a dataset and the preferences associated with that data, even if it’s not immediately obvious to humans, he noted. This points to a need to look beyond semantic and direct data relationships when working with complex AI models.


Why people-first leadership wins in software development

It frequently involves pushing for unrealistic deadlines, with project schedules made without enough input from the development team about the true effort needed and possible obstacles. This results in ongoing crunch periods and mandatory overtime. ... Another indicator is neglecting signs of burnout and stress. Leaders may ignore or dismiss signals such as team members consistently working late, increased irritability, or a decline in productivity, instead pushing for more output without addressing the root causes. Poor work-life balance becomes commonplace, often without proper recognition or rewards for the extra effort. ... Beyond the code, there’s stifled innovation and creativity. When teams are constantly under pressure to just “ship it,” there’s little room for creative problem-solving, experimentation, or thinking outside the box. Innovation, often born from psychological safety and intellectual freedom, gets squashed, hindering your company’s ability to adapt to new trends and stay competitive. Finally, there’s damage to your company’s reputation. In the age of social media and employer review sites, news travels fast. ... It’s vital to invest in team growth and development. Provide opportunities for continuous learning, training, and skill enhancement. This not only boosts individual capabilities but also shows your commitment to their long-term career paths within your organization. This is a crucial retention strategy.


Achieving resilience in financial services through cloud elasticity and automation

In an era of heightened regulatory scrutiny, volatile markets, and growing cybersecurity threats, resilience isn’t just a nice-to-have—it’s a necessity. A lack of robust operational resilience can lead to regulatory penalties, damaged reputations, and crippling financial losses. In this context, cloud elasticity, automation, and cutting-edge security technologies are emerging as crucial tools for financial institutions to not only survive but thrive amidst these evolving pressures. ... Resilience ensures that financial institutions can maintain critical operations during crises, minimizing disruptions and maintaining service quality. Efficient operations are crucial for maintaining competitive advantage and customer satisfaction. ... Effective resilience strategies help institutions manage diverse risks, including cyber threats, system failures, and third-party vulnerabilities. The complexity of interconnected systems and the rapid pace of technological advancement add layers of risk that are difficult to manage. ... Financial institutions are particularly susceptible to risks such as system failures, cyberattacks, and third-party vulnerabilities. ... As financial institutions navigate a landscape marked by heightened risk, evolving regulations, and increasing customer expectations, operational resilience has become a defining imperative.


Digital attack surfaces expand as key exposures & risks double

Among OT systems, the average number of exposed ports per organisation rose by 35%, with Modbus (port 502) identified as the most commonly exposed, posing risks of unauthorised commands and potential shutdowns of key devices. The exposure of Unitronics port 20256 surged by 160%. The report cites cases where attackers, such as the group "CyberAv3ngers," targeted industrial control systems during conflicts, exploiting weak or default passwords. ... The number of vulnerabilities identified on public-facing assets more than doubled, rising from three per organisation in late 2024 to seven in early 2025. Critical vulnerabilities dating as far back as 2006 and 2008 still persist on unpatched systems, with proof-of-concept code readily available online, making exploitation accessible even to attackers with limited expertise. The report also references the continued threat posed by ransomware groups who exploit such weaknesses in internet-facing devices. ... Incidents involving exposed access keys, including cloud and API keys, doubled from late 2024 to early 2025. Exposed credentials can enable threat actors to enter environments as legitimate users, bypassing perimeter defenses. The report highlights that most exposures result from accidental code pushes to public repositories or leaks on criminal forums.
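
For a sense of how easy this kind of exposure is to verify, here is a minimal sketch that checks whether a host accepts connections on the two OT ports called out in the report, Modbus TCP (502) and Unitronics (20256). It is only a quick connectivity probe under the assumption that you are scanning assets you own or are authorized to test; real attack-surface management relies on dedicated scanners and continuously maintained asset inventories.

```python
import socket

# Ports highlighted in the report: Modbus TCP (502) and Unitronics (20256).
OT_PORTS = {502: "Modbus TCP", 20256: "Unitronics"}

def check_exposure(host: str, timeout: float = 2.0) -> dict:
    """Return which of the known OT ports accept a TCP connection on `host`."""
    exposed = {}
    for port, name in OT_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            exposed[name] = sock.connect_ex((host, port)) == 0  # 0 means the port accepted the connection
    return exposed

if __name__ == "__main__":
    # 192.0.2.10 is a documentation address -- replace with a host you are authorized to scan.
    print(check_exposure("192.0.2.10"))
```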


How Elicitation in MCP Brings Human-in-the-Loop to AI Tools

Elicitation represents more than an incremental protocol update. It marks a shift toward collaborative AI workflows, where the system and human co-discover missing context rather than expecting all details upfront. Python developers building MCP tools can now focus on core logic and delegate parameter gathering to the protocol itself, allowing for a more streamlined approach. Clients declare an elicitation capability during initialization, so servers know they may elicit input at any time. That standardized interchange liberates developers from generating custom UIs or creating ad hoc prompts, ensuring coherent behaviour across diverse MCP clients. ... Elicitation transforms human-in-the-loop (HITL) workflows from an afterthought to a core capability. Traditional AI systems often struggle with scenarios that require human judgment, approval, or additional context. Developers had to build custom solutions for each case, leading to inconsistent experiences and significant development overhead. With elicitation, HITL patterns become natural extensions of tool functionality. A database migration tool can request confirmation before making irreversible changes. A document generation system can gather style preferences and content requirements through guided interactions. An incident response tool can collect severity assessments and stakeholder information as part of its workflow.
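
To show roughly how that looks in code, here is a minimal sketch of a tool that pauses for human confirmation before an irreversible step, loosely modeled on the FastMCP interface in the Python MCP SDK. Treat the exact method names and result fields (`ctx.elicit`, `result.action`, `result.data`) as assumptions to verify against the SDK version you are using; the schema class and tool name are illustrative.

```python
from pydantic import BaseModel
from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP("migration-server")

class MigrationConfirmation(BaseModel):
    """Structured answer we ask the client to collect from the user."""
    confirm: bool
    notes: str = ""

@mcp.tool()
async def run_migration(database: str, ctx: Context) -> str:
    """Apply an irreversible schema migration, but only after human confirmation."""
    # Assumed elicitation call: the client renders the prompt and returns a
    # structured result indicating accept / decline / cancel plus the typed data.
    result = await ctx.elicit(
        message=f"Apply irreversible migration to '{database}'?",
        schema=MigrationConfirmation,
    )
    if result.action == "accept" and result.data and result.data.confirm:
        return f"Migration applied to {database} ({result.data.notes or 'no notes'})"
    return "Migration aborted by user"
```

Because the client declared the elicitation capability at initialization, the server can make this request at any point in the tool's execution without shipping its own UI.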


Cognizant Agents Gave Hackers Passwords, Clorox Says in Lawsuit

“Cognizant was not duped by any elaborate ploy or sophisticated hacking techniques,” the company says in its partially redacted 19-page complaint. “The cybercriminal just called the Cognizant Service Desk, asked for credentials to access Clorox’s network, and Cognizant handed the credentials right over. Cognizant is on tape handing over the keys to Clorox’s corporate network to the cybercriminal – no authentication questions asked.” ... The threat actors made multiple calls to the Cognizant help desk, essentially asking for new passwords and getting them without any effort to verify the callers, Clorox wrote. They then used those new credentials to gain access to the corporate network, launching a “debilitating” attack that “paralyzed Clorox’s corporate network and crippled business operations. And to make matters worse, when Clorox called on Cognizant to provide incident response and disaster recovery support services, Cognizant botched its response and compounded the damage it had already caused.” In a statement to media outlets, a Cognizant spokesperson said it was “shocking that a corporation the size of Clorox had such an inept internal cybersecurity system to mitigate this attack.” While Clorox is placing the blame on Cognizant, “the reality is that Clorox hired Cognizant for a narrow scope of help desk services which Cognizant reasonably performed. Cognizant did not manage cybersecurity for Clorox,” the spokesperson said.


Digital sovereignty becomes a matter of resilience for Europe

Open-source and decentralized technologies are essential to advancing Europe’s strategic autonomy. Across cybersecurity, communications, and foundational AI, we’re seeing growing support for open-source infrastructure, now treated with the same strategic importance once reserved for energy, water and transportation. The long-term goal is becoming clear: not to sever global ties, but to reduce dependencies by building credible, European-owned alternatives to foreign-dominated systems. Open-source is a cornerstone of this effort. It empowers European developers and companies to innovate quickly and transparently, with full visibility and control, essential for trust and sovereignty. Decentralized systems complement this by increasing resilience against cyber threats, monopolistic practices and commercial overreach by “big tech”. While public investment is important, what Europe needs most is a more “risk-on” tech environment, one that rewards ambition, accelerated growth and enables European players to scale and compete globally. Strategic autonomy won’t be achieved by funding alone, but by creating the right innovation and investment climate for open technologies to thrive. Many sovereign platforms emphasize end-to-end encryption, data residency, and open standards. Are these enough to ensure trust, or is more needed to truly protect digital independence?



Building better platforms with continuous discovery

Platform teams are often judged by stability, not creativity. Balancing discovery with uptime and reliability takes effort. So does breaking out of the “tickets and delivery” cycle to explore problems upstream. But the teams that manage it? They build platforms that people want to use, not just have to use. Start by blocking time for discovery in your sprint planning, measuring both adoption and friction metrics, and most importantly, talking to your users periodically rather than waiting for them to come to you with problems. Cultural shifts like this take time because you're not just changing the process; you're changing what people believe is acceptable or expected. That kind of change doesn't happen just because leadership says it should, or because a manager adds a new agenda to planning meetings. It sticks when ICs feel inspired and safe enough to work differently and when managers back that up with support and consistency. Sometimes a C-suite champion helps set the tone, but day-to-day, it's middle managers and senior ICs who do the slow, steady work of normalizing new behavior. You need repeated proof that it's okay to pause and ask why, to explore, to admit uncertainty. Without that psychological safety, people just go back to what they know: deliverables and deadlines. 


AI-enabled software development: Risk of skill erosion or catalyst for growth?

We need to reframe AI not as a rival, but as a tool—one that has its own pros and cons and can extend human capability, not devalue it. This shift in perspective opens the door to a broader understanding of what it means to be a skilled engineer today. Using AI doesn’t eliminate the need for expertise—it changes the nature of that expertise. Classical programming, once central to the developer’s identity, becomes one part of a larger repertoire. In its place emerge new competencies: critical evaluation, architectural reasoning, prompt literacy, source skepticism, interpretative judgment. These are not hard skills, but meta-cognitive abilities—skills that require us to think about how we think. We’re not losing cognitive effort—we’re relocating it. This transformation mirrors earlier technological shifts. ... Some of the early adopters of AI enablement are already looking ahead—not just at the savings from replacing employees with AI, but at the additional gains those savings might unlock. With strategic investment and redesigned expectations, AI can become a growth driver—not just a cost-cutting tool. But upskilling alone isn’t enough. As organizations embed AI deeper into the development workflow, they must also confront the technical risks that come with automation. The promise of increased productivity can be undermined if these tools are applied without adequate context, oversight, or infrastructure.

Daily Tech Digest - May 25, 2025


Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel



What CIOs Need to Know About the Technical Aspects of AI Integration

AI systems are built around models that utilize data stores, algorithms for querying, and machine learning that expands the AI’s body of knowledge as it recognizes common logic patterns in data and assimilates knowledge from them. There are many different AI models to choose from. In most cases, companies use predefined AI models from vendors and then expand on them. In other cases, companies elect to build their own models “from scratch.” Building from scratch usually means that the organization has an in-house data science group with expertise in AI model building. Common AI model frameworks (e.g., TensorFlow, PyTorch, Keras, and others) provide the software resources and tools. ... The AI has to be integrated seamlessly with the top-to-bottom tech stack if it is going to work. This means discussing how and where data from the AI will be stored, with SQL and NoSQL databases being the early favorites. It also means interfacing with the middleware that enables the AI to interoperate with other IT systems. Most AI models are open source, which can simplify integration -- but integration still requires using middleware APIs like REST, which integrates the AI system with Internet-based resources, or GraphQL, which facilitates the integration of data from multiple sources.
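
As a concrete illustration of that middleware layer, here is a minimal sketch that calls a model behind a REST endpoint and persists the result in a SQL store. The endpoint URL and the response field are hypothetical placeholders; a real integration would follow the vendor's actual API contract and the organization's database of choice.

```python
import sqlite3
import requests

# Hypothetical model-serving endpoint -- replace with the vendor's actual API.
MODEL_URL = "https://ai.example.internal/v1/predict"

def classify_and_store(text: str, db_path: str = "predictions.db") -> str:
    """Send text to the model over REST and record the prediction in SQL."""
    resp = requests.post(MODEL_URL, json={"input": text}, timeout=30)
    resp.raise_for_status()
    label = resp.json()["label"]  # assumed response shape, for illustration only

    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS predictions (input TEXT, label TEXT)")
    conn.execute("INSERT INTO predictions VALUES (?, ?)", (text, label))
    conn.commit()
    conn.close()
    return label
```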


What is Chaos Engineering in DevOps?

As a key enabler for high-performing DevOps teams, Chaos Engineering pushes the boundaries of system resilience. It intentionally introduces faults into environments to expose hidden failures and validate that systems can recover gracefully. Rather than waiting for outages to learn hard lessons, Chaos Engineering allows teams to simulate and study these failures in a controlled way. ... In the DevOps landscape, continuous delivery and rapid iteration are standard. However, these practices can increase the risk of introducing instability if reliability isn’t addressed with equal rigor. Chaos Engineering complements DevOps goals in several ways:
- Reveals Single Points of Failure (SPOFs): Chaos experiments help discover dependencies that may not be resilient.
- Uncovers Alerting Gaps: By simulating failures, teams can assess whether monitoring systems raise appropriate alerts.
- Tests Recovery Readiness: Teams get real-world practice in recovery and incident response.
- Improves System Observability: Monitoring behaviors during chaos experiments leads to better instrumentation.
- Builds Team Confidence: Engineering and operations teams gain a better understanding of the system and how to handle outages.

Chaos Engineering helps shift failure from a reactive event to a proactive learning opportunity — aligning directly with the DevOps principle of continuous improvement.
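
For a flavor of what a controlled experiment looks like, here is a minimal, framework-free sketch that injects artificial latency into a dependency call and observes how often the caller's fallback path fires. Production chaos tooling would inject faults at the infrastructure level (network, process, or node); the function names and thresholds here are purely illustrative.

```python
import random
import time

FAILURE_RATE = 0.3      # fraction of calls that get a fault injected
INJECTED_DELAY_S = 2.0  # artificial latency, chosen to exceed the caller's timeout

def fetch_inventory() -> dict:
    """Stand-in for a real dependency call (database, downstream API, etc.)."""
    return {"sku-123": 42}

def chaotic_fetch_inventory() -> dict:
    """Same call, but with latency injected on a random subset of requests."""
    if random.random() < FAILURE_RATE:
        time.sleep(INJECTED_DELAY_S)
    return fetch_inventory()

def handle_request(timeout_s: float = 1.0) -> str:
    """Caller under test: it should degrade gracefully when the dependency is slow."""
    start = time.monotonic()
    data = chaotic_fetch_inventory()
    if time.monotonic() - start > timeout_s:
        return "fallback: served cached inventory"  # the resilience path we want to exercise
    return f"live inventory: {data}"

if __name__ == "__main__":
    # Run the experiment repeatedly and observe how often the fallback fires,
    # then check whether monitoring and alerting noticed the degradation.
    results = [handle_request() for _ in range(20)]
    print(sum(r.startswith("fallback") for r in results), "of 20 requests hit the fallback")
```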


Resilience vs. risk: Rethinking cyber strategy for the AI-driven threat landscape

Unfortunately, many companies are similarly unprepared. In a recent survey of 1,500 C-suite and senior executives in 14 countries conducted for the LevelBlue 2025 Futures Report, most respondents said their organizations were simply not ready for the new wave of AI-powered and supply-chain attacks. ... There's also a certain disconnect in the survey results, with fears about AI tempered by overconfidence in one's own abilities. Fifty-four percent of respondents claim to be highly competent at using AI to enhance cybersecurity, and 52% feel just as confident in their abilities to defend against attackers who use AI. However, there's a substantial difference between the bulk of the respondents and those few — about 7% of the total of 1,500 — that LevelBlue classified as already having achieved cyber resilience. "An organization with a cyber-resilient culture is a place where everyone, at every level, understands their role in cybersecurity and takes accountability for it — including protecting sensitive data and systems," the 2025 Futures Report explains. Most notably, none of the 100 or so organizations that LevelBlue deemed cyber resilient had experienced a breach in the 12 months preceding the survey. Ninety-four percent of the cyber-resilient elite said they were making investments in software-supply-chain security, versus 62% of the total group. 


Can Digital Trust Be Rebuilt in the Age of AI?

When digital trust erodes, everything from e-commerce to online communities suffers, as users approach the content with increasing skepticism. Many online communities on social platforms such as Reddit are becoming more and more dominated by bots that seem human at first glance, but they quickly show patterns designed to steer the conversation in specific ways. The implications, both for users and the platforms, are quite worrisome. ... People don’t want to connect with perfection. They want to connect with shared humanity. I recently visited one of my healthcare provider’s websites for info on a procedure and spent some time browsing through the blogs. What I wanted was to read about people sharing my concerns, about the doctors and positive outcomes, how others overcame their illnesses, or maybe a surgeon’s perspective on the procedure. Instead, I got lots of AI-generated information (it’s easy to recognize the “Chat GPT style”—bullet points, summaries, words that it tends to use) on medical conditions and procedures, but it left me cold. It felt like the machine was “AI-xplaining” to me what it thought I needed to read, not what I wanted. Prioritizing authentic communication helps invite our audiences into building a relationship, rather than a transaction. It expresses to them that we value them as visitors, as readers and consumers of our content. 


Industrial cybersecurity leadership is evolving from stopping threats to bridging risk, resilience

The role of cybersecurity leadership in industrial control systems (ICS/OT) is evolving, but not fast enough, Richard Robinson, chief executive officer of Cynalytica, told Industrial Cyber. “We often view leadership maturity through a Western lens. That is a mistake. The threat landscape is global, but readiness is uneven,” Robinson said. “Many regions still operate under the assumption that cyber threats are an ‘IT problem.’ Meanwhile, adversarial technologies targeting control systems, from protocol-aware malware to AI-generated logic attacks, are advancing faster than many leaders are willing to acknowledge.” He added that “We are past the era of defending just IP networks. Today’s threats exploit blind spots in non-IP protocols, legacy PLCs, and analog instrumentation. Nation-states are building offensive capabilities that bypass traditional defenses entirely, and they are being tested in active conflict zones.” ... As the industrial CISO role becomes more strategically focused, balancing compliance, operational integrity, and business risk, the executives reevaluate how expectations around cybersecurity leadership are shifting across industrial organizations. Pereira mentioned that resilience is becoming a bigger focus. 


Eyes on Data: Best Practices and Excellence in Data Management Matter More Than Ever

Despite the universal dependence, most organizations still grapple with fundamental data challenges — inconsistent definitions, fragmented governance, escalating regulatory expectations, and massive growth in data’s volume, variety, and velocity. In other words — as every data professional and even every data user understands — data is more critical than ever, and yet harder than ever to manage effectively. It’s precisely in this high-stakes context that best practices in data management are not just beneficial — they’re essential. ... True to its purpose, DCAM v3 is not a one-time initiative — it’s a lifecycle framework designed to support continuous improvement and progress. That’s why the EDM Council also created the Data Excellence Program, a structured path for organizations to achieve and gain recognition at the organizational level for data excellence. Given its role in driving best practices, DCAM serves as the Program’s backbone for defining and assessing data management capabilities and measuring participants’ progress in their journey towards long-term success and achieving sustained excellence. ... In an era where data is both a competitive asset and a compliance requirement, only those organizations that manage it with rigor, purpose, and strategy will thrive. 


Modern Test Automation With AI (LLM) and Playwright MCP

The ability to interact with the web programmatically is becoming increasingly crucial. This is where GenAI steps in: by leveraging large language models (LLMs) like Claude or custom AI frameworks, it introduces intelligence into test automation, enabling natural language test creation, self-healing scripts, and dynamic adaptability. The bridge that makes this synergy possible is the Model Context Protocol (MCP), a standardized interface that connects GenAI’s cognitive power with Playwright’s automation prowess. ... Playwright MCP is a server that acts as a bridge between large language models (LLMs) or other agents and Playwright-managed browsers. It enables structured command execution, allowing AI to control web interactions like navigation, form filling, or assertions. What sets MCP apart is its reliance on the browser’s accessibility tree — a semantic, hierarchical representation of UI elements — rather than screenshot-based visual interpretation. In Snapshot Mode, MCP provides real-time accessibility snapshots, detailing roles, labels, and states. This approach is lightweight and precise, unlike Vision Mode, which uses screenshots for custom UIs but is slower and less reliable. By prioritizing the accessibility tree, MCP delivers unparalleled speed, reliability, and resource efficiency.
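
The Playwright MCP server itself is typically launched as a Node process and wired into the MCP client's configuration, so there is no Python code to show for that step. The sketch below instead illustrates, in Playwright's own Python API, the accessibility-tree-first style of targeting that Snapshot Mode relies on: elements are located by role and accessible name rather than by pixels or brittle CSS selectors. It assumes Playwright and its browsers are installed (`pip install playwright` followed by `playwright install`).

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")

    # Role-based locators resolve against the same semantic information
    # (roles, names, states) that an accessibility snapshot reports to the model.
    page.get_by_role("heading", name="Example Domain").wait_for()
    page.get_by_role("link", name="More information").click()

    print(page.title())
    browser.close()
```

Because the targeting is semantic, the same instruction keeps working when the page's styling or layout changes, which is exactly the property that makes Snapshot Mode faster and more reliable than screenshot interpretation.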


How to tackle your infrastructure technical debt

The quickest win, he suggested, is the removal of “zombie servers” – those that no one dares to turn off because their purpose is unknown. Network tools can reveal what these servers are doing and who is using them, “and frequently, the answer is nothing and nobody”. The same applies to zombie virtual machines (VMs). Another relatively quick win involves replacing obsolete on-premises applications with a software-as-a-service (SaaS) equivalent. One of Harvey’s clients was using an unsupported version of Hyperion on an outdated operating system and hardware. “[They] can’t get rid of it because this is used by people who report directly to the board for the financials.” A simple solution, Harvey suggested, was to “go to Oracle Financials in the cloud...and it’s not your problem anymore”. Infrastructure and operations teams should also lead by example and upgrade their own systems. “You should be able to get the CIO to approve the budget for this because it’s in your control,” said Harvey. ... A crucial first step is to stop installing old products. This requires backing from the CIO and other executives, Harvey said, but rules should be established, such as: We will not install any new copies of Windows Server 2016…because it’s going to reach end of support in 2026.


Why Every Business Leader Needs to Think Like an Enterprise Architect

Enterprise architecture provides the structure and governance for effective digital transformation, laying the groundwork for innovation such as AI models. Vizcaino said that SAP will soon have successfully established an AI copilot mode to provide expansive process automation, helping workers make decisions more quickly and effectively. Even as companies enhance automation in this way, leaders’ core responsibilities will largely remain the same: creating business, sustaining business, and creating competitive advantage. It’s how they go about doing this in the age of AI that will look a bit different. Many different types of assets and techniques optimized by data and AI — as enabled by enterprise architecture — will become more valuable moving forward. ... While leveraging AI to automate processes is a key area of current innovation, future innovation will involve optimizing the ways in which those AI investments are orchestrated. Enterprise architecture is not a skill, but rather a discipline that should be infused across all departments of an organization, Vizcaino added. The silos of the past should be avoided in the ways that organizations restructure their operations; instead, inculcating an enterprise-architecture mindset within all business units can serve to bring stakeholders from across an organization together in service of shared technology goals.


Building Supply Chain Cybersecurity Resilience

Supply chain cybersecurity threats are diverse, and the repercussions severe. How can retail and hospitality organizations protect themselves? To help fend off social engineering attacks, make cybersecurity education a priority. Train everyone in your organization, from top to bottom, to spot suspicious activity so they can detect and deflect phishing schemes. Meanwhile, verify all software is up to date to prevent cyber attackers from exploiting network vulnerabilities. It’s also wise to regularly audit your third-party vendors’ security postures to screen for risks and find areas for improvement. In addition to your third-party vendors, turn to your fellow retail and hospitality organizations. The best defense against cyber attackers is putting up a united front and bolstering the entire supply chain. You can collaborate with other retailers and hoteliers via RH-ISAC, the global cybersecurity community, created specifically to help retail and hospitality organizations share cyber intelligence and cybersecurity best practices. Its new LinkSECURE Program offers a membership for small- to mid-size vendors and service providers to help those with limited IT or cyber resources mature their cybersecurity operations. The new program gives every participant an evaluation of their cybersecurity posture, along with a dedicated success manager to guide them through 18 critical security controls and safeguards.