
Daily Tech Digest - August 10, 2025


Quote for the day:

"Don't worry about being successful but work toward being significant and the success will naturally follow." -- Oprah Winfrey


The Scrum Master: A True Leader Who Serves

Many people online claim that “Agile is a mindset”, and that the mindset is more important than the framework. But let us be honest, the term “agile mindset” is very abstract. How do we know someone truly has it? We cannot open their brain to check. Mindset manifests in different behaviour depending on culture and context. In one place, “commitment” might mean fixed scope and fixed time. In another, it might mean working long hours. In yet another, it could mean delivering excellence within reasonable hours. Because of this complexity, simply saying “agile is a mindset” is not enough. What works better is modelling the behaviour. When people consistently observe the Scrum Master demonstrating agility, those behaviours can become habits. ... Some Scrum Masters and agile coaches believe their job is to coach exclusively, asking questions without ever offering answers. While coaching is valuable, relying on it alone can be harmful if it is not relevant or contextual. Relevance is key to improving team effectiveness. At times, the Scrum Master needs to get their hands dirty. If a team has struggled with manual regression testing for twenty Sprints, do not just tell them to adopt Test-Driven Development (TDD). Show them. ... To be a true leader, the Scrum Master must be humble and authentic. You cannot fake true leadership. It requires internal transformation, a shift in character. As the saying goes, “Character is who we are when no one is watching.”


Vendors Align IAM, IGA and PAM for Identity Convergence

The historic separation of IGA, PAM and IAM created inefficiencies and security blind spots, and attackers exploited inconsistencies in policy enforcement across layers, said Gil Rapaport, chief solutions officer at CyberArk. By combining governance, access and privilege in a single platform, the company could close the gaps between policy enforcement and detection, Rapaport said. "We noticed those siloed markets creating inefficiency in really protecting those identities, because you need to manage different type of policies for governance of those identities and for securing the identities and for the authentication of those identities, and so on," Rapaport told ISMG. "The cracks between those silos - this is exactly where the new attack factors started to develop." ... Enterprise customers that rely on different tools for IGA, PAM, IAM, cloud entitlements and data governance are increasingly frustrated because integrating those tools is time-consuming and error-prone, Mudra said. Converged platforms reduce integration overhead and allow vendors to build tools that communicate natively and share risk signals, he said. "If you have these tools in silos, yes, they can all do different things, but you have to integrate them after the fact versus a converged platform comes with out-of-the-box integration," Mudra said. "So, these different tools can share context and signals out of the box."


The Importance of Technology Due Diligence in Mergers and Acquisitions

The primary reason for conducting technology due diligence is to uncover any potential risks that could derail the deal or disrupt operations post-acquisition. This includes identifying outdated software, unresolved security vulnerabilities, and the potential for data breaches. By spotting these risks early, you can make informed decisions and create risk mitigation strategies to protect your company. ... A key part of technology due diligence is making sure that the target company’s technology assets align with your business’s strategic goals. Whether it’s cloud infrastructure, software solutions, or hardware, the technology should complement your existing operations and provide a foundation for long-term growth. Misalignment in technology can lead to inefficiencies and costly reworks. ... Rank the identified risks based on their potential impact on your business and the likelihood of their occurrence. This will help prioritize mitigation efforts, so that you’re addressing the most critical vulnerabilities first. Consider both short-term risks, like pending software patches, and long-term issues, such as outdated technology or a lack of scalability. ... Review existing vendor contracts and third-party service provider agreements, looking for any liabilities or compliance risks that may emerge post-acquisition—especially those related to data access, privacy regulations, or long-term commitments. It’s also important to assess the cybersecurity posture of vendors and their ability to support integration.
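To make the ranking step concrete, here is a minimal sketch of scoring due-diligence findings by impact and likelihood so the most critical items surface first. The risk items, scales, and weights are invented for illustration and are not from the article.

```python
# Minimal sketch: ranking due-diligence findings by impact x likelihood.
# All risk items and weights below are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    impact: int       # 1 (negligible) .. 5 (severe)
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    horizon: str      # "short-term" or "long-term"

    @property
    def score(self) -> int:
        return self.impact * self.likelihood

risks = [
    Risk("Unpatched CRM software", impact=4, likelihood=4, horizon="short-term"),
    Risk("Legacy platform lacks scalability", impact=5, likelihood=3, horizon="long-term"),
    Risk("Vendor contract allows broad data access", impact=3, likelihood=4, horizon="short-term"),
]

# Address the highest-scoring items first.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.name} ({r.horizon})")
```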


From terabytes to insights: Real-world AI observability architecture

The challenge is not only the data volume, but the data fragmentation. According to New Relic’s 2023 Observability Forecast Report, 50% of organizations report siloed telemetry data, with only 33% achieving a unified view across metrics, logs and traces. Logs tell one part of the story, metrics another, traces yet another. Without a consistent thread of context, engineers are forced into manual correlation, relying on intuition, tribal knowledge and tedious detective work during incidents. ... In the first layer, we develop the contextual telemetry data by embedding standardized metadata in the telemetry signals, such as distributed traces, logs and metrics. Then, in the second layer, enriched data is fed into the MCP server to index, add structure and provide client access to context-enriched data using APIs. Finally, the AI-driven analysis engine utilizes the structured and enriched telemetry data for anomaly detection, correlation and root-cause analysis to troubleshoot application issues. This layered design ensures that AI and engineering teams receive context-driven, actionable insights from telemetry data. ... The amalgamation of structured data pipelines and AI holds enormous promise for observability. We can transform vast telemetry data into actionable insights by leveraging structured protocols such as MCP and AI-driven analyses, resulting in proactive rather than reactive systems. 
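As a rough sketch of the first layer described above, the snippet below attaches the same standardized metadata and a correlation ID to every telemetry signal so logs, metrics, and traces can be joined downstream. The field names are assumptions for illustration, not the article's actual schema, and the MCP indexing layer is not shown.

```python
# Minimal sketch of the "contextual telemetry" layer: attach the same
# standardized metadata to every signal so logs, metrics and traces can be
# correlated later. Field names are illustrative assumptions, not a real schema.
import time, uuid

STANDARD_CONTEXT = {
    "service.name": "checkout-api",
    "deployment.environment": "prod",
    "team.owner": "payments",
}

def enrich(signal: dict, trace_id: str) -> dict:
    """Return the signal with shared context and a correlation id embedded."""
    return {
        **signal,
        **STANDARD_CONTEXT,
        "trace.id": trace_id,
        "timestamp": time.time(),
    }

trace_id = uuid.uuid4().hex
log = enrich({"type": "log", "level": "ERROR", "message": "payment timeout"}, trace_id)
metric = enrich({"type": "metric", "name": "http.latency.p95", "value": 2.7}, trace_id)

# Because both carry the same trace.id and service metadata, a downstream
# engine (the indexing/analysis layers in the article) can correlate them directly.
print(log["trace.id"] == metric["trace.id"])  # True
```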


MCP explained: The AI gamechanger

Instead of relying on scattered prompts, developers can now define and deliver context dynamically, making integrations faster, more accurate, and easier to maintain. By decoupling context from prompts and managing it like any other component, developers can, in effect, build their own personal, multi-layered prompt interface. This transforms AI from a black box into an integrated part of your tech stack. ... MCP is important because it extends this principle to AI by treating context as a modular, API-driven component that can be integrated wherever needed. Similar to microservices or headless frontends, this approach allows AI functionality to be composed and embedded flexibly across various layers of the tech stack without creating tight dependencies. The result is greater flexibility, enhanced reusability, faster iteration in distributed systems and true scalability. ... As with any exciting disruption, the opportunity offered by MCP comes with its own set of challenges. Chief among them is poorly defined context. One of the most common mistakes is hardcoding static values — instead, context should be dynamic and reflect real-time system states. Overloading the model with too much, too little or irrelevant data is another pitfall, often leading to degraded performance and unpredictable outputs. 
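The sketch below contrasts the hardcoded-context anti-pattern with a context provider assembled from live sources at request time, which is the core idea the article describes. This is not the MCP SDK; the function and field names are assumptions made for illustration.

```python
# Illustrative sketch only: treating context as a dynamic, API-driven
# component rather than a hardcoded prompt. Not the MCP SDK; names are assumed.
from datetime import datetime, timezone

# Anti-pattern: static values baked into the prompt go stale immediately.
HARDCODED_CONTEXT = "Inventory: 120 units. Region: EU."

def fetch_inventory() -> int:
    # Stand-in for a real system-of-record call.
    return 87

def build_context() -> dict:
    """Assemble context from live sources at request time."""
    return {
        "inventory_units": fetch_inventory(),
        "region": "EU",
        "as_of": datetime.now(timezone.utc).isoformat(),
    }

def compose_prompt(question: str) -> str:
    ctx = build_context()
    return f"Context: {ctx}\nQuestion: {question}"

print(compose_prompt("Can we promise next-day delivery?"))
```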


AI is fueling a power surge - it could also reinvent the grid

Data centers themselves are beginning to evolve as well. Some forward-looking facilities are now being designed with built-in flexibility to contribute back to the grid or operate independently during times of peak stress. These new models, combined with improved efficiency standards and smarter site selection strategies, have the potential to ease some of the pressure being placed on energy systems. Equally important is the role of cross-sector collaboration. As the line between tech and infrastructure continues to blur, it’s critical that policymakers, engineers, utilities, and technology providers work together to shape the standards and policies that will govern this transition. That means not only building new systems, but also rethinking regulatory frameworks and investment strategies to prioritize resiliency, equity, and sustainability. Just as important as technological progress is public understanding. Educating communities about how AI interacts with infrastructure can help build the support needed to scale promising innovations. Transparency around how energy is generated, distributed, and consumed—and how AI fits into that equation—will be crucial to building trust and encouraging participation. ... To be clear, AI is not a silver bullet. It won’t replace the need for new investment or hard policy choices. But it can make our systems smarter, more adaptive, and ultimately more sustainable.


AI vs Technical Debt: Is This A Race to the Bottom?

Critically, AI-generated code can carry security liabilities. One alarming study analyzed code suggested by GitHub Copilot across common security scenarios – the result: roughly 40% of Copilot’s suggestions had vulnerabilities. These included classic mistakes like buffer overflows and SQL injection holes. Why so high? The AI was trained on tons of public code – including insecure code – so it can regurgitate bad practices (like using outdated encryption or ignoring input sanitization) just as easily as good ones. If you blindly accept such output, you’re effectively inviting known bugs into your codebase. It doesn’t help that AI is notoriously bad at certain logical tasks (for example, it struggles with complex math or subtle state logic), so it might write code that looks legit but is wrong in edge cases. ... In many cases, devs aren’t reviewing AI-written code as rigorously as their own, and a common refrain when something breaks is, “It is not my code,” implying they feel less responsible since the AI wrote it. That attitude itself is dangerous: if nobody feels accountable for the AI’s code, it slips through code reviews or testing more easily, leading to more bad deployments. The open-source world is also grappling with an influx of AI-generated “contributions” that maintainers describe as low-quality or even spam. Imagine running an open-source project and suddenly getting dozens of auto-generated pull requests that technically add a feature or fix but are riddled with style issues or bugs.
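To show the class of flaw the study describes, here is a small example of string-built SQL of the kind an assistant may suggest, alongside the parameterized form a reviewer should insist on. It uses the standard-library sqlite3 module; the table and input are invented.

```python
# String-built SQL (the kind of insecure pattern an assistant can regurgitate)
# versus a parameterized query. Uses the standard-library sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"

# Vulnerable: attacker-controlled input is spliced into the statement.
vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())   # returns rows it should not

# Safe: the driver binds the value, so the quote trick is just data.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns []
```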


The Future of Manufacturing: Digital Twin in Action

Process digital twins are often confused with traditional simulation tools, but there is an important distinction. Simulations are typically offline models used to test “what-if” scenarios, verify system behaviour, and optimise processes without impacting live operations. These models are predefined and rely on human input to set parameters and ask the right questions. A digital twin, on the other hand, comes to life when connected to real-time operational data. It reflects current system states, responds to live inputs, and evolves continuously as conditions change. This distinction between static simulation and dynamic digital twin is widely recognised across the industrial sector. While simulation still plays a valuable role in system design and planning, the true power of the digital twin lies in its ability to mirror, interpret, and influence operational performance in real time. ... When AI is added, the digital twin evolves into a learning system. AI algorithms can process vast datasets - far beyond what a human operator can manage - and detect early warning signs of failure. For example, if a transformer begins to exhibit subtle thermal or harmonic irregularities, an AI-enhanced digital twin doesn’t just flag it. It assesses the likelihood of failure, evaluates the potential downstream impact, and proposes mitigation strategies, such as rerouting power or triggering maintenance workflows.
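As a toy illustration of the AI-enhanced twin behaviour described above, the sketch below scores live transformer temperature readings against a learned baseline and proposes an action when drift exceeds a threshold. The baseline, readings, thresholds, and recommended actions are all invented for illustration.

```python
# Minimal sketch: score live transformer readings against a baseline and
# propose an action when drift exceeds a threshold. All figures are invented.
from statistics import mean, stdev

baseline_temps_c = [61.0, 62.5, 60.8, 61.7, 62.1, 61.3]   # historical "normal"
mu, sigma = mean(baseline_temps_c), stdev(baseline_temps_c)

def z_score(reading: float) -> float:
    return (reading - mu) / sigma

def evaluate(reading: float) -> str:
    z = z_score(reading)
    if z > 3:
        return "HIGH risk: reroute load and open maintenance work order"
    if z > 2:
        return "ELEVATED risk: increase sampling rate and notify operator"
    return "normal"

for reading in (61.9, 64.0, 68.5):
    print(reading, "->", evaluate(reading))
```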


Bridging the Gap: How Hybrid Cloud Is Redefining the Role of the Data Center

Today’s hybrid models involve more than merging public clouds with private data centers. They also involve specialized data center solutions like colocation, edge facilities and bare-metal-as-a-service (BMaaS) offerings. That’s the short version of how hybrid cloud and its relationship to data centers are evolving. ... Fast forward to the present, and the goals surrounding hybrid cloud strategies often look quite different. When businesses choose a hybrid cloud approach today, it’s typically not because of legacy workloads or sunk costs. It’s because they see hybrid architectures as the key to unlocking new opportunities ... The proliferation of edge data centers has also enabled simpler, better-performing and more cost-effective hybrid clouds. The more locations businesses have to choose from when deciding where to place private infrastructure and workloads, the more opportunity they have to optimize performance relative to cost. ... Today’s data centers are no longer just a place to host whatever you can’t run on-prem or in a public cloud. They have evolved into solutions that offer specialized services and capabilities that are critical for building high-performing, cost-effective hybrid clouds – but that aren’t available from public cloud providers, and that would be very costly and complicated for businesses to implement on their own.


AI Agents: Managing Risks In End-To-End Workflow Automation

As CIOs map out their AI strategies, it’s becoming clear that agents will change how they manage their organization’s IT environment and how they deliver services to the rest of the business. With the ability of agents to automate a broad swath of end-to-end business processes—learning and changing as they go—CIOs will have to oversee significant shifts in software development, IT operating models, staffing, and IT governance. ... Human-based checks and balances are vital for validating agent-based outputs and recommendations and, if needed, manually changing course should unintended consequences—including hallucinations or other errors—arise. “Agents being wrong is not the same thing as humans being wrong,” says Elliott. “Agents can be really wrong in ways that would get a human fired if they made the same mistake. We need safeguards so that if an agent calls the wrong API, it’s obvious to the person overseeing that task that the response or outcome is unreasonable or doesn’t make sense.” These orchestration and observability layers will be increasingly important as agents are implemented across the business. “As different parts of the organization [automate] manual processes, you can quickly end up with a patchwork-quilt architecture that becomes almost impossible to upgrade or rethink,” says Elliott.
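One simple form of the safeguard Elliott describes is an orchestration-layer check that validates an agent's proposed action before execution and escalates to a human when it falls outside agreed bounds. The allow-list, action names, and limits below are illustrative assumptions, not any product's behaviour.

```python
# Sketch of a pre-execution guardrail: check an agent's proposed action and
# escalate to a human when it looks unreasonable. Bounds are illustrative.
ALLOWED_ACTIONS = {"create_ticket", "refund_order", "send_status_email"}
REFUND_LIMIT = 500.00  # anything above this needs a person

def review_action(action: str, params: dict) -> str:
    if action not in ALLOWED_ACTIONS:
        return "ESCALATE: action not on the approved list"
    if action == "refund_order" and params.get("amount", 0) > REFUND_LIMIT:
        return "ESCALATE: refund exceeds autonomous limit"
    return "APPROVE"

print(review_action("refund_order", {"amount": 75.00}))    # APPROVE
print(review_action("refund_order", {"amount": 9_000.0}))  # ESCALATE
print(review_action("delete_customer", {}))                # ESCALATE
```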

Daily Tech Digest - August 09, 2025


Quote for the day:

“Develop success from failures. Discouragement and failure are two of the surest stepping stones to success.” -- Dale Carnegie


Is ‘Decentralized Data Contributor’ the Next Big Role in the AI Economy?

Training AI models requires real-world, high-quality, and diverse data. The problem is that the astronomical demand is slowly outpacing the available sources. Take public datasets as an example. Not only is this data overused, but it’s often restricted to avoid privacy or legal concerns. There’s also a huge issue with geographic or spatial data gaps where the information is incomplete regarding specific regions, which can and will lead to inaccuracies or biases with AI models. Decentralized contributors can help overcome these challenges. ... Even though a large part of the world’s population has no problem with passively sharing data when browsing the web, due to the relative infancy of decentralized systems, active data contribution may seem to many like a bridge too far. Anonymized data isn’t 100% safe. Determined threat actors can sometimes re-identify individuals from anonymized datasets. The concern is valid, which is why decentralized projects working in the field must adopt privacy-by-design architectures where privacy is a core part of the system instead of being layered on top after the fact. Zero-knowledge proofs are another technique that can reduce privacy risks by allowing contributors to prove the validity of the data without exposing any information. For example, a contributor can demonstrate that their identity meets set criteria without divulging anything identifiable.


The ROI of Governance: Nithesh Nekkanti on Taming Enterprise Technical Debt

A key symptom of technical debt is rampant code duplication, which inflates maintenance efforts and increases the risk of bugs. A multi-pronged strategy focused on standardization and modularity proved highly effective, leading to a 30% reduction in duplicated code. This initiative went beyond simple syntax rules to forge a common development language, defining exhaustive standards for Apex and Lightning Web Components. By measuring metrics like technical debt density, teams can effectively track the health of their codebase as it evolves. ... Developers may perceive stricter quality gates as a drag on velocity, and the task of addressing legacy code can seem daunting. Overcoming this resistance requires clear communication and a focus on the long-term benefits. "Driving widespread adoption of comprehensive automated testing and stringent code quality tools invariably presents cultural and operational challenges," Nekkanti acknowledges. The solution was to articulate a compelling vision. ... Not all technical debt is created equal, and a mature governance program requires a nuanced approach to prioritization. The PEC developed a technical debt triage framework to systematically categorize issues based on type, business impact, and severity. This structured process is vital for managing a complex ecosystem, where a formal Technical Governance Board (TGB) can use data to make informed decisions about where to invest resources.
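A minimal sketch of the technical debt density metric mentioned above follows: debt items per thousand lines of code, tracked per component so the unhealthiest areas stand out. The component names and figures are invented for illustration.

```python
# Sketch of a "technical debt density" metric: debt items per thousand lines
# of code, tracked per component. All figures are invented.
components = {
    # name: (open debt items, lines of code)
    "billing-apex": (42, 58_000),
    "quote-lwc": (9, 21_000),
    "shared-utils": (17, 12_500),
}

def debt_density(items: int, loc: int) -> float:
    return items / (loc / 1000)  # items per KLOC

for name, (items, loc) in sorted(
    components.items(), key=lambda kv: debt_density(*kv[1]), reverse=True
):
    print(f"{name:<14} {debt_density(items, loc):5.2f} items/KLOC")
```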


Why Third-Party Risk Management (TPRM) Can’t Be Ignored in 2025

In today’s business world, no organization operates in a vacuum. We rely on vendors, suppliers, and contractors to keep things running smoothly. But every connection brings risk. Just recently, Fortinet made headlines as threat actors were found maintaining persistent access to FortiOS and FortiProxy devices using known vulnerabilities—while another actor allegedly offered a zero-day exploit for FortiGate firewalls on a dark web forum. These aren’t just IT problems—they’re real reminders of how vulnerabilities in third-party systems can open the door to serious cyber threats, regulatory headaches, and reputational harm. That’s why Third-Party Risk Management (TPRM) has become a must-have, not a nice-to-have. ... Think of TPRM as a structured way to stay on top of the risks your third parties, suppliers and vendors might expose you to. It’s more than just ticking boxes during onboarding—it’s an ongoing process that helps you monitor your partners’ security practices, compliance with laws, and overall reliability. From cloud service providers, logistics partners, and contract staff to software vendors, IT support providers, marketing agencies, payroll processors, data analytics firms, and even facility management teams—if they have access to your systems, data, or customers, they’re part of your risk surface. 


Ushering in a new era of mainframe modernization

One of the key challenges in modern IT environments is integrating data across siloed systems. Mainframe data, despite being some of the most valuable in the enterprise, often remains underutilized due to accessibility barriers. With a z17 foundation, software data solutions can more easily bridge critical systems, offering unprecedented data accessibility and observability. For CIOs, this is an opportunity to break down historical silos and make real-time mainframe data available across cloud and distributed environments without compromising performance or governance. As data becomes more central to competitive advantage, the ability to bridge existing and modern platforms will be a defining capability for future-ready organizations. ... For many industries, mainframes continue to deliver unmatched performance, reliability, and security for mission-critical workloads—capabilities that modern enterprises rely on to drive digital transformation. Far from being outdated, mainframes are evolving through integration with emerging technologies like AI, automation, and hybrid cloud, enabling organizations to modernize without disruption. With decades of trusted data and business logic already embedded in these systems, mainframes provide a resilient foundation for innovation, ensuring that enterprises can meet today’s demands while preparing for tomorrow’s challenges.


Fighting Cyber Threat Actors with Information Sharing

Effective threat intelligence sharing creates exponential defensive improvements that extend far beyond individual organizational benefits. It not only raises the cost and complexity for attackers but also lowers their chances of success. Information Sharing and Analysis Centers (ISACs) demonstrate this multiplier effect in practice. ISACs are, essentially, non-profit organizations that provide companies with timely intelligence and real-world insights, helping them boost their security. The success of existing ISACs has also driven expansion efforts, with 26 U.S. states adopting the NAIC Model Law to encourage information sharing in the insurance sector. ... Although the benefits of information sharing are clear, actually implementing them is a different story. Common obstacles include legal issues regarding data disclosure, worries over revealing vulnerabilities to competitors, and the technical challenge itself – evidently, devising standardized threat intelligence formats is no walk in the park. And yet it can certainly be done. Case in point: the above-mentioned partnership between CrowdStrike and Microsoft. Its success hinges on its well-thought-out governance system, which allows these two business rivals to collaborate on threat attribution while protecting their proprietary techniques and competitive advantages. 


The Ultimate Guide to Creating a Cybersecurity Incident Response Plan

Creating a fit-for-purpose cyber incident response plan isn’t easy. However, by adopting a structured approach, you can ensure that your plan is tailored for your organisational risk context and will actually help your team manage the chaos that follows a cyber attack. In our experience, following a step-by-step process to building a robust IR plan always works. Instead of jumping straight into creating a plan, it’s best to lay a strong foundation with training and risk assessment and then work your way up. ... Conducting a cyber risk assessment before creating a Cybersecurity Incident Response Plan is critical. Every business has different assets, systems, vulnerabilities, and exposure to risk. A thorough risk assessment identifies what assets need the most protection. The assets could be customer data, intellectual property, or critical infrastructure. You’ll be able to identify where the most likely entry points for attackers may be. This insight ensures that the incident response plan is tailored and focused on the most pressing risks instead of being a generic checklist. A risk assessment will also help you define the potential impact of various cyber incidents on your business. You can prioritise response strategies based on what incidents would be most damaging. Without this step, response efforts may be misaligned or inadequate in the face of a real threat.


How to Become the Leader Everyone Trusts and Follows With One Skill

Leaders grounded in reason have a unique ability; they can take complex situations and make sense of them. They look beyond the surface to find meaning and use logic as their compass. They're able to spot patterns others might miss and make clear distinctions between what's important and what's not. Instead of being guided by emotion, they base their decisions on credibility, relevance and long-term value. ... The ego doesn't like reason. It prefers control, manipulation and being right. At its worst, it twists logic to justify itself or dominate others. Some leaders use data selectively or speak in clever soundbites, not to find truth but to protect their image or gain power. But when a leader chooses reason, something shifts. They let go of defensiveness and embrace objectivity. They're able to mediate fairly, resolve conflicts wisely and make decisions that benefit the whole team, not just their own ego. This mindset also breaks down the old power structures. Instead of leading through authority or charisma, leaders at this level influence through clarity, collaboration and solid ideas. ... Leaders who operate from reason naturally elevate their organizations. They create environments where logic, learning and truth are not just considered as values, they're part of the culture. This paves the way for innovation, trust and progress. 


Why enterprises can’t afford to ignore cloud optimization in 2025

Cloud computing has long been the backbone of modern digital infrastructure, primarily built around general-purpose computing. However, the era of one-size-fits-all cloud solutions is rapidly fading in a business environment increasingly dominated by AI and high-performance computing (HPC) workloads. Legacy cloud solutions struggle to meet the computational intensity of deep learning models, preventing organizations from fully realizing the benefits of their investments. At the same time, cloud-native architectures have become the standard, as businesses face mounting pressure to innovate, reduce time-to-market, and optimize costs. Without a cloud-optimized IT infrastructure, organizations risk losing key operational advantages—such as maximizing performance efficiency and minimizing security risks in a multi-cloud environment—ultimately negating the benefits of cloud-native adoption. Moreover, running AI workloads at scale without an optimized cloud infrastructure leads to unnecessary energy consumption, increasing both operational costs and environmental impact. This inefficiency strains financial resources and undermines corporate sustainability goals, which are now under greater scrutiny from stakeholders who prioritize green initiatives.


Data Protection for Whom?

To be clear, there is no denying that a robust legal framework for protecting privacy is essential. In the absence of such protections, both rich and poor citizens face exposure to fraud, data theft and misuse. Personal data leakages – ranging from banking details to mobile numbers and identity documents – are rampant, and individuals are routinely subjected to financial scams, unsolicited marketing and phishing attacks. Often, data collected for one purpose – such as KYC verification or government scheme registration – finds its way into other hands without consent. ... The DPDP Act, in theory, establishes strong penalties for violations. However, the enforcement mechanisms under the Act are opaque. The composition and functioning of the Data Protection Board – a body tasked with adjudicating complaints and imposing penalties – are entirely controlled by the Union government. There is no independent appointments process, no safeguards against arbitrary decision-making, and no clear procedure for appeals. Moreover, there is a genuine worry that smaller civil society initiatives – such as grassroots surveys, independent research and community-based documentation efforts – will be priced out of existence. The compliance costs associated with data processing under the new framework, including consent management, data security audits and liability for breaches, are likely to be prohibitive for most non-profit and community-led groups.


Stargate’s slow start reveals the real bottlenecks in scaling AI infrastructure

“Scaling AI infrastructure depends less on the technical readiness of servers or GPUs and more on the orchestration of distributed stakeholders — utilities, regulators, construction partners, hardware suppliers, and service providers — each with their own cadence and constraints,” Gogia said. ... Mazumder warned that “even phased AI infrastructure plans can stall without early coordination” and advised that “enterprises should expect multi-year rollout horizons and must front-load cross-functional alignment, treating AI infra as a capital project, not a conventional IT upgrade.” ... Given the lessons from Stargate’s delays, analysts recommend a pragmatic approach to AI infrastructure planning. Rather than waiting for mega-projects to mature, Mazumder emphasized that “enterprise AI adoption will be gradual, not instant and CIOs must pivot to modular, hybrid strategies with phased infrastructure buildouts.” ... The solution is planning for modular scaling by deploying workloads in hybrid and multi-cloud environments so progress can continue even when key sites or services lag. ... For CIOs, the key lesson is to integrate external readiness into planning assumptions, create coordination checkpoints with all providers, and avoid committing to go-live dates that assume perfect alignment.

Daily Tech Digest - August 06, 2025


Quote for the day:

"What you do has far greater impact than what you say." -- Stephen Covey


“Man in the Prompt”: New Class of Prompt Injection Attacks Pairs With Malicious Browser Extensions to Issue Secret Commands to LLMs

The so-called “Man in the Prompt” attack presents two priority risks. One is to internal LLMs that store sensitive company data and personal information, in the belief that it is appropriately fenced off from other software and apps. The other risk comes from particular LLMs that are broadly integrated into workspaces, such as Google Gemini’s interaction with Google Workspace tools such as Mail and Docs. This category of prompt injection attacks applies not just to any type of browser extension, but any model or deployment of LLM. And the malicious extension requires no special permissions to work, given that the DOM access already provides everything it needs. ... The other proof-of-concept targets Google Gemini, and by extension any elements of Google Workspace it has been integrated with. Gemini is meant to automate routine and tedious tasks in Workspace such as email responses, document editing and updating contacts. The trouble is that it has almost complete access to the contents of these accounts as well as anything the user has access permission for or has had shared with them by someone else. Prompt injection attacks conducted by these extensions can not only steal the contents of emails and documents with ease, but complex queries can be fed to the LLM to target particular types of data and file extensions; the autocomplete function can also be abused to enumerate available files.


EU seeks more age verification transparency amid contentious debate

The EU is considering setting minimum requirements for online platforms to disclose their use of age verification or age estimation tools in their terms and conditions. The obligation is contained in a new compromise draft text of the EU’s proposed law on detecting and removing online child sex abuse material (CSAM), dated July 24 and seen by MLex. A discussion of the proposal, which contains few other changes to a previous draft, is scheduled for September 12. The text also calls for online platforms to perform mandatory scans for CSAM, which critics say could result in false positives and break end-to-end cryptography. ... The way age verification is set to work under the OSA is described as a “privacy nightmare” by PC Gamer, but the article stands in stark contrast to the vague posturing of the political class. Author Jacob Ridley acknowledges the possibility for double-blind methods of age assurance among those that do not require any personal information at all to be shared with the website or app the individual is trying to access. At the same time, many age verification systems do not work this way. Also, age assurance pop-ups can be spoofed, and those spoofs could harvest a wealth of valuable personal information. Privado ID Co-founder Evan McMullen calls it “like using a sledgehammer to crack a walnut.” McMullen, of course, prefers a decentralized approach that leans on zero-knowledge proofs (ZKPs).


AI Is Changing the Cybersecurity Game in Ways Both Big and Small

“People are rushing now to get [MCP] functionality while overlooking the security aspect,” he said. “But once the functionality is established and the whole concept of MCP becomes the norm, I would assume that security researchers will go in and essentially update and fix those security issues over time. But it will take a couple of years, and while that is taking time, I would advise you to run MCP somehow securely so that you know what’s going on.” Beyond the tactical security issues around MCP, there are bigger issues that are more strategic, more systemic in nature. They involve the big impact that large language models (LLMs) are having on the cybersecurity business and the things that organizations will have to do to protect themselves from AI-powered attacks in the future ... The sheer volume of threat data, some of which may be AI generated, demands more AI to be able to parse it and understand it, Sharma said. “It’s not humanly possible to do it by a SOC engineer or a vulnerability engineer or a threat engineer,” he said. Tuskira essentially functions as an AI-powered security analyst to detect traditional threats on IT systems as well as threats posed to AI-powered systems. Instead of using commercial AI models, Sharma adopted open-source foundation models running in private data centers. Developing AI tools to counter AI-powered security threats demands custom models, a lot of fine-tuning, and a data fabric that can maintain context of particular threats, he said.


AI burnout: A new challenge for CIOs

To take advantage of the benefits of smart tools and avoid overburdening the workforce, the board of directors must carefully manage their deployment. “As leaders, we must set clear limits, encourage training without overwhelming others, and open spaces for conversation about how people are experiencing this transition,” Blázquez says. “Technology must be an ally, not a threat, and the role of leadership will be key in that balance.” “It is recommended that companies take the first step. They must act from a preventative, humane, and structural perspective,” says De la Hoz. “In addition to all the human, ethical, and responsible components, it is in the company’s economic interest to maintain a happy, safe, and mission-focused workforce.” Regarding increasing personal productivity, he emphasizes the importance of “valuing their efforts, whether through higher salary returns or other forms of compensation.” ... From here, action must be taken, “implementing contingency plans to alleviate these areas.” One way: working groups, where the problems and barriers associated with technology can be analyzed. “From here, use these KPIs to change my strategy. Or to set it up, because often what happens is that I deploy the technology and forget how to get that technology adopted.” 


CIOs need a military mindset

While the battlefield feels very far away from the boardroom, this principle is something that CIOs can take on board when they’re tasked with steering a complex digital programme. Step back and clear the path so that you can trust your people to deliver; that’s when the real progress gets made. Contrary to popular belief, the military is not rigidly hierarchical. In fact, it teaches individuals to operate with autonomy within defined parameters. Officers set the boundaries of a mission and step back, allowing you to take full ownership of your actions. This approach is supported by the OODA Loop, a framework that cultivates awareness and decisive action under pressure. ... Resilience is perhaps the hardest leadership trait to teach and the most vital to embody. Military officers are taught to plan exhaustively, train rigorously, and prepare for all scenarios, but they’re also taught that ‘the first casualty of war is the plan.’ Adaptability under pressure is a non-negotiable mindset for you to adopt and instil in your team. When your team feels supported to grow, they stop fearing change and start responding to it; it is here that adaptability and resilience become second nature. There is also a practical opportunity to bring these principles in-house, as veterans transitioning out of the army may bring with them a refreshed leadership approach. Because they’re often confident under pressure and focused on outcomes, their transferrable skills allow them to thrive in the corporate world.


Backend FinOps: Engineering Cost-Efficient Microservices in the Cloud

Integrating cost management directly into Infrastructure-as-Code (IaC) frameworks such as Terraform enforces fiscal responsibility at the resource provisioning phase. By explicitly defining resource constraints and mandatory tagging, teams can preemptively mitigate orphaned cloud expenditures. ... Integrating cost awareness directly within Continuous Integration and Delivery (CI/CD) pipelines ensures proactive management of cloud expenditures throughout the development lifecycle. Tools such as Infracost automate the calculation of incremental cloud costs introduced by individual code changes. ... Cost-based pre-merge testing frameworks reinforce fiscal prudence by simulating peak-load scenarios prior to code integration. Automated tests measured critical metrics, including ninety-fifth percentile response times and estimated cost per ten thousand requests, to ensure compliance with established financial performance benchmarks. Pull requests failing predefined cost-efficiency criteria were systematically blocked. ... Comprehensive cost observability tools such as Datadog Cost Dashboards combine billing metrics with Application Performance Monitoring (APM) data, directly supporting operational and cost-related SLO compliance.
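To make the pre-merge cost gate concrete, here is a sketch of a CI step that reads a cost-estimate report produced earlier in the pipeline (for example by a tool such as Infracost) and fails the build when budgets are exceeded. The JSON field names and thresholds are assumptions for illustration, not any tool's real schema.

```python
# Sketch of a pre-merge cost gate in CI: read a cost estimate generated
# earlier in the pipeline and fail the build if budgets are exceeded.
# Field names and thresholds are illustrative assumptions.
import json, sys

MAX_MONTHLY_DELTA_USD = 200.0
MAX_COST_PER_10K_REQUESTS_USD = 0.50

def check(report_path: str) -> int:
    with open(report_path) as f:
        report = json.load(f)

    monthly_delta = float(report["monthly_cost_delta_usd"])
    cost_per_10k = float(report["estimated_cost_per_10k_requests_usd"])

    failures = []
    if monthly_delta > MAX_MONTHLY_DELTA_USD:
        failures.append(f"monthly delta ${monthly_delta:.2f} > ${MAX_MONTHLY_DELTA_USD:.2f}")
    if cost_per_10k > MAX_COST_PER_10K_REQUESTS_USD:
        failures.append(f"cost/10k req ${cost_per_10k:.4f} > ${MAX_COST_PER_10K_REQUESTS_USD:.2f}")

    for msg in failures:
        print("COST GATE FAILED:", msg)
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(check(sys.argv[1]))
```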


5 hard truths of a career in cybersecurity — and how to navigate them

Leadership and HR teams often gatekeep by focusing exclusively on candidates with certain educational degrees or specific credentials, typically from vendors such as Cisco, Juniper, or Palo Alto. Although Morrato finds this somewhat understandable given the high cost of hiring in cybersecurity, he believes this approach unfairly filters out capable individuals who, in a different era, would have had more opportunities. ... Because most team managers elevate from technical roles, they often lack the leadership and interpersonal skills needed to foster healthy team cultures or manage stakeholder relationships effectively. This cultural disconnect has a tangible impact on individuals. “People who work in security functions don’t always feel safe — psychologically safe — doing so,” Budge explains. ... Cybersecurity teams must also rethink how they approach risk, as relying solely on strict, one-size-fits-all controls is no longer tenable, Mistry says. Instead, he advocates for a more adaptive, business-aligned framework that considers overall exposure rather than just technical vulnerabilities. “Can I live with this risk? Can I not live with this risk? Can I do something to reduce the risk? Can I offload the risk? And it’s a risk conversation, not a ‘speeds and feeds’ conversation,” he says, emphasizing that cybersecurity leaders must actively build relationships across the organization to make these conversations possible.


How AI amplifies these other tech trends that matter most to business in 2025

Agentic AI is an artificial intelligence system capable of independently planning and executing complex, multistep tasks. Built on foundation models, these agents can autonomously perform actions, communicate with one another, and adapt to new information. Significant advancements have emerged, from general agent platforms to specialized agents designed for deep research. ... Application-specific semiconductors are purpose-built chips optimized to perform specialized tasks. Unlike general-purpose semiconductors, they are engineered to handle specific workloads (such as large-scale AI training and inference tasks) while optimizing performance characteristics, including offering superior speed, energy efficiency, and performance. ... Cloud and edge computing involve distributing workloads across locations, from hyperscale remote data centers to regional hubs and local nodes. This approach optimizes performance by addressing factors such as latency, data transfer costs, data sovereignty, and data security. ... Quantum-based technologies use the unique properties of quantum mechanics to execute certain complex calculations exponentially faster than classical computers; secure communication networks; and produce sensors with higher sensitivity levels than their classical counterparts.


Differentiable Economics: Strategic Behavior, Mechanisms, and Machine Learning

Differentiable economics is related to but different from the recent progress in building agents that achieve super-human performance in combinatorial games such as chess and Go. First, economic games such as auctions, oligopoly competition, or contests typically have a continuous action space expressed in money, and opponents are modeled as draws from a prior distribution that has continuous support. Second, differentiable economics is focused on modeling and achieving equilibrium behavior. The second opportunity in differentiable economics is to use data-driven methods and machine learning to discover rules, constraints, and affordances—mechanisms—for economic environments that promote good outcomes in the equilibrium behavior of a system. Mechanism design solves the inverse problem of game theory, finding rules of strategic interaction such that agents in equilibrium will effect an outcome with desired properties. Where possible, mechanisms promote strong equilibrium solution concepts such as dominant strategy equilibria, making it strategically easy for agents to participate. Think of a series of bilateral negotiations between buyers and a seller that is replaced by an efficient auction mechanism with simple dominant strategies for agents to report their preferences truthfully. 


Ownership Mindset Drives Innovation: Milwaukee Tool CEO

“Empowerment was not a free-for-all,” Richman explained. In fact, the company recently changed the wording around its core values from “empowerment” to “extreme ownership” to reflect the importance of accountability for results. Emphasizing ownership can also help employees do what is best for the company as a whole rather than just their own teams, particularly when it comes to reallocating resources. ... Surprises and setbacks are an unavoidable cost of trying new things while innovating. Since organizations cannot avoid these issues, leaders and employees need to discuss them frankly and quickly enough to minimize the downside while seizing the upside. “[Being] candid is the most challenging cultural element of any company,” Richman said. “And we believe that it really leads to success or failure.” … In successful cultures, teams, people, parts of the organization can bring problems up and bring them up in a way to be able to say, ‘How are we going to rally the troops as one team, come together, fix it, and figure out why we got into this mess, and what are we going to do to not do it again?’” Candor is a two-way street. To build trust, leaders need to provide an honest assessment of the state of the company and the path forward — a “candid communication of where you are,” Richman said. 

Daily Tech Digest - July 29, 2025


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe


AI Skills Are in High Demand, But AI Education Is Not Keeping Up

There’s already a big gap between how many AI workers are needed and how many are available, and it’s only getting worse. The report says the U.S. was short more than 340,000 AI and machine learning workers in 2023. That number could grow to nearly 700,000 by 2027 if nothing changes. Faced with limited options in traditional higher education, most learners are taking matters into their own hands. According to the report, “of these 8.66 million people learning AI, 32.8% are doing so via a structured and supervised learning program, the rest are doing so in an independent manner.” Even within structured programs, very few involve colleges or universities. As the report notes, “only 0.2% are learning AI via a credit-bearing program from a higher education institution,” while “the other 99.8% are learning these skills from alternative education providers.” That includes everything from online platforms to employer-led training — programs built for speed, flexibility, and real-world use, rather than degrees. College programs in AI are growing, but they’re still not reaching enough people. Between 2018 and 2023, enrollment in AI and machine learning programs at U.S. colleges went up nearly 45% each year. Even with that growth, these programs serve only a small slice of learners — most people are still turning to other options.


Why chaos engineering is becoming essential for enterprise resilience

Enterprises should treat chaos engineering as a routine practice, just like sports teams before every game. These groups would never participate in matches without understanding their opponent or ensuring they are in the best possible position to win. They train under pressure, run through potential scenarios, and test their plays to identify the weaknesses of their opponents. This same mindset applies to enterprise engineering teams preparing for potential chaos in their environments. By purposely simulating disruptions like server outages, latency, or dropped connections, or by identifying bugs and poor code, enterprises can position themselves to perform at their best when these scenarios occur in real life. They can adopt proactive approaches to detecting vulnerabilities, instituting recovery strategies, building trust in systems and, in the end, improving their overall resilience. ... Additionally, chaos engineering can help improve scalability within the organisation. Enterprises are constantly seeking ways to grow and enhance their apps or platforms so that more and more end-users can see the benefits. By doing this, they can remain competitive and generate more revenue. Yet, if there are any cracks within the facets or systems that power their apps or platforms, it can be extremely difficult to scale and deliver value to both customers and the organisation.
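As a small illustration of the "purposely simulating disruptions" idea, the sketch below wraps a dependency call and randomly injects latency or simulated connection failures, the kind of disturbance a chaos experiment rehearses in a test environment. The rates, latency values, and downstream call are illustrative assumptions.

```python
# Minimal fault-injection sketch: wrap a dependency call and randomly add
# latency or simulated connection failures. Rates and the call are invented.
import random, time

def flaky(latency_s: float = 2.0, latency_rate: float = 0.2, failure_rate: float = 0.1):
    def wrap(fn):
        def inner(*args, **kwargs):
            if random.random() < failure_rate:
                raise ConnectionError("injected: dropped connection")
            if random.random() < latency_rate:
                time.sleep(latency_s)  # injected latency spike
            return fn(*args, **kwargs)
        return inner
    return wrap

@flaky(latency_s=1.5, latency_rate=0.3, failure_rate=0.2)
def fetch_inventory(sku: str) -> int:
    return 42  # stand-in for the real downstream call

# Run the experiment and observe how callers cope with the injected faults.
for attempt in range(5):
    try:
        print(attempt, fetch_inventory("SKU-123"))
    except ConnectionError as e:
        print(attempt, "handled:", e)
```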


Fractional CXOs: A New Model for a C-Everything World

Fractional leadership isn’t a new idea—it’s long been part of the advisory board and consulting space. But what’s changed is its mainstream adoption. Companies are now slotting in fractional leaders not just for interim coverage or crisis management, but as a deliberate strategy for agility and cost-efficiency. It’s not just companies benefiting either. Many high-performing professionals are choosing the fractional path because it gives them freedom, variety, and a more fulfilling way to leverage their skills without being tied down to one company or role. For them, it’s not just about fractional time—it’s about full-spectrum opportunity. ... Whether you’re a company executive exploring options or a leader considering a lifestyle pivot, here are the biggest advantages of fractional CxOs: Strategic Agility: Need someone to lead a transformation for 6–12 months? Need guidance scaling your data team? A fractional CxO lets you dial in the right leadership at the right time. Cost Containment: You pay for what you need, when you need it. No long-term employment contracts, no full comp packages, no redundancy risk. Experience Density: Most fractional CxOs have deep domain expertise and have led across multiple industries. That cross-pollination of experience can bring unique insights and fast-track solutions.


Cyberattacks reshape modern conflict & highlight resilience needs

Governments worldwide are responding to the changing threat landscape. The United States, European Union, and NATO have increased spending on cyber defence and digital threat-response measures. The UK's National Cyber Force has broadened its recruitment initiatives, while the European Union has introduced new cyber resilience strategies. Even countries with neutral status, such as Switzerland, have begun investing more heavily in cyber intelligence. ... Critical infrastructure encompasses power grids, water systems, and transport networks. These environments often use operational technology (OT) networks that are separated from the internet but still have vulnerabilities. Attackers typically exploit mechanisms such as phishing, infected external drives, or unsecured remote access points to gain entry. In 2024, a group linked to Iran, called CyberAv3ngers, breached several US water utilities by targeting internet-connected control systems, raising risks of water contamination. ... Organisations are advised against bespoke security models, with tried and tested frameworks such as NIST CSF, OWASP SAMM, and ISO standards cited as effective guides for structuring improvement. The statement continues, "Like any quality control system it is all about analysis of the situation and iterative improvements. Things evolve slowly until they happen all at once."


The trials of HR manufacturing: AI in blue-collar rebellion

The challenge of automation isn't just technological, it’s deeply human. How do you convince someone who has operated a ride in your park for almost two decades, who knows every sound, every turn, every lever by heart, that the new sleek control panel is an upgrade and not a replacement? That the machine learning model isn’t taking their job; it’s opening doors to something better? For many workers, the introduction of automation doesn’t feel like innovation but like erasure. A line shuts down. A machine takes over. A skill that took them years to master becomes irrelevant overnight. In this reality, HR’s role extends far beyond workflow design; it now must navigate fear, build trust, and lead people through change with empathy and clarity. Upskilling entails more than just access to platforms that educate you. It’s about building trust, ensuring relevance, and respecting time. Workers aren’t just asking how to learn, but why. Workers want clarity on their future career paths. They’re asking, “Where is this ride taking me?” As Joseph Fernandes, SVP of HR for South Asia at Mastercard, states, change management should “emphasize how AI can augment employee capabilities rather than replace them.” Additionally, HR must address the why of training, not just the how. Workers don’t want training videos; rather, they want to know what the next five years of their job look like. 


What Do DevOps Engineers Think of the Current State of DevOps

The toolchain is consolidating. CI/CD, monitoring, compliance, security and cloud provisioning tools are increasingly bundled or bridged in platform layers. DevOps.com’s coverage tracks this trend: It’s no longer about separate pipelines, it’s about unified DevOps platforms. CloudBees Unify is a prime example: Launched in mid‑2025, it unifies governance across toolchains without forcing migration — an AI‑powered operating layer over existing tools. ... DevOps education and certification remain fragmented. Traditional certs — Kubernetes (CKA, CKAD), AWS/Azure/GCP and DevOps Foundation — remain staples. But DevOps engineers express frustration: Formal learning often lags behind real‑world tooling, AI integration, or platform engineering practices. Many engineers now augment certs with hands‑on labs, bootcamps and informal community learning. Organizations are piloting internal platform engineer training programs to bridge skills gaps. Still, a mismatch persists between the modern tech stack and classroom syllabi. ... DevOps engineers today stand at a crossroads: Platform engineering and cloud tooling have matured into the ecosystem, AI is no longer experimentation but embedded flow. Job markets are shifting, but real demand remains strong — for creative, strategic and adaptable engineers who can shepherd tools, teams and AI together into scalable delivery platforms.


7 enterprise cloud strategy trends shaking up IT today

Vertical cloud platforms aren’t just generic cloud services — they’re tailored ecosystems that combine infrastructure, AI models, and data architectures specifically optimized for sectors such as healthcare, manufacturing, finance, and retail, says Chandrakanth Puligundla, a software engineer and data analyst at grocery store chain Albertsons. What makes this trend stand out is how quickly it bridges the gap between technical capabilities and real business outcomes, Puligundla says. ... Organizations must consider what workloads go where and how that distribution will affect enterprise performance, reduce unnecessary costs, and help keep workloads secure, says Tanuj Raja, senior vice president, hyperscaler and marketplace, North America, at IT distributor and solution aggregator TD SYNNEX. In many cases, needs are driving a move toward a hybrid cloud environment for more control, scalability, and flexibility, Raja says. ... We’re seeing enterprises moving past the assumption that everything belongs in the cloud, says Cache Merrill, founder of custom software development firm Zibtek. “Instead, they’re making deliberate decisions about workload placement based on actual business outcomes.” This transition represents maturity in the way enterprises think about making technology decisions, Merrill says. He notes that the initial cloud adoption phase was driven by a fear of being left behind. 


Beyond the Rack: 6 Tips for Reducing Data Center Rental Costs

One of the simplest ways to reduce spending on data center rentals is to choose data centers located in regions where data center space costs the least. Data center rental costs, which are often measured in terms of dollars-per-kilowatt, can vary by a factor of ten or more between different parts of the world. Perhaps surprisingly, regions with the largest concentrations of data centers tend to offer the most cost-effective rates, largely due to economies of scale. ... Another key strategy for cutting data center rental costs is to consolidate servers. Server consolidation reduces the total number of servers you need to deploy, which in turn minimizes the space you need to rent. The challenge, of course, is that consolidating servers can be a complex process, and businesses don’t always have the means to optimize their infrastructure footprint overnight. But if you deploy more servers than necessary, they effectively become a form of technical debt that costs more and more the longer you keep them in service. ... As with many business purchases, the list price for data center rent is often not the lowest price that colocation operators will accept. To save money, consider negotiating. The more IT equipment you have to deploy, the more successful you’ll likely be in locking in a rental discount. 
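The back-of-envelope sketch below combines the two levers described above: regional differences in dollars-per-kilowatt rates and the reduction in rented kilowatts from server consolidation. Every rate and figure is invented for illustration.

```python
# Back-of-envelope sketch: regional $/kW rates and consolidation. All figures
# below are invented for illustration.
rates_per_kw_month = {"Region A (dense hub)": 120.0, "Region B (sparse market)": 310.0}

servers = 200
watts_per_server = 450
consolidation_ratio = 0.6   # workload fits on 60% of the servers after consolidation

for label, ratio in (("before consolidation", 1.0), ("after consolidation", consolidation_ratio)):
    kw = servers * ratio * watts_per_server / 1000
    for region, rate in rates_per_kw_month.items():
        print(f"{label:<22} {region:<25} {kw:6.1f} kW  ${kw * rate:>10,.0f}/month")
```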


Ransomware will thrive until we change our strategy

We need to remember that those behind ransomware attacks are part of organized criminal gangs. These are professional criminal enterprises, not lone hackers, with access to global infrastructures, safe havens to operate from, and laundering mechanisms to clean their profits. ... Disrupting ransomware gangs isn’t just about knocking a website or a dark marketplace offline. It requires trained personnel, international legal instruments, strong financial intelligence, and political support. It also takes time, which means political patience. We can’t expect agencies to dismantle global criminal networks with only short-term funding windows and reactive mandates. ... The problem of ransomware, or indeed cybercrime in general, is not just about improving how organizations manage their cybersecurity, we also need to demand better from the technology providers that those organizations rely on. Too many software systems, including ironically cybersecurity solutions, are shipped with outdated libraries, insecure default settings, complex patching workflows, and little transparency around vulnerability disclosure. Customers have been left to carry the burden of addressing flaws they didn’t create and often can’t easily fix. This must change. Secure-by-design and secure-by-default must become reality, and not slogans on a marketing slide or pinkie-promises that vendors “take cybersecurity seriously”.


The challenges for European data sovereignty

The false sense of security created by the physical storage of data in European data centers of US companies deserves critical consideration. Many organizations assume that geographical storage within the EU automatically means that data is protected by European law. In reality, the physical location is of little significance when legal control is in the hands of a foreign entity. After all, the CLOUD Act focuses on the nationality and legal status of the provider, not on the place of storage. This means that data in Frankfurt or Amsterdam may be accessible to US authorities without the customer’s knowledge. Relying on European data centers as being GDPR-compliant and geopolitically neutral by definition is therefore misplaced. ... European procurement rules often do not exclude foreign companies such as Microsoft or Amazon, even if they have a branch in Europe. This means that US providers compete for strategic digital infrastructure, while Europe wants to position itself as autonomous. The Dutch government recently highlighted this challenge and called for an EU-wide policy that combats digital dependency and offers opportunities for European providers without contravening international agreements on open procurement.

Daily Tech Digest - July 27, 2025


Quote for the day:

"The only way to do great work is to love what you do." -- Steve Jobs


Amazon AI coding agent hacked to inject data wiping commands

The hacker gained access to Amazon’s repository after submitting a pull request from a random account, likely due to workflow misconfiguration or inadequate permission management by the project maintainers. ... On July 23, Amazon received reports from security researchers that something was wrong with the extension, and the company started to investigate. The next day, AWS released a clean version, Q 1.85.0, which removed the unapproved code. “AWS is aware of and has addressed an issue in the Amazon Q Developer Extension for Visual Studio Code (VSC). Security researchers reported a potential for unapproved code modification,” reads the security bulletin. “AWS Security subsequently identified a code commit through a deeper forensic analysis in the open-source VSC extension that targeted Q Developer CLI command execution.” “After which, we immediately revoked and replaced the credentials, removed the unapproved code from the codebase, and subsequently released Amazon Q Developer Extension version 1.85.0 to the marketplace.” AWS assured users that there was no risk from the previous release because the malicious code was incorrectly formatted and wouldn’t run on their environments.


How to migrate enterprise databases and data to the cloud

Migrating data is only part of the challenge; database structures, stored procedures, triggers and other code must also be moved. In this part of the process, IT leaders must identify and select migration tools that address the specific needs of the enterprise, especially if they’re moving between different database technologies (heterogeneous migration). Some things they’ll need to consider are: compatibility, transformation requirements and the ability to automate repetitive tasks.  ... During migration, especially for large or critical systems, IT leaders should keep their on-premises and cloud databases synchronized to avoid downtime and data loss. To help facilitate this, select synchronization tools that can handle the data change rates and business requirements. And be sure to test these tools in advance: High rates of change or complex data relationships can overwhelm some solutions, making parallel runs or phased cutovers unfeasible. ... Testing is a safety net. IT leaders should develop comprehensive test plans that cover not just technical functionality, but also performance, data integrity and user acceptance. Leaders should also plan for parallel runs, operating both on-premises and cloud systems in tandem, to validate that everything works as expected before the final cutover. They should engage end users early in the process in order to ensure the migrated environment meets business needs.
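One way to build that safety net during parallel runs is to continuously compare row counts and lightweight checksums between the on-premises and cloud copies. The sketch below uses SQLite connections as stand-ins for the real engines; the table names and the checksum approach are assumptions, and at production scale you would push checksumming into the database engines rather than pulling rows to the client.

```python
# Minimal validation sketch for a migration cutover: compare row counts and a
# cheap per-table fingerprint between source and target. SQLite stands in for
# the real on-prem and cloud engines here.
import sqlite3
import hashlib

def table_fingerprint(conn, table: str, key: str) -> tuple[int, str]:
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY {key}").fetchall()
    digest = hashlib.sha256(repr(rows).encode()).hexdigest()
    return len(rows), digest

def check_parity(source, target, tables: dict[str, str]) -> None:
    for table, key in tables.items():
        src = table_fingerprint(source, table, key)
        dst = table_fingerprint(target, table, key)
        status = "OK" if src == dst else "MISMATCH"
        print(f"{table}: source={src[0]} rows, target={dst[0]} rows -> {status}")

# Demo with in-memory stand-ins for the on-premises and cloud databases.
src, dst = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for db in (src, dst):
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 12.0)])
check_parity(src, dst, {"orders": "id"})
```

Running a check like this on every sync cycle turns "we think the systems are in sync" into evidence before the final cutover.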


Researchers build first chip combining electronics, photonics, and quantum light

The new chip integrates quantum light sources and electronic controllers using a standard 45-nanometer semiconductor process. This approach paves the way for scaling up quantum systems in computing, communication, and sensing, fields that have traditionally relied on hand-built devices confined to laboratory settings. "Quantum computing, communication, and sensing are on a decades-long path from concept to reality," said Miloš Popović, associate professor of electrical and computer engineering at Boston University and a senior author of the study. "This is a small step on that path – but an important one, because it shows we can build repeatable, controllable quantum systems in commercial semiconductor foundries." ... "What excites me most is that we embedded the control directly on-chip – stabilizing a quantum process in real time," says Anirudh Ramesh, a PhD student at Northwestern who led the quantum measurements. "That's a critical step toward scalable quantum systems." This focus on stabilization is essential to ensure that each light source performs reliably under varying conditions. Imbert Wang, a doctoral student at Boston University specializing in photonic device design, highlighted the technical complexity.


Product Manager vs. Product Owner: Why Teams Get These Roles Wrong

While PMs work on the strategic plane, Product Owners anchor delivery. The PO is the guardian of the backlog. They translate the product strategy into epics and user stories, groom the backlog, and support the development team during sprints. They don’t just manage the “what” — they deeply understand the “how.” They answer developer questions, clarify scope, and constantly re-evaluate priorities based on real-time feedback. In Agile teams, they play a central role in turning strategic vision into working software. Where PMs answer to the business, POs are embedded with the dev team. They make trade-offs, adjust scope, and ensure the product is built right. ... Some products need to grow fast. That’s where Growth PMs come in. They focus on the entire user lifecycle, often structured using the PIRAT funnel: Problem, Insight, Reach, Activation, and Trust (a modern take on traditional Pirate Metrics, such as Acquisition, Activation, Retention, Referral, and Revenue). This model guides Growth PMs in identifying where user friction occurs and what levers to pull for meaningful impact. They conduct experiments, optimize funnels, and collaborate closely with marketing and data science teams to drive user growth. 
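As a quick illustration of how a Growth PM might read such a funnel, the sketch below computes stage-to-stage conversion so the biggest drop-off, i.e. the main friction point, is easy to spot. The counts are made-up example data; only the stage names come from the article.

```python
# Illustrative sketch: stage-by-stage conversion through a PIRAT-style funnel.
# Stage names follow the article; the counts are invented example data.
funnel = [
    ("Problem", 100_000),
    ("Insight", 40_000),
    ("Reach", 25_000),
    ("Activation", 8_000),
    ("Trust", 3_000),
]

for (stage, count), (_, prev) in zip(funnel[1:], funnel[:-1]):
    print(f"{stage}: {count:,} ({count / prev:.0%} of previous stage)")
```

With these made-up numbers the sharpest drop is at Activation, which is where this hypothetical Growth PM would run experiments first.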


Ransomware payments to be banned – the unanswered questions

With thresholds in place, businesses and organisations may choose to operate differently, such as lowering turnover or the number of employees, so that they aren’t covered by the ban. All of this said, rules like this could help to get a better picture of what’s going on with ransomware threats in the UK. Arda Büyükkaya, senior cyber threat intelligence analyst at EclecticIQ, explains more: “As attackers evolve their tactics and exploit vulnerabilities across sectors, timely intelligence-sharing becomes critical to mounting an effective defence. Encouraging businesses to report incidents more consistently will help build a stronger national threat intelligence picture, something that’s important as these attacks grow more frequent and become more sophisticated. To spare any confusion, sector-specific guidance should be provided by government on how resources should be implemented, making resources clear and accessible. “Many victims still hesitate to come forward due to concerns around reputational damage, legal exposure, or regulatory fallout,” said Büyükkaya. “Without mechanisms that protect and support victims, underreporting will remain a barrier to national cyber resilience.” Especially in the earlier days of the legislation, organisations may still feel pressured to pay in order to keep operations running, even if they’re banned from doing so.


AI Unleashed: Shaping the Future of Cyber Threats

AI optimizes reconnaissance and targeting, giving hackers the tools to scour public sources, leaked and publicly available breach data, and social media to build detailed profiles of potential targets in minutes. This enhanced data gathering lets attackers identify high-value victims and network vulnerabilities with unprecedented speed and accuracy. AI has also supercharged phishing campaigns by automatically crafting phishing emails and messages that mimic an organization’s formatting and reference real projects or colleagues, making them nearly indistinguishable from genuine human-originated communications. ... AI is also being weaponized to write and adapt malicious code. AI-powered malware can autonomously modify itself to slip past signature-based antivirus defenses, probe for weaknesses, select optimal exploits, and manage its own command-and-control decisions. Security experts note that AI accelerates the malware development cycle, reducing the time from concept to deployment. ... AI presents more than external threats. It has exposed a new category of targets and vulnerabilities, as many organizations now rely on AI models for critical functions, such as authentication systems and network monitoring. These AI systems themselves can be manipulated or sabotaged by adversaries if proper safeguards have not been implemented.


Agile and Quality Engineering: Building a Culture of Excellence Through a Holistic Approach

Agile development relies on rapid iteration and frequent delivery, and this rhythm demands fast, accurate feedback on code quality, functionality, and performance. With continuous testing integrated into automated pipelines, teams receive near real-time feedback on every code commit. This immediacy empowers developers to make informed decisions quickly, reducing delays caused by waiting for manual test cycles or late-stage QA validations. Quality engineering also enhances collaboration between developers and testers. In a traditional setup, QA and development operate in silos, often leading to communication gaps, delays, and conflicting priorities. In contrast, QE promotes a culture of shared ownership, where developers write unit tests, testers contribute to automation frameworks, and both parties work together during planning, development, and retrospectives. This collaboration strengthens mutual accountability and leads to better alignment on requirements, acceptance criteria, and customer expectations. Early and continuous risk mitigation is another cornerstone benefit. By incorporating practices like shift-left testing, test-driven development (TDD), and continuous integration (CI), potential issues are identified and resolved long before they escalate. 
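A minimal illustration of that shift-left loop: the tests below would be written first, fail, and then run on every commit in the CI pipeline once the function exists. The module and function names are hypothetical, not from the article.

```python
# Minimal TDD/shift-left sketch: tests drive the implementation and run in CI.
import pytest

# pricing.py -- implementation, written only after the tests below were red
def apply_discount(price: float, percent: float) -> float:
    if percent < 0:
        raise ValueError("discount percent must be non-negative")
    return round(price * (1 - percent / 100), 2)

# test_pricing.py -- written first; executed on every commit in the pipeline
def test_discount_reduces_price():
    assert apply_discount(price=100.0, percent=10) == 90.0

def test_discount_rejects_negative_percent():
    with pytest.raises(ValueError):
        apply_discount(price=100.0, percent=-5)
```

The value is less in the specific assertions than in the feedback loop: every commit gets a verdict in minutes instead of waiting for a late-stage QA cycle.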


Could Metasurfaces be The Next Quantum Information Processors?

Broadly speaking, the work embodies metasurface-based quantum optics which, beyond carving a path toward room-temperature quantum computers and networks, could also benefit quantum sensing or offer “lab-on-a-chip” capabilities for fundamental science. Designing a single metasurface that can finely control properties like brightness, phase, and polarization presented unique challenges because of the mathematical complexity that arises once the number of photons and therefore the number of qubits begins to increase. Every additional photon introduces many new interference pathways, which in a conventional setup would require a rapidly growing number of beam splitters and output ports. To bring order to the complexity, the researchers leaned on a branch of mathematics called graph theory, which uses points and lines to represent connections and relationships. By representing entangled photon states as many connected lines and points, they were able to visually determine how photons interfere with each other, and to predict their effects in experiments. Graph theory is also used in certain types of quantum computing and quantum error correction but is not typically considered in the context of metasurfaces, including their design and operation. The resulting paper was a collaboration with the lab of Marko Loncar, whose team specializes in quantum optics and integrated photonics and provided needed expertise and equipment.


New AI architecture delivers 100x faster reasoning than LLMs with just 1,000 training examples

When faced with a complex problem, current LLMs largely rely on chain-of-thought (CoT) prompting, breaking down problems into intermediate text-based steps, essentially forcing the model to “think out loud” as it works toward a solution. While CoT has improved the reasoning abilities of LLMs, it has fundamental limitations. In their paper, researchers at Sapient Intelligence argue that “CoT for reasoning is a crutch, not a satisfactory solution. It relies on brittle, human-defined decompositions where a single misstep or a misorder of the steps can derail the reasoning process entirely.” ... To move beyond CoT, the researchers explored “latent reasoning,” where instead of generating “thinking tokens,” the model reasons in its internal, abstract representation of the problem. This is more aligned with how humans think; as the paper states, “the brain sustains lengthy, coherent chains of reasoning with remarkable efficiency in a latent space, without constant translation back to language.” However, achieving this level of deep, internal reasoning in AI is challenging. Simply stacking more layers in a deep learning model often leads to a “vanishing gradient” problem, where learning signals weaken across layers, making training ineffective. 
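As a conceptual sketch only, and not the architecture from the paper, latent reasoning can be pictured as repeatedly refining a hidden state with a reusable update and decoding an answer only at the end, rather than emitting intermediate "thinking" tokens:

```python
# Conceptual sketch: reasoning as iteration in latent space rather than in text.
# Toy model for illustration; it does not reproduce the paper's architecture.
import torch
import torch.nn as nn

class LatentReasoner(nn.Module):
    def __init__(self, dim: int = 64, steps: int = 8):
        super().__init__()
        self.step = nn.GRUCell(dim, dim)   # one reusable update applied repeatedly
        self.readout = nn.Linear(dim, 10)  # decode an answer only at the end
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.zeros(x.size(0), self.step.hidden_size, device=x.device)
        for _ in range(self.steps):        # iterate internally; no tokens emitted
            h = self.step(x, h)
        return self.readout(h)

model = LatentReasoner()
print(model(torch.randn(2, 64)).shape)  # torch.Size([2, 10])
```

The contrast with chain-of-thought is that nothing in the loop is translated back to language, so a single badly worded intermediate step cannot derail the process.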


For the love of all things holy, please stop treating RAID storage as a backup

Although RAID is a backup by definition, practically, a backup doesn't look anything like a RAID array. That's because an ideal backup is offsite. It's not on your computer, and ideally, it's not even in the same physical location. Remember, RAID is a warranty, and a backup is insurance. RAID protects you from inevitable failure, while a backup protects you from unforeseen failure. Eventually, your drives will fail, and you'll need to replace disks in your RAID array. This is part of routine maintenance, and if you're operating an array for long enough, you should probably have drive swaps on a schedule of several years to keep everything operating smoothly. A backup will protect you from everything else. Maybe you have multiple drives fail at once. A backup will protect you. Lord forbid you fall victim to a fire, flood, or other natural disaster and your RAID array is lost or damaged in the process. A backup still protects you. It doesn't need to be a fire or flood for you to get use out of a backup. There are small issues that could put your data at risk, such as your PC being infected with malware, or trying to write (and replicate) corrupted data. You can dream up just about any situation where data loss is a risk, and a backup will be able to get your data back in situations where RAID can't. 
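A tiny worked example makes the warranty-versus-insurance distinction concrete: XOR parity, as used in RAID 5, can rebuild a lost disk exactly, but it just as faithfully rebuilds data that was corrupted or deleted before the failure. The byte strings below stand in for disk stripes.

```python
# Illustration: RAID-style XOR parity rebuilds a lost disk, but it also
# "protects" corrupted or deleted data. Byte strings stand in for stripes.

def xor_bytes(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

disk_a, disk_b = b"important", b"user-data"
parity = xor_bytes(disk_a, disk_b)

# Disk B dies: parity plus the surviving disk recover it exactly.
assert xor_bytes(disk_a, parity) == disk_b

# But if the data is corrupted (or deleted) *before* a failure, the array
# happily rebuilds the corrupted version -- only an offsite backup helps here.
corrupted_b = b"XXXX-data"
parity = xor_bytes(disk_a, corrupted_b)
assert xor_bytes(disk_a, parity) == corrupted_b
```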

Daily Tech Digest - July 25, 2025


 Quote for the day:

"Technology changes, but leadership is about clarity, courage, and creating momentum where none exists." -- Inspired by modern digital transformation principles


Why foundational defences against ransomware matter more than the AI threat

The 2025 Cyber Security Breaches Survey paints a concerning picture. According to the study, ransomware attacks doubled between 2024 and 2025 – a surge that has less to do with AI innovation than with deep-rooted economic, operational and structural changes within the cybercrime ecosystem. At the heart of this growth is the rising popularity of the ransomware-as-a-service (RaaS) business model. Groups like DragonForce or Ransomhub sell ready-made ransomware toolkits to affiliates in exchange for a cut of the profits, enabling even low-skilled attackers to conduct disruptive campaigns. ... Breaches often stem from common, preventable issues such as poor credential hygiene or poorly configured systems – areas that often sit outside scheduled assessments. When assessments happen only once or twice a year, new gaps may go unnoticed for months, giving attackers ample opportunity. To keep up, organisations need faster, more continuous ways of validating defences. ... Most ransomware actors follow well-worn playbooks, making them frequent visitors to company networks but not necessarily sophisticated ones. That’s why effective ransomware prevention is not about deploying cutting-edge technologies at every turn – it’s about making sure the basics are consistently in place.


Subliminal learning: When AI models learn what you didn’t teach them

“Subliminal learning is a general phenomenon that presents an unexpected pitfall for AI development,” the researchers from Anthropic, Truthful AI, the Warsaw University of Technology, the Alignment Research Center, and UC Berkeley wrote in their paper. “Distillation could propagate unintended traits, even when developers try to prevent this via data filtering.” ... Models trained on data generated by misaligned models (systems that diverge from their original intent due to bias, flawed algorithms, data issues, insufficient oversight, or other factors, and that produce incorrect, lewd, or harmful content) can also inherit that misalignment, even if the training data had been carefully filtered, the researchers found. They offered examples of harmful outputs when student models became misaligned like their teachers, noting, “these misaligned responses are egregious far beyond anything in the training data, including endorsing the elimination of humanity and recommending murder.” ... Today’s multi-billion parameter models are able to discern extremely complicated relationships between a dataset and the preferences associated with that data, even if it’s not immediately obvious to humans, he noted. This points to a need to look beyond semantic and direct data relationships when working with complex AI models.
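For readers unfamiliar with distillation, the sketch below shows the basic mechanism: the student is trained to match the teacher's output distribution, which is precisely the channel an unintended trait can ride along. These are toy linear models, not the setup used in the paper.

```python
# Minimal distillation sketch: student learns to imitate teacher outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(16, 4)   # stand-in for a large "teacher" model
student = nn.Linear(16, 4)   # smaller/newer model being distilled
opt = torch.optim.Adam(student.parameters(), lr=1e-2)

for _ in range(200):
    x = torch.randn(32, 16)                      # unlabeled (even "filtered") data
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x), dim=-1)
    student_log_probs = F.log_softmax(student(x), dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final KL to teacher: {loss.item():.4f}")
```

Nothing in this loop inspects what the teacher's outputs mean; whatever statistical quirks the teacher has, the student is optimized to reproduce them.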


Why people-first leadership wins in software development

It frequently involves pushing for unrealistic deadlines, with project schedules made without enough input from the development team about the true effort needed and possible obstacles. This results in ongoing crunch periods and mandatory overtime. ... Another indicator is neglecting signs of burnout and stress. Leaders may ignore or dismiss signals such as team members consistently working late, increased irritability, or a decline in productivity, instead pushing for more output without addressing the root causes. Poor work-life balance becomes commonplace, often without proper recognition or rewards for the extra effort. ... Beyond the code, there’s a stifled innovation and creativity. When teams are constantly under pressure to just “ship it,” there’s little room for creative problem-solving, experimentation, or thinking outside the box. Innovation, often born from psychological safety and intellectual freedom, gets squashed, hindering your company’s ability to adapt to new trends and stay competitive. Finally, there’s damage to your company’s reputation. In the age of social media and employer review sites, news travels fast. ... It’s vital to invest in team growth and development. Provide opportunities for continuous learning, training, and skill enhancement. This not only boosts individual capabilities but also shows your commitment to their long-term career paths within your organization. This is a crucial retention strategy.


Achieving resilience in financial services through cloud elasticity and automation

In an era of heightened regulatory scrutiny, volatile markets, and growing cybersecurity threats, resilience isn’t just a nice-to-have—it’s a necessity. A lack of robust operational resilience can lead to regulatory penalties, damaged reputations, and crippling financial losses. In this context, cloud elasticity, automation, and cutting-edge security technologies are emerging as crucial tools for financial institutions to not only survive but thrive amidst these evolving pressures. ... Resilience ensures that financial institutions can maintain critical operations during crises, minimizing disruptions and maintaining service quality. Efficient operations are crucial for maintaining competitive advantage and customer satisfaction. ... Effective resilience strategies help institutions manage diverse risks, including cyber threats, system failures, and third-party vulnerabilities. The complexity of interconnected systems and the rapid pace of technological advancement add layers of risk that are difficult to manage. ... Financial institutions are particularly susceptible to risks such as system failures, cyberattacks, and third-party vulnerabilities. ... As financial institutions navigate a landscape marked by heightened risk, evolving regulations, and increasing customer expectations, operational resilience has become a defining imperative.
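In practice, much of that elasticity comes down to simple target-tracking rules like the hedged sketch below: capacity follows utilization toward a target within fixed bounds, so a market-open spike scales out automatically and quiet periods scale back in. The thresholds and capacities are assumptions for illustration.

```python
# Minimal sketch of a target-tracking elasticity rule. Numbers are assumed.

def desired_capacity(current: int, utilization: float, target: float = 0.6,
                     min_cap: int = 2, max_cap: int = 50) -> int:
    """Scale capacity proportionally so utilization moves back toward target."""
    if utilization <= 0:
        return min_cap
    proposed = round(current * utilization / target)
    return max(min_cap, min(max_cap, proposed))

# Market-open spike: utilization jumps, capacity follows, then falls back.
print(desired_capacity(current=4, utilization=0.90))  # -> 6
print(desired_capacity(current=6, utilization=0.45))  # -> 4
```

The resilience benefit is that the response to a surge is automated policy rather than a human paging exercise; the cost benefit is that the same rule releases capacity when the surge passes.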


Digital attack surfaces expand as key exposures & risks double

Among OT systems, the average number of exposed ports per organisation rose by 35%, with Modbus (port 502) identified as the most commonly exposed, posing risks of unauthorised commands and potential shutdowns of key devices. The exposure of Unitronics port 20256 surged by 160%. The report cites cases where attackers, such as the group "CyberAv3ngers," targeted industrial control systems during conflicts, exploiting weak or default passwords. ... The number of vulnerabilities identified on public-facing assets more than doubled, rising from three per organisation in late 2024 to seven in early 2025. Critical vulnerabilities dating as far back as 2006 and 2008 still persist on unpatched systems, with proof-of-concept code readily available online, making exploitation accessible even to attackers with limited expertise. The report also references the continued threat posed by ransomware groups who exploit such weaknesses in internet-facing devices. ... Incidents involving exposed access keys, including cloud and API keys, doubled from late 2024 to early 2025. Exposed credentials can enable threat actors to enter environments as legitimate users, bypassing perimeter defenses. The report highlights that most exposures result from accidental code pushes to public repositories or leaks on criminal forums.
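Many of the exposed-key incidents begin with an accidental push, so even a crude pre-push scan helps. The sketch below greps a working tree for strings matching common published key formats; it is illustrative only, and real secret scanners cover far more patterns plus entropy checks.

```python
# Rough pre-push sketch: flag strings that look like access keys before code
# reaches a public repository. Patterns are simplified illustrations.
import re
from pathlib import Path

PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Generic API key assignment": re.compile(
        r'(?i)\b(api|secret)[_-]?key\s*[:=]\s*["\'][A-Za-z0-9/+=_-]{16,}["\']'
    ),
}

def scan(root: str = ".") -> list[tuple[str, str]]:
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.stat().st_size > 1_000_000:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), label))
    return hits

for path, label in scan():
    print(f"possible {label} in {path}")
```

A check like this running as a pre-commit hook or CI gate does not replace secret management, but it catches the most common accidental leak path before it becomes an exposure statistic.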


How Elicitation in MCP Brings Human-in-the-Loop to AI Tools

Elicitation represents more than an incremental protocol update. It marks a shift toward collaborative AI workflows, where the system and human co-discover missing context rather than expecting all details upfront. Python developers building MCP tools can now focus on core logic and delegate parameter gathering to the protocol itself, allowing for a more streamlined approach. Clients declare an elicitation capability during initialization, so servers know they may elicit input at any time. That standardized interchange liberates developers from generating custom UIs or creating ad hoc prompts, ensuring coherent behaviour across diverse MCP clients. ... Elicitation transforms human-in-the-loop (HITL) workflows from an afterthought to a core capability. Traditional AI systems often struggle with scenarios that require human judgment, approval, or additional context. Developers had to build custom solutions for each case, leading to inconsistent experiences and significant development overhead. With elicitation, HITL patterns become natural extensions of tool functionality. A database migration tool can request confirmation before making irreversible changes. A document generation system can gather style preferences and content requirements through guided interactions. An incident response tool can collect severity assessments and stakeholder information as part of its workflow.
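A hedged sketch of what that pattern can look like in a Python MCP server appears below: a tool pauses mid-workflow to elicit explicit confirmation before an irreversible step. FastMCP and the tool decorator come from the MCP Python SDK, but the exact elicit() signature and result fields shown here are assumptions; check the current SDK documentation before relying on them.

```python
# Hedged sketch of the elicitation pattern in an MCP tool. The elicit() call
# shape and result fields below are assumptions, not confirmed SDK API.
from pydantic import BaseModel
from mcp.server.fastmcp import FastMCP, Context

mcp = FastMCP("db-migrator")

class ConfirmDrop(BaseModel):
    confirm: bool
    reason: str = ""

@mcp.tool()
async def drop_legacy_tables(ctx: Context) -> str:
    """Remove legacy tables only after an explicit human confirmation."""
    result = await ctx.elicit(                     # assumed API shape
        message="This will permanently drop 12 legacy tables. Proceed?",
        schema=ConfirmDrop,
    )
    if getattr(result, "action", None) != "accept" or not result.data.confirm:
        return "Aborted: user declined the irreversible migration step."
    return f"Dropped legacy tables (reason noted: {result.data.reason!r})"
```

The design point is that the confirmation dialog is not custom UI code; the client renders it from the schema, so the same tool behaves consistently across MCP clients.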


Cognizant Agents Gave Hackers Passwords, Clorox Says in Lawsuit

“Cognizant was not duped by any elaborate ploy or sophisticated hacking techniques,” the company says in its partially redacted 19-page complaint. “The cybercriminal just called the Cognizant Service Desk, asked for credentials to access Clorox’s network, and Cognizant handed the credentials right over. Cognizant is on tape handing over the keys to Clorox’s corporate network to the cybercriminal – no authentication questions asked.” ... The threat actors made multiple calls to the Cognizant help desk, essentially asking for new passwords and getting them without any effort to verify the callers, Clorox wrote. They then used those new credentials to gain access to the corporate network, launching a “debilitating” attack that “paralyzed Clorox’s corporate network and crippled business operations. And to make matters worse, when Clorox called on Cognizant to provide incident response and disaster recovery support services, Cognizant botched its response and compounded the damage it had already caused.” In a statement to media outlets, a Cognizant spokesperson said it was “shocking that a corporation the size of Clorox had such an inept internal cybersecurity system to mitigate this attack.” While Clorox is placing the blame on Cognizant, “the reality is that Clorox hired Cognizant for a narrow scope of help desk services which Cognizant reasonably performed. Cognizant did not manage cybersecurity for Clorox,” the spokesperson said.


Digital sovereignty becomes a matter of resilience for Europe

Open-source and decentralized technologies are essential to advancing Europe’s strategic autonomy. Across cybersecurity, communications, and foundational AI, we’re seeing growing support for open-source infrastructure, now treated with the same strategic importance once reserved for energy, water and transportation. The long-term goal is becoming clear: not to sever global ties, but to reduce dependencies by building credible, European-owned alternatives to foreign-dominated systems. Open-source is a cornerstone of this effort. It empowers European developers and companies to innovate quickly and transparently, with full visibility and control, essential for trust and sovereignty. Decentralized systems complement this by increasing resilience against cyber threats, monopolistic practices and commercial overreach by “big tech”. While public investment is important, what Europe needs most is a more “risk-on” tech environment, one that rewards ambition, accelerated growth and enables European players to scale and compete globally. Strategic autonomy won’t be achieved by funding alone, but by creating the right innovation and investment climate for open technologies to thrive. Many sovereign platforms emphasize end-to-end encryption, data residency, and open standards. Are these enough to ensure trust, or is more needed to truly protect digital independence?



Building better platforms with continuous discovery

Platform teams are often judged by stability, not creativity. Balancing discovery with uptime and reliability takes effort. So does breaking out of the “tickets and delivery” cycle to explore problems upstream. But the teams that manage it? They build platforms that people want to use, not just have to use. Start by blocking time for discovery in your sprint planning, measuring both adoption and friction metrics, and most importantly, talking to your users periodically rather than waiting for them to come to you with problems. Cultural shifts like this take time because you're not just changing the process; you're changing what people believe is acceptable or expected. That kind of change doesn't happen just because leadership says it should, or because a manager adds a new agenda to planning meetings. It sticks when ICs feel inspired and safe enough to work differently and when managers back that up with support and consistency. Sometimes a C-suite champion helps set the tone, but day-to-day, it's middle managers and senior ICs who do the slow, steady work of normalizing new behavior. You need repeated proof that it's okay to pause and ask why, to explore, to admit uncertainty. Without that psychological safety, people just go back to what they know: deliverables and deadlines. 


AI-enabled software development: Risk of skill erosion or catalyst for growth?

We need to reframe AI not as a rival, but as a tool—one that has its own pros and cons and can extend human capability, not devalue it. This shift in perspective opens the door to a broader understanding of what it means to be a skilled engineer today. Using AI doesn’t eliminate the need for expertise—it changes the nature of that expertise. Classical programming, once central to the developer’s identity, becomes one part of a larger repertoire. In its place emerge new competencies: critical evaluation, architectural reasoning, prompt literacy, source skepticism, interpretative judgment. These are not hard skills, but meta-cognitive abilities—skills that require us to think about how we think. We’re not losing cognitive effort—we’re relocating it. This transformation mirrors earlier technological shifts. ... Some of the early adopters of AI enablement are already looking ahead—not just at the savings from replacing employees with AI, but at the additional gains those savings might unlock. With strategic investment and redesigned expectations, AI can become a growth driver—not just a cost-cutting tool. But upskilling alone isn’t enough. As organizations embed AI deeper into the development workflow, they must also confront the technical risks that come with automation. The promise of increased productivity can be undermined if these tools are applied without adequate context, oversight, or infrastructure.