
Daily Tech Digest - August 01, 2025


Quote for the day:

“Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability.” -- Patrick Lencioni


It’s time to sound the alarm on water sector cybersecurity

The U.S. Environmental Protection Agency (EPA) identified 97 drinking water systems serving approximately 26.6 million users as having either critical or high-risk cybersecurity vulnerabilities. Water utility leaders are especially worried about ransomware, malware, and phishing attacks. American Water, the largest water and wastewater utility company in the US, experienced a cybersecurity incident that forced the company to shut down some of its systems. That came shortly after a similar incident forced Arkansas City’s water treatment facility to temporarily switch to manual operations. These attacks are not limited to the US. Recently, UK-based Southern Water admitted that criminals had breached its IT systems. In Denmark, hackers targeted the consumer data services of water provider Fanø Vand, resulting in data theft and operational hijack. These incidents show that this is a global risk, and authorities believe they may be the work of foreign actors. ... The EU is taking a serious approach to cybersecurity, with stricter enforcement and long-term investment in essential services. Through the NIS2 Directive, member states are required to follow security standards, report incidents, and coordinate national oversight. These steps are designed to help utilities strengthen their defenses and improve resilience.


AI and the Democratization of Cybercrime

Cheap, off-the-shelf language models are erasing the technical hurdles. FraudGPT and WormGPT subscriptions start at roughly $200 per month, promising ‘undetectable’ malware, flawless spear-phishing prose, and step-by-step exploit guidance. An aspiring criminal no longer needs the technical knowledge to tweak GitHub proof-of-concepts. They paste a prompt such as ‘Write a PowerShell loader that evades EDR’ and receive usable code in seconds. ... Researchers pushed the envelope further with ReaperAI and AutoAttacker, proof-of-concept ‘agentic’ systems that chain LLM reasoning with vulnerability scanners and exploit libraries. In controlled tests, they breached outdated Web servers, deployed ransomware, and negotiated payment over Tor, without human input once launched. Fully automated cyberattacks are just around the corner. ... Core defensive practice now revolves around four themes. First, reducing the attack surface through relentless automated patching. Second, assuming breach via Zero-Trust segmentation and immutable off-line backups that neuter double-extortion leverage. Third, hardening identity with universal multi-factor authentication (MFA) and phishing-resistant authentication. Finally, exercising incident-response plans with table-top and red-team drills that mirror AI-assisted adversaries.


Digital Twins and AI: Powering the future of creativity at Nestlé

NVIDIA Omniverse on Azure allows for building and seamlessly integrating advanced simulation and generative AI into existing 3D workflows. This cloud-based platform includes APIs and services enabling developers to easily integrate OpenUSD, as well as other sensor and rendering applications. OpenUSD’s capabilities accelerate workflows, teams, and projects when creating 3D assets and environments for large-scale, AI-enabled virtual worlds. The Omniverse Development Workstation on Azure accelerates the process of building Omniverse apps and tools, removing the time and complexity of configuring individual software packages and GPU drivers. With NVIDIA Omniverse on Azure and OpenUSD, marketing teams can create ultra-realistic 3D product previews and environments so that customers can explore a retailer’s products in an engaging and informative way. The platform also can deliver immersive augmented and virtual reality experiences for customers, such as virtually test-driving a car or seeing how new furniture pieces would look in an existing space. For retailers, NVIDIA Omniverse can help create digital twins of stores or in-store displays to simulate and evaluate different layouts to optimize how customers interact with them. 
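
To make the OpenUSD workflow concrete, here is a minimal sketch using the pxr Python bindings that ship with OpenUSD to author a tiny stage with a placeholder product prim. The file name and prim paths are illustrative assumptions; a real Omniverse pipeline would layer far richer assets and materials on top.

```python
# Minimal OpenUSD sketch: author a simple stage with one placeholder product prim.
# Assumes the pxr Python bindings that ship with OpenUSD are installed.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("product_preview.usda")   # hypothetical file name
world = UsdGeom.Xform.Define(stage, "/World")         # root transform
product = UsdGeom.Sphere.Define(stage, "/World/ProductPlaceholder")
product.GetRadiusAttr().Set(0.5)                      # placeholder geometry

UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)       # common convention for DCC tools
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```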


Why data deletion – not retention – is the next big cyber defence

Emerging data privacy regulations, coupled with escalating cybersecurity risks, are flipping the script. Organisations can no longer afford to treat deletion as an afterthought. From compliance violations to breach fallout, retaining data beyond its lifecycle has a real downside. Many organisations still don’t have a reliable, scalable way to delete data. Policies may exist on paper, but consistent execution across environments, from cloud storage to aging legacy systems, is rare. That gap is no longer sustainable. In fact, failing to delete data when legally required is quickly becoming a regulatory, security, and reputational risk. ... From a cybersecurity perspective, every byte of retained data is a potential breach exposure. In many recent cases, post-incident investigations have uncovered massive amounts of sensitive data that should have been deleted, turning routine breaches into high-stakes regulatory events. But beyond the legal risks, excess data carries hidden operational costs. ... Most CISOs, privacy officers, and IT leaders understand the risks. But deletion is difficult to operationalise. Data lives across multiple systems, formats, and departments. Some repositories are outdated or no longer supported. Others are siloed or partially controlled by third parties. And in many cases, existing tools lack the integration or governance controls needed to automate deletion at scale.
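
As a purely illustrative sketch of what operationalised deletion can look like, the snippet below sweeps a single table for records older than an assumed three-year retention period. The table, column names, and policy are hypothetical; a real programme would add legal-hold checks, audit logging, and coverage of every repository mentioned above.

```python
# Hypothetical retention sweep: delete records past an assumed retention period.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 365 * 3  # assumed policy: delete customer events older than 3 years

def sweep_expired(conn: sqlite3.Connection) -> int:
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    cur = conn.execute(
        "DELETE FROM customer_events WHERE created_at < ?", (cutoff.isoformat(),)
    )
    conn.commit()
    return cur.rowcount  # rows removed, worth recording in an audit trail

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer_events (id INTEGER, created_at TEXT)")
    conn.execute("INSERT INTO customer_events VALUES (1, '2019-01-01T00:00:00+00:00')")
    conn.execute("INSERT INTO customer_events VALUES (2, ?)",
                 (datetime.now(timezone.utc).isoformat(),))
    print(f"deleted {sweep_expired(conn)} expired row(s)")  # -> 1
```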


IT Strategies to Navigate the Ever-Changing Digital Workspace

IT teams need to look for flexible, agnostic workspace management solutions that can respond to whether endpoints are running Windows 11, MacOS, ChromeOS, virtual desktops, or cloud PCs. They want to future-proof their endpoint investments, knowing that their workspace management must be highly adaptable as business requirements change. To support this disparate endpoint estate, DEX solutions have come to the forefront as they have evolved from a one-off tool for monitoring employee experience to an integrated platform by which administrators can manage endpoints, security tools, and performance remediation. ... In the composite environment, IT has the challenge of securing workflows across the endpoint estate, regardless of delivery platform, and doing so without interfering with the employee experience. As the number of both installed and SaaS applications grows, IT teams can leverage automation to streamline patching and other security updates and to monitor SaaS credentials effectively. Automation becomes invaluable for operational efficiency across an increasingly complex application landscape. Another security challenge is the existence of ‘Shadow SaaS’, in which employees, as with shadow IT and AI, use unsanctioned tools they believe will help productivity.


Who’s Really Behind the Mask? Combatting Identity Fraud

Effective identity investigations start with asking the right questions and not merely responding to alerts. Security teams need to look deeper: Is this login location normal for the user? Is the device consistent with their normal configuration? Is the action standard for their role? Are there anomalies between systems? These questions create necessary context, enabling defenders to differentiate between standard deviations and hostile activity. Without that investigative attitude, security teams might pursue false positives or overlook actual threats. By structuring identity events with focused, behavior-based questions, analysts can get to the heart of the activity and react with accuracy and confidence. ... Identity theft often hides in plain sight, flourishing in the ordinary gaps between expected and actual behavior. Its deception lies in normalcy, where activity at the surface appears authentic but deviates quietly from established patterns. That’s why trust in a multi-source approach to truth is essential. Connecting insights from network traffic, authentication logs, application access, email interactions, and external integrations can help teams build a context-aware, layered picture of every user. This blended view helps uncover subtle discrepancies, confirm anomalies, and shed light on threats that routine detection will otherwise overlook, minimizing false positives and revealing actual risks.
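
The snippet below is a toy illustration of that question-driven triage: it scores a single identity event against a user's baseline across location, device, and action. The fields, weights, and threshold are illustrative assumptions, not a production detection model.

```python
# Toy behavior-based triage: score an identity event against the user's baseline.
def score_identity_event(event: dict, baseline: dict) -> float:
    score = 0.0
    if event["geo"] not in baseline["usual_geos"]:
        score += 0.4                       # is this login location normal for the user?
    if event["device_id"] not in baseline["known_devices"]:
        score += 0.3                       # is the device consistent with their configuration?
    if event["action"] not in baseline["typical_actions"]:
        score += 0.3                       # is the action standard for their role?
    return score

baseline = {"usual_geos": {"GB"}, "known_devices": {"laptop-123"}, "typical_actions": {"read"}}
event = {"geo": "RO", "device_id": "unknown-77", "action": "export_all"}
print("investigate" if score_identity_event(event, baseline) >= 0.6 else "routine")
```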


The hidden crisis behind AI’s promise: Why data quality became an afterthought

Addressing AI data quality requires more human involvement, not less. Organizations need data stewardship frameworks that include subject matter experts who understand not just technical data structures, but business context and implications. These data stewards can identify subtle but crucial distinctions that pure technical analysis might miss. In educational technology, for example, combining parents, teachers, and students into a single “users” category for analysis would produce meaningless insights. Someone with domain expertise knows these groups serve fundamentally different roles and should be analyzed separately. ... Despite the industry’s excitement about new AI model releases, a more disciplined approach focused on clearly defined use cases rather than maximum data exposure proves more effective. Instead of opting for more data to be shared with AI, sticking to the basics and thinking about product concepts produces better results. You don’t want to just throw a lot of good stuff in a can and assume that something good will happen. ... Future AI systems will need “data entitlement” capabilities that automatically understand and respect access controls and privacy requirements. This goes beyond current approaches that require manual configuration of data permissions for each AI application.
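
A small, hypothetical pandas example of the stewardship point above: a blended "users" average hides differences that a per-role view makes obvious. The data and column names are invented for illustration.

```python
# Hypothetical engagement data: one blended average vs. the per-role view a steward would ask for.
import pandas as pd

events = pd.DataFrame({
    "role": ["teacher", "student", "parent", "student", "teacher", "parent"],
    "minutes_active": [42, 15, 3, 22, 55, 5],
})

print(events["minutes_active"].mean())                  # blended "users" average hides the story
print(events.groupby("role")["minutes_active"].mean())  # per-role view preserves domain context
```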


Agentic AI is reshaping the API landscape

With agentic AI, APIs evolve from passive endpoints into active dialogue partners. They need to handle more than single, fixed transactions. Instead, APIs must support iterative engagement, where agents adjust their calls based on prior results and current context. This leads to more flexible communication models. For instance, an agent might begin by querying one API to gather user data, process it internally, and then call another endpoint to trigger a workflow. APIs in such environments must be reliable, context-aware, and able to handle higher levels of interaction – including unexpected sequences of calls. One of the most powerful capabilities of agentic AI is its ability to coordinate complex workflows across multiple APIs. Agents can manage chains of requests, evaluate priorities, handle exceptions, and optimise processes in real time. ... Agentic AI is already setting the stage for more responsive, autonomous API ecosystems. Get ready for systems that can foresee workload shifts, self-tune performance, and coordinate across services without waiting for any command from a human. Soon, agentic AI will enable seamless collaboration between multiple AI systems—each managing its own workflow, yet contributing to larger, unified business goals. To support this evolution, APIs themselves must transform.
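
The sketch below shows the kind of call chain described here: an agent queries one API, applies some internal logic, then calls a second endpoint based on the result. Both endpoints and the decision rule are hypothetical placeholders; a real agent would add retries, authentication, and guardrails.

```python
# Sketch of an agent-style call chain: query one API, decide, then trigger a workflow.
import requests

def run_agent(user_id: str) -> dict:
    # Step 1: gather context from a (hypothetical) user-data API.
    profile = requests.get(
        f"https://api.example.com/users/{user_id}", timeout=10
    ).json()

    # Step 2: internal reasoning - a simple rule standing in for the LLM's decision.
    action = "escalate" if profile.get("risk_score", 0) > 0.8 else "notify"

    # Step 3: call a second (hypothetical) endpoint based on the prior result.
    result = requests.post(
        "https://api.example.com/workflows",
        json={"user_id": user_id, "action": action},
        timeout=10,
    )
    result.raise_for_status()
    return result.json()
```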


Removing Technical Debt Supports Cybersecurity and Incident Response for SMBs

Technical debt is a business’s running tally of aging or defunct software and systems. While workarounds can keep the lights on, they come with risks. For instance, there are operational challenges and expenses associated with managing older systems. Additionally, necessary expenses can accumulate if technical debt is allowed to get out of control, ballooning the costs of a proper fix. While eliminating technical debt is challenging, it’s fundamentally an investment in a business’s future security. Excess technical debt doesn’t just lead to operational inefficiencies. It also creates cybersecurity weaknesses that inhibit threat detection and response. ... “As threats evolve, technical debt becomes a roadblock,” says Jeff Olson, director of software-defined WAN product and technical marketing at Aruba, a Hewlett Packard Enterprise company. “Security protocols and standards have advanced to address common threats, but if you have older technology, you’re at risk until you can upgrade your devices.” Upgrades can prove challenging, however. ... The first step to reducing technical debt is to act now, Olson says. “Sweating it out” for another two or three years will only make things worse. Waiting also stymies innovation, as reducing technical debt can help SMBs take advantage of advanced technologies such as artificial intelligence.


Third-party risk is everyone’s problem: What CISOs need to know now

The best CISOs now operate less like technical gatekeepers and more like orchestral conductors, aligning procurement, legal, finance, and operations around a shared expectation of risk awareness. ... The responsibility for managing third-party risk no longer rests solely on IT security teams. CISOs must transform their roles from technical protectors to strategic leaders who influence enterprise risk management at every level. This evolution involves: Embracing enterprise-wide collaboration: Effective management of third-party risk requires cooperation among diverse departments such as procurement, legal, finance, and operations. By collaborating across the organization, CISOs ensure that third-party risk management is comprehensive and proactive rather than reactive. Integrating risk management into governance frameworks: Third-party risk should be a top agenda item in board meetings and strategic planning sessions. CISOs need to work with senior leadership to embed vendor risk management into the organization’s overall risk landscape. Fostering transparency and accountability: Establishing clear reporting lines and protocols ensures that issues related to third-party risk are promptly escalated and addressed. Accountability should span every level of the organization to ensure effective risk management.

Daily Tech Digest - July 04, 2025


Quote for the day:

"Rank does not confer privilege or give power. It imposes responsibility." -- Peter F. Drucker


The one secret to using genAI to boost your brain

Research published by Carnegie Mellon University this month found that groups that turned to Google Search came up with fewer creative ideas during brainstorming sessions compared to groups without access to Google Search. Not only did each Google Search group come up with the same ideas as the other Search groups, they also presented them in the same order, suggesting that the search results replaced their actual creativity. The researchers called this a “fixation effect.” When people see a few examples, they tend to get stuck on those and struggle to think beyond them. ... Our knowledge of and perspective on the world become less our own and more what the algorithms feed us. They do this by showing us content that triggers strong feelings — anger, joy, fear. Instead of feeling a full range of emotions, we bounce between extremes. Researchers call this “emotional dysregulation.” The constant flood of attention-grabbing posts can make it hard to focus or feel calm. AI algorithms on social media grab our attention with endless new content. ... To elevate both the quality of your work and the performance of your mind, begin by crafting your paper, email, or post entirely on your own, without any assistance from genAI tools. Only after you have thoroughly explored a topic and pushed your own capabilities should you turn to chatbots, using them as a catalyst to further enhance your output, generate new ideas, and refine your results.


How AI-Powered DevSecOps Can Reinvent Agile Planning

The shift toward AI-enhanced agile planning requires a practical assessment of your current processes and tool chain. Start by evaluating whether your current processes create bottlenecks between development and deployment, looking for gaps where agile ceremonies exist, but traditional approval workflows still dominate critical path decisions. Next, assess how much time your teams spend on planning ceremonies versus actual development work. Consider whether AI could automate the administrative aspects, such as backlog grooming, estimation sessions and status updates, while preserving human strategic input on priorities and technical decisions. Examine your current tool chain to identify where manual coordination is required between the planning, development and deployment phases. Look for opportunities where AI can automate data synchronization and provide predictive insights about capacity and timeline risks, reducing the context switching that fragments developer focus. Finally, review your current planning overhead and identify which administrative tasks can be automated, allowing your team to focus on delivering customer value and making strategic technical decisions rather than adhering to process compliance. The goal is not to eliminate human judgment but to elevate it from routine tasks to the strategic thinking that drives innovation.


The AI power swingers

AI workloads - generative AI and the training of large language models in particular - demand a profusion of power in just a fraction of a second, and this in itself brings some complications. “When you are engaging a training model, you engage all of these GPUs simultaneously, and there’s a very quick rise to pretty much maximum power, and we are seeing that at a sub-second pace,” Ed Ansett, director at I3 Solutions, tells DCD. “The problem is that you have, for example, 50MW of IT load that the utility is about to see, but it will see it very quickly, and the utility won’t be able to respond that quickly. It will cause frequency problems, and the utility will almost certainly disconnect the data center, so there needs to be a way of buffering those workloads.” ... Despite this, AWS and Annapurna Labs have made some moves with the second generation of their home-grown AI accelerator - Trainium. These chips differ from GPUs both in their architecture and in their end capabilities. “If you look at GPU architecture, it's thousands of small tensor cores, small CPUs that are all running in parallel. Here, the architecture is called a systolic array, which is a completely different architecture,” says Gadi Hutt, director of product and customer engineering at Annapurna Labs. “Basically data flows through the logic of the systolic array that then does the efficient linear algebra acceleration.”
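
As a conceptual aid, the toy Python function below mimics the multiply-accumulate dataflow a systolic array performs for matrix multiplication, with operands streaming through one wavefront at a time. It is an illustration of the dataflow idea only, not how Trainium is actually programmed.

```python
# Toy illustration of the multiply-accumulate flow in a systolic array.
# Each "cell" accumulates one output element as operands stream past it.
import numpy as np

def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    # In hardware every cell updates in parallel each cycle; Python loops serialize it.
    for step in range(k):                 # one wavefront of operands per step
        for i in range(n):
            for j in range(m):
                C[i, j] += A[i, step] * B[step, j]
    return C

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)   # same result as ordinary matmul
```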


Why database security needs a seat at the cyber strategy table

Too often, security gaps exist because database environments are siloed from the broader IT infrastructure, making visibility and coordination difficult. This is especially true in hybrid environments, where legacy on-premises systems coexist with cloud-based assets. The lack of centralised oversight can allow misconfigurations and outdated software to go unnoticed, until it’s too late. ... Comprehensive monitoring plays a central role in securing database environments. Organisations need visibility into performance, availability, and security indicators in real time. Solutions like Paessler PRTG enable IT and security teams to proactively detect deviations from the norm, whether it’s a sudden spike in access requests or performance degradation that might signal malicious activity. Monitoring also helps bridge the gap between IT operations and security teams. ... Ultimately, database security is not just about technology, it’s about visibility, accountability, and ownership. Security teams must collaborate with database administrators, IT operations, and compliance functions to ensure policies are enforced, risks are mitigated, and monitoring is continuous.


Cyber Vaults: How Regulated Sectors Fight Cyberattacks

Cyber vaults are emerging as a key way organizations achieve that assurance. These highly secure environments protect immutable copies of mission-critical data – typically the data that enables the “minimum viable company” (i.e. the essential functions and systems that must remain). Cyber vaults achieve this by creating logically and physically isolated environments that sever the connection between production and backup systems. This isolation ensures known-good copies remain untouched, ready for recovery in the event of a ransomware attack. ... A cyber vault only fulfills its promise when built on all three pillars. Each must function as an enforceable control. Increasingly, boards and regulators aren’t just expecting these controls — they’re demanding proof they are in place, operational, and effective. Leave one out, and the entire recovery strategy is at risk. ... In regulated industries, failure to demonstrate recoverability can lead to fines, public scrutiny, and regulatory sanctions. These pressures are elevating backup and recovery from IT hygiene to boardroom priority, where resilience is increasingly viewed as a fiduciary responsibility. Organizations are coming to terms with a new reality: prevention will fail. Recovery is what defines resilience. It’s not just about whether you have backups – it’s whether you can trust them to work when it matters most.


Africa’s cybersecurity crisis and the push to mobilize communities to safeguard a digital future

Trained youth are also acting as knowledge multipliers. After receiving foundational cybersecurity education, many go on to share their insights with parents, siblings, and local networks. This creates a ripple effect of awareness and behavioral change, extending far beyond formal institutions. In regions where internet use is rising faster than formal education systems can adapt, such peer-to-peer education is proving invaluable. Beyond defense, cybersecurity also offers a pathway to economic opportunity. As demand for skilled professionals grows, early exposure to the field can open doors to employment in both local and global markets. This supports broader development goals by linking digital safety with job creation and innovation. ... Africa’s digital future cannot be built on insecure foundations. Cybersecurity is not a luxury, it is a prerequisite for sustainable growth, social trust, and national security. Grassroots efforts across the continent are already demonstrating that meaningful progress is possible, even in resource-constrained environments. However, these efforts must be scaled, formalized, and supported at the highest levels. By equipping communities, especially youth, with the knowledge and tools to defend themselves online, a resilient digital culture can be cultivated from the ground up. 


Gartner unveils top strategic software engineering trends for 2025 and beyond

Large language models (LLMs) are enabling software to interact more intelligently and autonomously. Gartner predicts that by 2027, 55% of engineering teams will build LLM-based features. Successful adoption will require rethinking strategies, upskilling teams, and implementing robust guardrails for risk management. ... Organizations will increasingly integrate generative AI (GenAI) capabilities into internal developer platforms, with 70% of platform teams expected to do so by 2027. This trend supports discoverability, security, and governance while accelerating AI-powered application development. ... High talent density—building teams with a high concentration of skilled professionals—will be a crucial differentiator. Organizations should go beyond traditional hiring to foster a culture of continuous learning and collaboration, enabling greater adaptability and customer value delivery. ... Open GenAI models are gaining traction for their flexibility, cost-effectiveness, and freedom from vendor lock-in. By 2028, Gartner forecasts 30% of global GenAI spending will target open models customized for domain-specific needs, making advanced AI more accessible and affordable. ... Green software engineering will become vital to meet sustainability goals, focusing on carbon-efficient and carbon-aware practices across the entire software lifecycle. 


6 techniques to reduce cloud observability cost

Cloud observability is essential for most modern organizations: it provides the deep visibility needed to keep applications working, surface problems and the little bumps along the way, and maintain a smooth overall user experience. Meanwhile, the growing volume of telemetry data piling up - logs, metrics and traces - becomes costlier by the minute. But one thing is clear: You do not have to compromise visibility just to reduce costs. ... For high-volume data streams (especially traces and logs), consider some intelligent sampling methods that will allow you to capture a statistically significant subset of data, thus reducing volume while still allowing for anomaly detection and trend analysis. ... Consider the periodicity of metric collection. Do you really need to scrape metrics every 10 seconds, when every 60 seconds would have been enough to get a view of this service? Adjusting these intervals can greatly reduce the number of data points. ... Utilize autoscaling to automatically scale the compute capacity proportional to demand so that you pay only for what you actually use. This removes over-allocation of resources during low-usage times. ... For predictable workloads, check out the discounts offered by the cloud providers in the form of Reserved Instances or Savings Plans. For the fault-tolerant and interruptible workloads, Spot instances offer a considerable discount. ...
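
As one illustration of the sampling idea, the sketch below keeps every error span but only a fraction of routine ones. The 10% rate and span fields are assumptions; tools such as OpenTelemetry provide configurable samplers built on the same principle.

```python
# Minimal head-sampling sketch: keep all errors, a fraction of everything else.
import random

SAMPLE_RATE = 0.10  # assumed rate: keep ~10% of non-error spans

def should_keep(span: dict) -> bool:
    if span.get("status") == "error":
        return True                        # never drop anomalies
    return random.random() < SAMPLE_RATE   # statistically representative subset

spans = [{"status": "ok"}] * 1000 + [{"status": "error"}] * 5
kept = [s for s in spans if should_keep(s)]
print(f"kept {len(kept)} of {len(spans)} spans")
```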


Three Tech Moves That Pay Off for Small Banks Fighting Margin Squeeze

For smaller institutions looking to strengthen profitability, tech may now be the most direct and controllable lever they have at their disposal. Projects that might once have been seen as back-office improvements are starting to look like strategic choices that can enhance performance — helping banks reduce cost per account, increase revenue per relationship, and act with greater speed and precision. ... Banks are using robotic process automation (RPA) to handle repetitive, rules-based tasks that previously absorbed valuable staff time. These bots operate across existing platforms, moving data, triggering workflows, and reducing manual error without requiring major system changes. In parallel, chatbot-style knowledge hubs help frontline staff find information quickly and consistently — reducing service bottlenecks and freeing people to focus on more complex work. These are targeted projects with measurable operational impact. ... Banks can also use customer data to guide customer acquisition strategies. By combining internal insights with external datasets — like NAICS codes, firmographics, or geographic overlays — institutions can build profiles of their most valuable relationships and find others that fit the same criteria. This kind of modeling helps sales and business development teams focus on the opportunities with the greatest long-term potential.


The Cost of Bad Data: Why Time Series Integrity Matters More Than You Think

Data plays a critical role in shaping operational decisions. From sensor streams in factories to API response times in cloud environments, organizations rely on time-stamped metrics to understand what’s happening and determine what to do next. But when that data is inaccurate or incomplete, systems make the wrong call. Teams waste time chasing false alerts, miss critical anomalies, and make high-stakes decisions based on flawed assumptions. When trust in data breaks down, risk increases, response slows, and costs rise. ... Real-time transformations shape data as it flows through the system. Apache Arrow Flight enables high-speed streaming, and SQL or Python logic transforms values on the fly. Whether enriching metadata, filtering out noise, or converting units, InfluxDB 3 handles these tasks before data reaches long-term storage. A manufacturing facility using temperature sensors in production lines can automatically convert Fahrenheit to Celsius, label zones, and discard noisy heartbeat values before the data hits storage. This gives operators clean dashboards and real-time control insights without requiring extra processing time or manual data correction—saving teams hours of rework and helping businesses maintain fast, reliable decision-making under pressure.
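
The snippet below is a generic Python sketch of the kind of in-flight transformation described: converting Fahrenheit to Celsius, labelling zones, and discarding heartbeat noise before points reach storage. The field names and zone map are assumptions, and this is not the InfluxDB 3 processing API itself.

```python
# Generic in-flight transformation sketch: convert units, label zones, drop noise.
ZONE_MAP = {"sensor-01": "paint-line", "sensor-02": "packaging"}   # illustrative mapping

def transform(point: dict) -> dict | None:
    if point.get("type") == "heartbeat":
        return None                                   # discard noisy heartbeat values
    celsius = (point["temp_f"] - 32) * 5.0 / 9.0      # Fahrenheit -> Celsius
    return {
        "sensor": point["sensor"],
        "zone": ZONE_MAP.get(point["sensor"], "unassigned"),
        "temp_c": round(celsius, 2),
        "ts": point["ts"],
    }

raw = {"sensor": "sensor-01", "type": "reading", "temp_f": 98.6, "ts": 1722470400}
print(transform(raw))   # zone-labelled reading in Celsius, ready for storage
```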

Daily Tech Digest - July 03, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti


The Goldilocks Theory – preparing for Q-Day ‘just right’

When it comes to quantum readiness, businesses currently have two options: Quantum key distribution (QKD) and post quantum cryptography (PQC). Of these, PQC reigns supreme. Here’s why. On the one hand, you have QKD which leverages principles of quantum physics, such as superposition, to securely distribute encryption keys. Although great in theory, it needs extensive new infrastructure, including bespoke networks and highly specialised hardware. More importantly, it also lacks authentication capabilities, severely limiting its practical utility. PQC, on the other hand, comprises classical cryptographic algorithms specifically designed to withstand quantum attacks. It can be integrated into existing digital infrastructures with minimal disruption. ... Imagine installing new quantum-safe algorithms prematurely, only to discover later they’re vulnerable, incompatible with emerging standards, or impractical at scale. This could have the opposite effect and could inadvertently increase attack surface and bring severe operational headaches, ironically becoming less secure. But delaying migration for too long also poses serious risks. Malicious actors could be already harvesting encrypted data, planning to decrypt it when quantum technology matures – so businesses protecting sensitive data such as financial records, personal details, intellectual property cannot afford indefinite delays.


Sovereign by Design: Data Control in a Borderless World

The regulatory framework for digital sovereignty is a national priority. The EU has set the pace with GDPR and GAIA-X. It prioritizes data residency and local infrastructure. China's cybersecurity law and personal information protection law enforce strict data localization. India's DPDP Act mandates local storage for sensitive data, aligning with its digital self-reliance vision through platforms such as Aadhaar. Russia's federal law No. 242-FZ requires citizen data to stay within the country for the sake of national security. Australia's privacy act focuses on data privacy, especially for health records, and Canada's PIPEDA encourages local storage for government data. Saudi Arabia's personal data protection law enforces localization for sensitive sectors, and Indonesia's personal data protection law covers all citizen-centric data. Singapore's PDPA balances privacy with global data flows, and Brazil's LGPD, mirroring the EU's GDPR, mandates the protection of privacy and fundamental rights of its citizens. ... Tech companies have little option but to comply with the growing demands of digital sovereignty. For example, Amazon Web Services has a digital sovereignty pledge, committing to "a comprehensive set of sovereignty controls and features in the cloud" without compromising performance.


Agentic AI Governance and Data Quality Management in Modern Solutions

Agentic AI governance is a framework that ensures artificial intelligence systems operate within defined ethical, legal, and technical boundaries. This governance is crucial for maintaining trust, compliance, and operational efficiency, especially in industries such as Banking, Financial Services, Insurance, and Capital Markets. In tandem with robust data quality management, Agentic AI governance can substantially enhance the reliability and effectiveness of AI-driven solutions. ... In industries such as Banking, Financial Services, Insurance, and Capital Markets, the importance of Agentic AI governance cannot be overstated. These sectors deal with vast amounts of sensitive data and require high levels of accuracy, security, and compliance. Here’s why Agentic AI governance is essential: Enhanced Trust: Proper governance fosters trust among stakeholders by ensuring AI systems are transparent, fair, and reliable. Regulatory Compliance: Adherence to legal and regulatory requirements helps avoid penalties and safeguard against legal risks. Operational Efficiency: By mitigating risks and ensuring accuracy, AI governance enhances overall operational efficiency and decision-making. Protection of Sensitive Data: Robust governance frameworks protect sensitive financial data from breaches and misuse, ensuring privacy and security. 


Fundamentals of Dimensional Data Modeling

Keeping the dimensions separate from facts makes it easier for analysts to slice-and-dice and filter data to align with the relevant context underlying a business problem. Data modelers organize these facts and descriptive dimensions into separate tables within the data warehouse, aligning them with the different subject areas and business processes. ... Dimensional modeling provides a basis for meaningful analytics gathered from a data warehouse for many reasons. Its processes standardize dimensions by presenting the data blueprint intuitively. Additionally, dimensional data modeling proves to be flexible as business needs evolve. The data warehouse adapts to emerging business contexts through the concept of slowly changing dimensions (SCD). ... Alignment in the design requires these processes, and data governance plays an integral role in getting there. Once the organization is on the same page about the dimensional model’s design, it chooses the best kind of implementation. Implementation choices include the star or snowflake schema around a fact. When organizations have multiple facts and dimensions, they use a cube. A dimensional model defines how a data warehouse architecture, or one of its components, should be built using good design and implementation.
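
A minimal, hypothetical star-schema example using pandas: a sales fact table joined to date and product dimensions, then aggregated by a descriptive attribute, which is the slice-and-dice pattern described above. In a warehouse this would typically be SQL over star or snowflake tables; the table and column names here are illustrative.

```python
# Minimal star-schema sketch: a fact table joined to two dimensions, then sliced.
import pandas as pd

fact_sales = pd.DataFrame({
    "date_key": [20250101, 20250101, 20250102],
    "product_key": [1, 2, 1],
    "amount": [120.0, 75.5, 60.0],
})
dim_date = pd.DataFrame({"date_key": [20250101, 20250102], "month": ["Jan", "Jan"]})
dim_product = pd.DataFrame({"product_key": [1, 2], "category": ["Coffee", "Water"]})

# Slice-and-dice: join facts to dimensions, then aggregate by descriptive attributes.
report = (
    fact_sales
    .merge(dim_date, on="date_key")
    .merge(dim_product, on="product_key")
    .groupby(["month", "category"])["amount"].sum()
)
print(report)
```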


IDE Extensions Pose Hidden Risks to Software Supply Chain

The latest research, published this week by application security vendor OX Security, reveals the hidden dangers of verified IDE extensions. While IDEs provide an array of development tools and features, there are a variety of third-party extensions that offer additional capabilities and are available in both official marketplaces and external websites. ... But OX researchers realized they could add functionality to verified extensions after the fact and still maintain the checkmark icon. After analyzing traffic for Visual Studio Code, the researchers found a server request to the marketplace that determines whether the extension is verified; they discovered they could modify the values featured in the server request and maintain the verification status even after creating malicious versions of the approved extensions. ... Using this attack technique, a threat actor could inject malicious code into verified and seemingly safe extensions that would maintain their verified status. "This can result in arbitrary code execution on developers' workstations without their knowledge, as the extension appears trusted," Siman-Tov Bustan and Zadok wrote. "Therefore, relying solely on the verified symbol of extensions is inadvisable." ... "It only takes one developer to download one of these extensions," he says. "And we're not talking about lateral movement. ..."


Business Case for Agentic AI SOC Analysts

A key driver behind the business case for agentic AI in the SOC is the acute shortage of skilled security analysts. The global cybersecurity workforce gap is now estimated at 4 million professionals, but the real bottleneck for most organizations is the scarcity of experienced analysts with the expertise to triage, investigate, and respond to modern threats. One ISC2 survey report from 2024 shows that 60% of organizations worldwide reported staff shortages significantly impacting their ability to secure their organizations, with another report from the World Economic Forum showing that just 15% of organizations believe they have the right people with the right skills to properly respond to a cybersecurity incident. Existing teams are stretched thin, often forced to prioritize which alerts to investigate and which to leave unaddressed. As previously mentioned, the flood of false positives in most SOCs means that even the most experienced analysts are too distracted by noise, increasing exposure to business-impacting incidents. Given these realities, simply adding more headcount is neither feasible nor sustainable. Instead, organizations must focus on maximizing the impact of their existing skilled staff. The AI SOC Analyst addresses this by automating routine Tier 1 tasks, filtering out noise, and surfacing the alerts that truly require human judgment.


Microservice Madness: Debunking Myths and Exposing Pitfalls

Microservices will reduce dependencies, because they force you to serialize your types into generic graph objects (read: JSON or XML or something similar). This implies that you can just transform your classes into a generic graph object at its interface edges, and accomplish the exact same thing. ... There are valid arguments for using message brokers, and there are valid arguments for decoupling dependencies. There are even valid arguments for scaling out horizontally by segregating functionality onto different servers. But if your argument in favor of using microservices is "because it eliminates dependencies," you're either crazy, corrupt through to the bone, or you have absolutely no idea what you're talking about (make your pick!) Because you can easily achieve the same amount of decoupling using Active Events and Slots, combined with a generic graph object, in-process, and it will execute 2 billion times faster in production than your "microservice solution" ... "Microservice Architecture" and "Service Oriented Architecture" (SOA) have probably caused more harm to our industry than the financial crisis in 2008 caused to our economy. And the funny thing is, the damage is ongoing because of people repeating mindless superstitious belief systems as if they were the truth.
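
To illustrate the in-process decoupling the author argues for, here is a minimal "slot" registry in Python where callers invoke functionality by name with a plain dict standing in for the generic graph object. This is only a sketch of the idea, not the author's Active Events implementation.

```python
# Minimal in-process "slot" sketch: invoke functionality by name with a generic
# dict payload, so modules stay decoupled without a network hop or serialization.
SLOTS = {}

def slot(name):
    def register(fn):
        SLOTS[name] = fn
        return fn
    return register

def signal(name, payload: dict) -> dict:
    return SLOTS[name](payload)          # in-process dispatch, nothing goes over the wire

@slot("orders.create")
def create_order(args: dict) -> dict:
    return {"order_id": 42, "total": sum(i["price"] for i in args["items"])}

print(signal("orders.create", {"items": [{"price": 9.5}, {"price": 3.0}]}))
```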


Sustainability and social responsibility

Direct-to-chip liquid cooling delivers impressive efficiency but doesn’t manage the entire thermal load. That’s why hybrid systems that combine liquid and traditional air cooling are increasingly popular. These systems offer the ability to fine-tune energy use, reduce reliance on mechanical cooling, and optimize server performance. HiRef offers advanced cooling distribution units (CDUs) that integrate liquid-cooled servers with heat exchangers and support infrastructure like dry coolers and dedicated high-temperature chillers. This integration ensures seamless heat management regardless of local climate or load fluctuations. ... With liquid cooling systems capable of operating at higher temperatures, facilities can increasingly rely on external conditions for passive cooling. This shift not only reduces electricity usage, but also allows for significant operational cost savings over time. But this sustainable future also depends on regulatory compliance, particularly in light of the recently updated F-Gas Regulation, which took effect in March 2024. The EU regulation aims to reduce emissions of fluorinated greenhouse gases to net-zero by 2050 by phasing out harmful high-GWP refrigerants like HFCs. “The F-Gas regulation isn’t directly tailored to the data center sector,” explains Poletto.


Infrastructure Operators Leaving Control Systems Exposed

Threat intelligence firm Censys has scanned the internet twice a month for the last six months, looking for a representative sample composed of four widely used types of ICS devices publicly exposed to the internet. Overall exposure slightly increased from January through June, the firm said Monday. One of the device types Censys scanned for is programmable logic controllers made by Israel-based Unitronics. The firm's Vision-series devices get used in numerous industries, including the water and wastewater sector. Researchers also counted publicly exposed devices built by Israel-based Orpak - a subsidiary of Gilbarco Veeder-Root - that run SiteOmat fuel station automation software. It also looked for devices made by Red Lion that are widely deployed for factory and process automation, as well as in oil and gas environments. It additionally probed for instances of a facilities automation software framework known as Niagara, made by Tridium. ... Report author Emily Austin, principal security researcher at Censys, said some fluctuation over time isn't unusual, given how "services on the internet are often ephemeral by nature." The greatest number of publicly exposed systems were in the United States, except for Unitronics devices, which are also widely used in Australia.


Healthcare CISOs must secure more than what’s regulated

Security must be embedded early and consistently throughout the development lifecycle, and that requires cross-functional alignment and leadership support. Without an understanding of how regulations translate into practical, actionable security controls, CISOs can struggle to achieve traction within fast-paced development environments. ... Security objectives should be mapped to these respective cycles—addressing tactical issues like vulnerability remediation during sprints, while using PI planning cycles to address larger technical and security debt. It’s also critical to position security as an enabler of business continuity and trust, rather than a blocker. Embedding security into existing workflows rather than bolting it on later builds goodwill and ensures more sustainable adoption. ... The key is intentional consolidation. We prioritize tools that serve multiple use cases and are extensible across both DevOps and security functions. For example, choosing solutions that can support infrastructure-as-code security scanning, cloud posture management, and application vulnerability detection within the same ecosystem. Standardizing tools across development and operations not only reduces overhead but also makes it easier to train teams, integrate workflows, and gain unified visibility into risk.

Daily Tech Digest - May 10, 2025


Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.



Building blocks – what’s required for my business to be SECURE?

Zero Trust Architecture involves a set of rules that will ensure that you will not let anyone in without proper validation. You will assume there is a breach. You will reduce privileges to their minimum and activate them only as needed, and you will make sure that devices connecting to your data are protected and monitored. Enclave is all about aligning your data’s sensitivity with your cybersecurity requirements. For example, to download a public document, no authentication is required, but to access your CRM, containing all your customers’ data, you will require a username, password, an extra factor of authentication, and to be in the office. You will not be able to download the data. Two different sensitivities, two experiences. ... The leadership team is the compass for the rest of the company – their north star. To make the right decision during a crisis, you must be prepared to face it. And how do you make sure that you’re not affected by all this adrenaline and stress that is caused by such an event? Practice. I am not saying that you must restore all your company’s backups every weekend. I am saying that once a month, the company executives should run through the plan. ... Most plans that were designed and rehearsed five years ago are now full of holes.
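
A toy policy sketch of the two sensitivity tiers described above: public documents require nothing, CRM access requires MFA plus the office network, and download of CRM data is always denied. The resource names and checks are illustrative assumptions, not a product configuration.

```python
# Toy tiered-access policy: align the control with the data's sensitivity.
def evaluate(resource: str, ctx: dict) -> dict:
    if resource == "public_doc":
        return {"access": True, "download": True}     # low sensitivity, no authentication
    if resource == "crm":
        allowed = ctx.get("mfa_passed") and ctx.get("network") == "office"
        return {"access": bool(allowed), "download": False}   # data never leaves the enclave
    return {"access": False, "download": False}       # default deny

print(evaluate("crm", {"mfa_passed": True, "network": "office"}))
print(evaluate("crm", {"mfa_passed": True, "network": "home"}))
```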


Beyond Culture: Addressing Common Security Frustrations

A majority of security respondents (58%) said they have difficulty getting development to prioritize remediation of vulnerabilities, and 52% reported that red tape often slows their efforts to quickly fix vulnerabilities. In addition, security respondents pointed to several specific frustrations related to their jobs, including difficulty understanding security findings, excessive false positives and testing happening late in the software development process. ... If an organization sees many false positives, that could be a sign that they haven’t done all they can to ensure their security findings are high fidelity. Organizations should narrow the focus of their security efforts to what matters. That means traditional static application security testing (SAST) solutions are likely insufficient. SAST is a powerful tool, but it loses much of its value if the results are unmanageable or lack appropriate context. ... Although AI promises to help simplify software development processes, many organizations still have a long road ahead. In fact, respondents who are using AI were significantly more likely than those not using AI to want to consolidate their toolchain, suggesting that the proliferation of different point solutions running different AI models could be adding complexity, not taking it away.


Significant Gap Exists in UK Cyber Resilience Efforts

A persistent lack of skilled cybersecurity professionals in the civil service is one reason for the persistent gap in resilience, parliamentarians wrote. "Government has been unwilling to pay the salaries necessary to hire the experienced and skilled people it desperately needs to manage its cybersecurity effectively." Government figures show the workforce has grown and there are plans to recruit more experts - but a third of cybersecurity roles are either vacant "or filled by expensive contractors," the report states. "Experience suggests government will need to be realistic about how many of the best people it can recruit and retain." The report also faults government departments for not taking sufficient ownership over cybersecurity. The prime minister's office for years relied on departments to perform a cybersecurity self-assessment, until in 2023 when it launched GovAssure, a program to bring in independent assessors. GovAssure turned the self-assessments on their head, finding that the departments that ranked themselves the highest through self-assessment were among the less secure. Continued reliance on legacy systems have figured heavily in recent critiques of British government IT, and it does in the parliamentary report, as well. "It is unacceptable that the center of government does not know how many legacy IT systems exist in government and therefore cannot manage the associated cyber risks."


How CIOs Can Boost AI Returns With Smart Partnerships

CIOs face an overwhelming array of possibilities, making prioritization critical. The CIO Playbook 2025 helps by benchmarking priorities across markets and disciplines. Despite vast datasets, data challenges persist as only a small, relevant portion is usable after cleansing. Generative AI helps uncover correlations humans might miss, but its outputs require rigorous validation for practical use. Static budgets, growing demands and a shortage of skilled talent further complicate adoption. Unlike traditional IT, AI affects sales, marketing and customer service, necessitating cross-departmental collaboration. For example, Lenovo's AI unifies customer service channels such as email and WhatsApp, creating seamless interactions. ... First, go slow to go fast. Spend days or months - not years - exploring innovations through POCs. A customer who builds his or her own LLM faces pitfalls; using existing solutions is often smarter. Second, prioritize cross-collaboration, both internally across departments and externally with the ecosystem. Even Lenovo, operating in 180 markets, relies on partnerships to address AI's layers - the cloud, models, data, infrastructure and services. Third, target high-ROI functions such as customer service, where CIOs expect a 3.6-fold return, to build boardroom support for broader adoption.


How to Stop Increasingly Dangerous AI-Generated Phishing Scams

With so many avenues of attack being used by phishing scammers, you need constant vigilance. AI-powered detection platforms can simultaneously analyze message content, links, and user behavior patterns. Combined with sophisticated pattern recognition and anomaly identification techniques, these systems can spot phishing attempts that would bypass traditional signature-based approaches. ... Security awareness programs have progressed from basic modules to dynamic, AI-driven phishing simulations reflecting real-world scenarios. These simulations adapt to participant responses, providing customized feedback and improving overall effectiveness. Exposing team members to various sophisticated phishing techniques in controlled environments better prepares them for the unpredictable nature of AI-powered attacks. AI-enhanced incident response represents another promising development. AI systems can quickly determine an attack's scope and impact by automating phishing incident analysis, allowing security teams to respond more efficiently and effectively. This automation not only reduces response time but also helps prevent attacks from spreading by rapidly isolating compromised systems. 


Immutable Secrets Management: A Zero-Trust Approach to Sensitive Data in Containers

We address the critical vulnerabilities inherent in traditional secrets management practices, which often rely on mutable secrets and implicit trust. Our solution, grounded in the principles of Zero-Trust security, immutability, and DevSecOps, ensures that secrets are inextricably linked to container images, minimizing the risk of exposure and unauthorized access. We introduce ChaosSecOps, a novel concept that combines Chaos Engineering with DevSecOps, specifically focusing on proactively testing and improving the resilience of secrets management systems. Through a detailed, real-world implementation scenario using AWS services and common DevOps tools, we demonstrate the practical application and tangible benefits of this approach. The e-commerce platform case study showcases how immutable secrets management leads to improved security posture, enhanced compliance, faster time-to-market, reduced downtime, and increased developer productivity. Key metrics demonstrate a significant reduction in secrets-related incidents and faster deployment times. The solution directly addresses all criteria outlined for the Global Tech Awards in the DevOps Technology category, highlighting innovation, collaboration, scalability, continuous improvement, automation, cultural transformation, measurable outcomes, technical excellence, and community contribution.
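
One common building block for this kind of pattern, sketched below under stated assumptions, is resolving secrets at container start from AWS Secrets Manager so the image carries only an immutable reference to the secret rather than the value itself. The secret identifier and environment variables are illustrative; the full approach described adds governance controls and chaos testing on top.

```python
# Hedged sketch: fetch a secret at container start so the image holds only a
# reference (the secret ID), never the secret value. Names are illustrative.
import os
import boto3

def load_secret(secret_id: str) -> str:
    client = boto3.client(
        "secretsmanager", region_name=os.getenv("AWS_REGION", "us-east-1")
    )
    return client.get_secret_value(SecretId=secret_id)["SecretString"]

if __name__ == "__main__":
    # DB_SECRET_ID is baked into the immutable image config; the value is fetched at runtime.
    db_password = load_secret(os.environ["DB_SECRET_ID"])
```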


The Network Impact of Cloud Security and Operations

Network security and monitoring also change. With cloud-based networks, the network staff no longer has all its management software under its direct control. It now must work with its various cloud providers on security. In this environment, some small company network staff opt to outsource security and network management to their cloud providers. Larger companies that want more direct control might prefer to upskill their network staff on the different security and configuration toolsets that each cloud provider makes available. ... The move of applications and systems to more cloud services is in part fueled by the growth of citizen IT. This is when end users in departments have mini IT budgets and subscribe to new IT cloud services, of which IT and network groups aren't always aware. This creates potential security vulnerabilities, and it forces more network groups to segment networks into smaller units for greater control. They should also implement zero-trust networks that can immediately detect any IT resource, such as a cloud service, that a user adds, subtracts or changes on the network. ... Network managers are also discovering that they need to rewrite their disaster recovery plans for cloud. The strategies and operations that were developed for the internal network are still relevant. 


Three steps to integrate quantum computing into your data center or HPC facility

Just as QPU hardware has yet to become commoditized, the quantum computing stack remains in development, with relatively little consistency in how machines are accessed and programmed. Savvy buyers will have an informed opinion on how to leverage software abstraction to accomplish their key goals. With the right software abstractions, you can begin to transform quantum processors from fragile, research-grade tools into reliable infrastructure for solving real-world problems. Here are three critical layers of abstraction that make this possible. First, there’s hardware management. Quantum devices need constant tuning to stay in working shape, and achieving that manually takes serious time and expertise. Intelligent autonomy provided by specialist vendors can now handle the heavy lifting – booting, calibrating, and keeping things stable – without someone standing by to babysit the machine. Then there’s workload execution. Running a program on a quantum computer isn’t just plug-and-play. You usually have to translate your high-level algorithm into something that works with the quirks of the specific QPU being used, and address errors along the way. Now, software can take care of that translation and optimization behind the scenes, so users can just focus on building quantum algorithms and workloads that address key research or business needs.


Where Apple falls short for enterprise IT

First, enterprise tools in many ways could be considered a niche area of software. As a result, enterprise functionality doesn’t get the same attention as more mainstream features. This can be especially obvious when Apple tries to bring consumer features into enterprise use cases — like managed Apple Accounts and their intended integration with things like Continuity and iCloud, for example — and things like MDM controls for new features such as Apple Intelligence and low-level enterprise-specific functions like Declarative Device Management. The second reason is obvious: any piece of software that isn’t ready for prime time — and still makes it into a general release — is a potential support ticket when a business user encounters problems. ... Deployment might be where the lack of automation is clearest, but the issue runs through most aspects of Apple device and user onboarding and management. Apple Business Manager doesn’t offer any APIs that vendors or IT departments can tap into to automate routine tasks. This can be anything from redeploying older devices, onboarding new employees, assigning app licenses or managing user groups and privileges. Although Apple Business Manager is a great tool and it functions as a nexus for device management and identity management, it still requires more manual lifting than it should.


Getting Started with Data Quality

Any process to establish or update a DQ program charter must be adaptable. For example, a specific project management team or a local office could start the initial DQ offering. As other teams see the program’s value, they would show initiative. In the meantime, the charter tenets change to meet the situation. So, any DQ charter documentation must have the flexibility to transform into what is currently needed. Companies must keep track of any charter amendments or additions to provide transparency and accountability. Expect that various teams will have overlapping or conflicting needs in a DQ program. These people will need to work together to find a solution. They will need to know the discussion rules to consistently advocate for the DQ they need and express their challenges. Ambiguity will heighten dissent. So, charter discussions and documentation must come from a well-defined methodology. As the white paper notes, clarity, consistency, and alignment sit at the charter’s core. While getting there can seem challenging, an expertly structured charter template can prompt critical information to show the way. ... The best practices documented by the charter stem from clarity, consistency, and alignment. They need to cover the DQ objectives mentioned above and ground DQ discussions.

Daily Tech Digest - May 01, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden



Bridging the IT and security team divide for effective incident response

One reason IT and security teams end up siloed is the healthy competitiveness that often exists between them. IT wants to innovate, while security wants to lock things down. These teams are made up of brilliant minds. However, faced with the pressure of a crisis, they might hesitate to admit they feel out of control, simmering issues may come to a head, or they may become so fixated on solving the issue that they fail to update others. To build an effective incident response strategy, identifying a shared vision is essential. Here, leadership should host joint workshops where teams learn more about each other and share ideas about embedding security into system architecture. These sessions should also simulate real-world crises, so that each team knows how its role intersects with the others during a high-pressure situation and feels comfortable when an actual crisis arises. ... By simulating realistic scenarios – whether it’s ransomware incidents or malware attacks – those in leadership positions can directly test and measure the incident response plan so that it becomes an ingrained process. Throw in curveballs when needed, and use these exercises to identify gaps in processes, tools, or communication. There’s a world of issues to uncover: disconnected tools and systems; a lack of automation that could speed up response times; and excessive documentation requirements.


First Principles in Foundation Model Development

The mapping of words and concepts into high-dimensional vectors captures semantic relationships in a continuous space. Words with similar meanings or that frequently appear in similar contexts are positioned closer to each other in this vector space. This allows the model to understand analogies and subtle nuances in language. The emergence of semantic meaning from co-occurrence patterns highlights the statistical nature of this learning process. Hierarchical knowledge structures, such as the understanding that “dog” is a type of “animal,” which is a type of “living being,” develop organically as the model identifies recurring statistical relationships across vast amounts of text. ... The self-attention mechanism represents a significant architectural innovation. Unlike recurrent neural networks that process sequences sequentially, self-attention allows the model to consider all parts of the input sequence simultaneously when processing each word. The “dynamic weighting of contextual relevance” means that for any given word in the input, the model can attend more strongly to other words that are particularly relevant to its meaning in that specific context. This ability to capture long-range dependencies is critical for understanding complex language structures. The parallel processing capability significantly speeds up training and inference. 
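
To ground the self-attention description, the following is a minimal single-head, scaled dot-product attention sketch in NumPy; it is a simplified illustration, not the architecture of any particular foundation model.

```python
# Minimal single-head scaled dot-product self-attention in NumPy.
# Shapes and values are illustrative; real models use learned projections,
# multiple heads, and much larger dimensions.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Every position attends to every other position simultaneously.
    scores = Q @ K.T / np.sqrt(d_k)        # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)     # dynamic weighting of contextual relevance
    return weights @ V                     # context-mixed representations

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```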


The best preparation for a password-less future is to start living there now

One of the big ideas behind passkeys is to keep us users from behaving as our own worst enemies. For nearly two decades, malicious actors -- mainly phishers and smishers -- have been tricking us into giving them our passwords. You'd think we would have learned how to detect and avoid these scams by now. But we haven't, and the damage is ongoing. ... But let's be clear: Passkeys are not passwords. If we're getting rid of passwords, shouldn't we also get rid of the phrase "password manager?" Note that there are two primary types of credential managers. The first is the built-in credential manager. These are the ones from Apple, Google, Microsoft, and some browser makers, built into our platforms and browsers, including Windows, Edge, macOS, Android, and Chrome. With passkeys, if you don't bring your own credential manager, you'll likely end up using one of these. ... The FIDO Alliance defines a "roaming authenticator" as a separate device to which your passkeys can be securely saved and recalled. Examples are hardware security keys (e.g., Yubico's YubiKey) and recent Android phones and tablets, which can act as a hardware security key. Since your credentials to your credential manager are literally the keys to your entire kingdom, they deserve some extra special security.
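
The phishing resistance behind passkeys comes from public-key challenge-response rather than shared secrets. The sketch below, using the Python cryptography package, illustrates only that underlying principle; it is not the WebAuthn/FIDO2 protocol, and all names are illustrative.

```python
# Conceptual illustration of the challenge-response idea behind passkeys,
# using the 'cryptography' package. This is NOT the WebAuthn/FIDO2 wire
# protocol, just the underlying public-key principle: no shared secret is
# ever transmitted, so there is nothing reusable for a phisher to steal.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator creates a key pair; only the public key
# is handed to the relying party (the website).
private_key = ec.generate_private_key(ec.SECP256R1())  # stays on the device
public_key = private_key.public_key()                  # stored by the server

# Authentication: the server issues a random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it locally (after a biometric/PIN check)...
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature with the stored public key.
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge verified; no password or shared secret was exchanged")
```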


Mind the Gap: Assessing Data Quality Readiness

Data Quality Readiness is defined as the ratio of the number of fully described Data Quality Measure Elements that are being calculated and/or collected to the number of Data Quality Measure Elements in the desired set of Data Quality Measures. By fully described I mean both the “number of data values” part and the “that are outliers” part. The first prerequisite activity is determining which Quality Measures you want to implement. The ISO standard defines 15 different Data Quality Characteristics. I covered those last time. The Data Quality Characteristics are made up of 63 Quality Measures. The Quality Measures are categorized as Highly Recommendable (19), Recommendable (36), and For Reference (8). This provides a starting point for prioritization. Begin with a few measures that are most applicable to your organization and that will have the greatest potential to improve the quality of your data. The reusability of the Quality Measures can factor into the decision, but it shouldn’t be the primary driver. The objective is not merely to collect information for its own sake, but to use that information to generate value for the enterprise. The result will be a set of Data Quality Measure Elements to collect and calculate. You do the ones that are best for you, but I would recommend looking at two in particular.
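
Because the readiness metric is simply a ratio, it is easy to compute and track. Below is a minimal sketch, with hypothetical element names standing in for the ISO-defined Data Quality Measure Elements.

```python
# Minimal sketch of the Data Quality Readiness ratio: fully described
# elements already being collected/calculated, divided by the elements
# required by the desired set of Data Quality Measures.
# Element names below are hypothetical placeholders.
desired_elements = {
    "number_of_data_values",
    "number_of_data_values_that_are_outliers",
    "number_of_null_values",
    "number_of_records_checked",
}
fully_described_and_collected = {
    "number_of_data_values",
    "number_of_records_checked",
}

readiness = len(fully_described_and_collected & desired_elements) / len(desired_elements)
print(f"Data Quality Readiness: {readiness:.0%}")  # 50% here; the rest is the gap to close
```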


Why non-human identity security is the next big challenge in cybersecurity

What makes this particularly challenging is that each of these identities requires access to sensitive resources and carries potential security risks. Unlike human users, who follow predictable patterns and can be managed through traditional IAM solutions, non-human identities operate 24/7, often with elevated privileges, making them attractive targets for attackers. ... We’re witnessing a paradigm shift in how we need to think about identity security. Traditional security models were built around human users – focusing on aspects like authentication, authorisation and access management from a human-centric perspective. But this approach is inadequate for the machine-dominated future we’re entering. Organisations need to adopt a comprehensive governance framework specifically designed for non-human identities. This means implementing automated discovery and classification of all machine identities and their secrets, establishing centralised visibility and control and enforcing consistent security policies across all platforms and environments. ... First, organisations need to gain visibility into their non-human identity landscape. This means conducting a thorough inventory of all machine identities and their secrets, their access patterns and their risk profiles.
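
As a rough sketch of what automated discovery and classification might produce, the following models a machine-identity inventory with simple risk flags; the fields, names, and thresholds are illustrative assumptions rather than any vendor's schema.

```python
# Illustrative sketch of a machine-identity inventory with simple risk
# classification. Field names and rules are assumptions for illustration,
# not a particular product's data model.
from dataclasses import dataclass
from datetime import date

@dataclass
class MachineIdentity:
    name: str
    kind: str                  # e.g. "service_account", "api_key", "workload"
    owner: str | None          # accountable human or team, if assigned
    privileged: bool
    secret_last_rotated: date

def risk_flags(identity: MachineIdentity, today: date, max_secret_age_days: int = 90):
    flags = []
    if identity.owner is None:
        flags.append("no assigned owner")
    if identity.privileged:
        flags.append("elevated privileges")
    if (today - identity.secret_last_rotated).days > max_secret_age_days:
        flags.append("stale secret")
    return flags

inventory = [
    MachineIdentity("ci-deploy-bot", "service_account", None, True, date(2024, 1, 10)),
    MachineIdentity("billing-api-key", "api_key", "payments-team", False, date(2025, 3, 2)),
]
for identity in inventory:
    print(identity.name, "->", risk_flags(identity, date(2025, 5, 1)) or ["ok"])
```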


Preparing for the next wave of machine identity growth

First, let’s talk about the problem of ownership. Even organizations that have conducted a thorough inventory of the machine identities in their environments often lack a clear understanding of who is responsible for managing those identities. In fact, 75% of the organizations we surveyed indicated that they don’t have assigned ownership for individual machine identities. That’s a real problem—especially since poor (or insufficient) governance practices significantly increase the likelihood of compromised access, data loss, and other negative outcomes. Another critical blind spot is around understanding what data each machine identity can or should be able to access—and just as importantly, what it cannot and should not access. Without clarity, it becomes nearly impossible to enforce proper security controls, limit unnecessary exposure, or maintain compliance. Each machine identity is a potential access point to sensitive data and critical systems. Failing to define and control their access scope opens the door to serious risk. Addressing the issue starts with putting a comprehensive machine identity security solution in place—ideally one that lets organizations govern machine identities just as they do human identities. Automation plays a critical role: with so many identities to secure, a solution that can discover, classify, assign ownership, certify, and manage the full lifecycle of machine identities significantly streamlines the process.
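
A hedged sketch of what defining and controlling access scope can look like: each machine identity gets an explicit owner and an allowlist of resources, and anything outside that scope is denied by default. All identifiers here are hypothetical.

```python
# Illustrative sketch: explicit access scopes per machine identity, with
# default-deny for anything not declared. Identity and resource names are
# hypothetical; real enforcement would live in an IAM/secrets platform.
ACCESS_SCOPES = {
    "ci-deploy-bot":    {"owner": "platform-team", "allowed": {"artifact-registry", "staging-cluster"}},
    "report-generator": {"owner": "finance-team",  "allowed": {"warehouse-readonly"}},
}

def authorize(identity: str, resource: str) -> bool:
    scope = ACCESS_SCOPES.get(identity)
    if scope is None:
        print(f"DENY {identity} -> {resource}: unknown identity (no owner, no declared scope)")
        return False
    if resource not in scope["allowed"]:
        print(f"DENY {identity} -> {resource}: outside declared scope (owner: {scope['owner']})")
        return False
    return True

authorize("ci-deploy-bot", "staging-cluster")      # allowed
authorize("ci-deploy-bot", "customer-database")    # denied: out of scope
authorize("legacy-cron-job", "customer-database")  # denied: never inventoried
```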


To Compete, Banking Tech Needs to Be Extensible. A Flexible Platform is Key

The banking ecosystem includes three broad stages along the trajectory toward extensibility, according to Ryan Siebecker, a forward-deployed engineer at Narmi, a banking software firm. These include closed, non-extensible systems — typically legacy cores with proprietary software that doesn’t easily connect to third-party apps; systems that allow limited, custom integrations; and open, extensible systems that allow API-based connectivity to third-party apps. ... The route to extensibility can be enabled through an internally built, custom middleware system, or institutions can work with outside vendors, such as Narmi, whose systems operate in parallel with core systems. Michigan State University Federal Credit Union, which began its journey toward extensibility in 2009, pursued an independent route by building in-house middleware infrastructure to allow API connectivity to third-party apps. Building in-house made sense given the early rollout of extensible capabilities, but when developing a toolset internally, institutions need to consider appropriate staffing levels — a commitment not all community banks and credit unions can make. For MSUFCU, the benefit was greater customization, according to the credit union’s chief technology officer, Benjamin Maxim. "With the timing that we started, we had to do it all ourselves," he says, noting that it took about 40 team members to build a middleware system to support extensibility.
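
As a rough illustration of the middleware pattern described above, the sketch below exposes a small REST endpoint that fronts a core-banking lookup. Flask, the route, and the function names are assumptions for illustration, not Narmi's or MSUFCU's actual stack.

```python
# Minimal sketch of an extensibility middleware layer: third-party apps call
# a stable REST API, and the middleware translates to whatever the legacy
# core actually speaks. Flask and all names here are illustrative assumptions.
from flask import Flask, jsonify, abort

app = Flask(__name__)

def fetch_balance_from_core(account_id: str) -> dict | None:
    """Placeholder for the proprietary core-banking call (batch file, SOAP,
    or vendor SDK, whatever the legacy system requires)."""
    fake_core = {"12345": {"account_id": "12345", "balance_cents": 1050042}}
    return fake_core.get(account_id)

@app.get("/api/v1/accounts/<account_id>/balance")
def get_balance(account_id: str):
    record = fetch_balance_from_core(account_id)
    if record is None:
        abort(404)
    # Third parties integrate against this stable JSON contract, not the core.
    return jsonify(record)

if __name__ == "__main__":
    app.run(port=8080)
```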


5 Strategies for Securing and Scaling Streaming Data in the AI Era

Streaming data should never be wide open within the enterprise. Least-privilege access controls, enforced through role-based (RBAC) or attribute-based (ABAC) access control models, limit each user or application to only what’s essential. Fine-grained access control lists (ACLs) add another layer of protection, restricting read/write access to only the necessary topics or channels. Combine these controls with multifactor authentication, and even a compromised credential is unlikely to give attackers meaningful reach. ... Virtual private cloud (VPC) peering and private network setups are essential for enterprises that want to keep streaming data secure in transit. These configurations ensure data never touches the public internet, thus eliminating exposure to distributed denial of service (DDoS), man-in-the-middle attacks and external reconnaissance. Beyond security, private networking improves performance. It reduces jitter and latency, which is critical for applications that rely on subsecond delivery or AI model responsiveness. While VPC peering takes thoughtful setup, the benefits in reliability and protection are well worth the investment. ... Just as importantly, security needs to be embedded into culture. Enterprises that regularly train their employees on privacy and data protection tend to identify issues earlier and recover faster.
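
Below is a minimal sketch of topic-level least privilege for a streaming platform. This is generic policy logic written in Python for illustration, not Kafka's or any broker's real ACL API; in production these rules live in the broker or a policy engine.

```python
# Illustrative least-privilege check for streaming topics: each principal is
# granted only the operations it needs on specific topics (default deny).
# This mirrors RBAC/ACL concepts generically; it is not a broker's real API.
ACLS = {
    # (principal,            topic):             operations
    ("svc-fraud-scorer",   "payments.events"):  {"read"},
    ("svc-payments-api",   "payments.events"):  {"write"},
    ("svc-audit-exporter", "audit.logs"):       {"read"},
}

def is_allowed(principal: str, topic: str, operation: str) -> bool:
    return operation in ACLS.get((principal, topic), set())

requests = [
    ("svc-fraud-scorer", "payments.events", "read"),   # allowed
    ("svc-fraud-scorer", "payments.events", "write"),  # denied
    ("svc-payments-api", "audit.logs", "read"),        # denied
]
for principal, topic, op in requests:
    verdict = "ALLOW" if is_allowed(principal, topic, op) else "DENY"
    print(f"{verdict}: {principal} {op} {topic}")
```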


Supply Chain Cybersecurity – CISO Risk Management Guide

Modern supply chains often span continents and involve hundreds or even thousands of third-party vendors, each with its own security posture and vulnerabilities. Attackers have recognized that breaching a less secure supplier can be the easiest way to compromise a well-defended target. Recent high-profile incidents have shown that supply chain attacks can lead to data breaches, operational disruptions, and significant financial losses. The interconnectedness of digital systems means that a single compromised vendor can have a cascading effect, impacting multiple organizations downstream. For CISOs, this means that traditional perimeter-based security is no longer sufficient. Instead, they must take a holistic approach that considers every entity with access to critical systems or data as a potential risk vector. ... Building a secure supply chain is not a one-time project—it’s an ongoing journey that demands leadership, collaboration, and adaptability. CISOs must position themselves as business enablers, guiding the organization to view cybersecurity not as a barrier but as a competitive advantage. This starts with embedding cybersecurity considerations into every stage of the supplier lifecycle, from onboarding to offboarding. Leadership engagement is crucial: CISOs should regularly brief the executive team and board on supply chain risks, translating technical findings into business impacts such as potential downtime, reputational damage, or regulatory penalties.


Developers Must Slay the Complexity and Security Issues of AI Coding Tools

Beyond adding further complexity to the codebase, AI models also lack the contextual nuance that is often necessary for creating high-quality, secure code, particularly when used by developers who lack security knowledge. As a result, vulnerabilities and other flaws are being introduced at a pace never before seen. The current software environment has grown out of control security-wise, and it shows no signs of slowing down. But there is hope for slaying these twin dragons of complexity and insecurity. Organizations must step into the dragon’s lair armed with strong developer risk management, backed by education and upskilling that gives developers the tools they need to bring software under control. ... AI tools increase the speed of code delivery, enhancing efficiency in raw production, but those early productivity gains are being overwhelmed by code maintainability issues later in the SDLC. The answer is to address those issues at the beginning, before they put applications and data at risk. ... Organizations involved in software creation need to change their culture, adopting a security-first mindset in which secure software is seen not just as a technical issue but as a business priority. Persistent attacks and high-profile data breaches have become too common for boardrooms and CEOs to ignore.

Daily Tech Digest - April 19, 2025


Quote for the day:

"Good things come to people who wait, but better things come to those who go out and get them." -- Anonymous



AI Agents Are Coming to Work: Are Organizations Equipped?

The promise of agentic AI is already evident in organizations adopting it. Fiserv, the global fintech powerhouse, developed an agentic AI application that autonomously assigns merchant codes to businesses, reducing human intervention to under 1%. Sharbel Shaaya, director of AI operations and intelligent automation at Fiserv, said, "Tomorrow's agentic systems will handle this groundwork natively, amplifying their value." In the automotive world, Ford Motor Company is using agentic AI to amplify car design. Bryan Goodman, director of AI at Ford Motor Company, said, "Traditionally, Ford's designers sculpt physical clay models, a time-consuming process followed by lengthy engineering simulations. One computational fluid dynamics run used to take 15 hours; an AI model now predicts the outcome in 10 seconds." ... In regulated industries, compliance adds complexity. Ramnik Bajaj, chief data and analytics officer at United Services Automobile Association, sees agentic AI interpreting requests in insurance but insists on human oversight for tasks such as claims adjudication. "Regulatory constraints demand a human in the loop," Bajaj said. Trust is another hurdle - 61% of organizations cite concerns about errors, bias and data quality. "Scaling AI requires robust governance. Without trust, pilots stay pilots," Sarker said.


Code, cloud, and culture: The tech blueprint transforming Indian workplaces

The shift to hybrid cloud infrastructure is enabling Indian enterprises to modernise their legacy systems while scaling with agility. According to a report by EY India, 90% of Indian businesses believe that cloud transformation is accelerating their AI initiatives. Hybrid cloud environments—which blend on-premise infrastructure with public and private cloud—are becoming the default architecture for industries like banking, insurance, and manufacturing. HDFC Bank, for example, has adopted a hybrid cloud model to offer hyper-personalised customer services and real-time transaction capabilities. This digital core is helping financial institutions respond faster to market changes while maintaining strict regulatory compliance. ... No technological transformation is complete without human capability. The demand for AI-skilled professionals in India has grown 14x between 2016 and 2023, and the country is expected to need over one million AI professionals by 2026. Companies are responding with aggressive reskilling strategies. ... The strategic convergence of AI, SaaS, cloud, and human capital is rewriting the rules of productivity, innovation, and global competitiveness. With forward-looking investments, grassroots upskilling efforts, and a vibrant startup culture, India is poised to define the future of work, not just for itself, but for the world.


Bridging the Gap Between Legacy Infrastructure and AI-Optimized Data Centers

Failure to modernize legacy infrastructure isn’t just a technical hurdle; it’s a strategic risk. Outdated systems increase operational costs, limit scalability, and create inefficiencies that hinder innovation. However, fully replacing existing infrastructure is rarely a practical or cost-effective solution. The path forward lies in a phased approach – modernizing legacy systems incrementally while introducing AI-optimized environments capable of meeting future demands. ... AI’s relentless demand for compute power requires a more diversified and resilient approach to energy sourcing. While small modular reactors (SMRs) present a promising future solution for scalable, reliable, and low-carbon power generation, they are not yet equipped to serve critical loads in the near term. Consequently, many operators are prioritizing behind-the-meter (BTM) generation, primarily gas-focused solutions, with the potential to implement combined cycle technologies that capture and repurpose steam for additional energy efficiency. ... The future of AI-optimized data centers lies in adaptation, not replacement. Substituting legacy infrastructure on a large scale is prohibitively expensive and disruptive. Instead, a hybrid approach – layering AI-optimized environments alongside existing systems while incrementally retrofitting older infrastructure – provides a more pragmatic path forward.


Why a Culture of Observability Is Key to Technology Success

A successful observability strategy requires fostering a culture of shared responsibility for observability across all teams. By embedding observability throughout the software development life cycle, organizations create a proactive environment where issues are detected and resolved early. This will require observability buy-in across all teams within the organization. ... Teams that prioritize observability gain deeper insights into system performance and user experiences, resulting in faster incident resolution and improved service delivery. Promoting an organizational mindset that values transparency and continuous monitoring is key. ... Shifting observability left into the development process helps teams catch issues earlier, reducing the cost of fixing bugs and enhancing product quality. Developers can integrate observability into code from the outset, ensuring systems are instrumented and monitored at every stage. This is a key step toward the establishment of a culture of observability. ... A big part is making sure that all the stakeholders across the organization, whether high or low in the org chart, understand what’s going on. This means taking feedback. Leadership needs to be involved. This means communicating what you are doing, why you are doing it and what the implications are of doing or not doing it.
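
Below is a minimal sketch of what "instrumented at every stage" can look like, using the OpenTelemetry Python SDK with a console exporter. The service and span names are illustrative, and a real deployment would export to a collector or observability backend instead.

```python
# Minimal sketch of instrumenting code with OpenTelemetry (Python SDK),
# exporting spans to the console. Span and service names are illustrative;
# production setups ship spans to a collector or observability backend.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

def process_order(order_id: str) -> None:
    # Developers add spans as they write the feature, not after an incident.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge_payment"):
            pass  # business logic would go here
        with tracer.start_as_current_span("reserve_inventory"):
            pass

process_order("ord-1001")
```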


Why Agile Software Development is the Future of Engineering

Implementing iterative processes can significantly decrease time-to-market for projects. Statistics show that organizations using adaptive methodologies can increase their release frequency by up to 25%. This approach enables teams to respond promptly to market changes and customer feedback, leading to improved alignment with user expectations. Collaboration among cross-functional teams enhances productivity. In environments that prioritize teamwork, 85% of participants report higher engagement levels, which directly correlates with output quality. Structured daily check-ins allow for quick problem resolution, keeping projects on track and minimizing delays. Frequent iteration facilitates continuous testing and integration, which reduces errors early in the process. According to industry data, teams that deploy in short cycles experience up to 50% fewer defects compared to traditional methodologies. This not only expedites delivery but also enhances the overall reliability of the product. The focus on customer involvement significantly impacts product relevance. Engaging clients throughout the development process can lead to a 70% increase in user satisfaction, as adjustments are made in real time. Clients appreciate seeing their feedback implemented quickly, fostering a sense of ownership over the final product.


Why Risk Management Is Key to Sustainable Business Growth

Recent bank collapses demonstrate that a lack of effective risk management strategies can have serious consequences for financial institutions, their customers, and the economy. A comprehensive risk management strategy helps banks protect assets and customers and guard against larger economic problems. ... Risk management is heavily driven by data analytics and identifying patterns in historical data. Predictive models and machine learning can forecast financial losses and detect risks and customer fraud. Additionally, banks can use predictive analytics for proactive decision-making. Data accuracy is of the utmost importance in this case because analysts use that information to make decisions about investments, customer loans, and more. Some banks rely on artificial intelligence (AI) to help detect customer defaults in more dynamic ways. For example, AI can be trained on cross-domain data to better understand customer behavior, or it can make real-time decisions by incorporating real-time changes in market data. It also improves the customer experience by offering answers through highly trained chatbots, thereby increasing customer satisfaction and reducing reputation risk. Enterprises are training generative AI (GenAI) to be virtual regulatory and policy experts that answer questions about regulations, company policies, and guidelines.
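
To illustrate the kind of pattern-based detection described above, here is a hedged sketch using scikit-learn's IsolationForest on synthetic transaction features; real fraud and risk models require far richer data, labeled history, and model governance.

```python
# Illustrative anomaly detection on synthetic "transaction" features using
# scikit-learn's IsolationForest. Features and thresholds are made up; real
# fraud/risk models need richer data, validation, and ongoing governance.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Normal activity: [amount_in_dollars, transactions_in_last_hour]
normal = np.column_stack([rng.normal(60, 20, 500), rng.poisson(2, 500)])
# A few suspicious bursts: very large amounts at high frequency.
suspicious = np.array([[2500, 14], [1800, 9], [3200, 20]])
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)  # -1 = flagged as anomalous, 1 = normal

flagged = X[labels == -1]
print(f"flagged {len(flagged)} of {len(X)} transactions for review")
```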


How U.S. tariffs could impact cloud computing

Major cloud providers are commonly referred to as hyperscalers, and include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. They initially may absorb rising cloud costs to avoid risking market share by passing them on to customers. However, tariffs on hardware components such as servers and networking equipment will likely force them to reconsider their financial models, which means enterprises can expect eventual, if not immediate, price increases. ... As hyperscalers adapt to increased costs by exploring nearshoring or regional manufacturing, these shifts may permanently change cloud pricing dynamics. Enterprises that rely on public cloud services may need to plan for contract renegotiations and higher costs in the coming years, particularly as hardware supply chains remain volatile. The financial strain imposed by tariffs also has a ripple effect, indirectly affecting cloud adoption rates. ... Adaptability and agility remain essential for both providers and enterprises. For cloud vendors, resilience in the supply chain and efficiency in hardware will be critical. Meanwhile, enterprise leaders must balance cost containment with their broader strategic goals for digital growth. By implementing thoughtful planning and proactive strategies, organizations can navigate these challenges and continue to derive value from the cloud in the years ahead.


CIOs must mind their own data confidence gap

A lack of good data can lead to several problems, says Aidora’s Agarwal. C-level executives — even CIOs — may demand that new products be built when the data isn’t ready, leading to IT leaders who look incompetent because they repeatedly push back on timelines, or to those who pass the burden down to their employees. “The teams may get pushed on to build the next set of things that they may not be ready to build,” he says. “This can result in failed initiatives, significantly delayed delivery, or burned-out teams.” To fix this data quality confidence gap, companies should focus on being more transparent across their org charts, Palaniappan advises. Lower-level IT leaders can help CIOs and the C-suite understand their organization’s data readiness needs by creating detailed roadmaps for IT initiatives, including a timeline to fix data problems, he says. “Take a ‘crawl, walk, run’ approach to drive this in the right direction, and put out a roadmap,” he says. “Look at your data maturity in order to execute your roadmap, and then slowly improve upon it.” Companies need strong data foundations, including data strategies focused on business cases, data accessibility, and data security, adds Softserve’s Myronov. Organizations should also employ skeptics to point out potential data problems during AI and other data-driven projects, he suggests.


AI has grown beyond human knowledge, says Google's DeepMind unit

Not only is human judgment an impediment, but the short, clipped nature of prompt interactions never allows the AI model to advance beyond question and answer. "In the era of human data, language-based AI has largely focused on short interaction episodes: e.g., a user asks a question and the agent responds," the researchers write. "The agent aims exclusively for outcomes within the current episode, such as directly answering a user's question." There's no memory, there's no continuity between snippets of interaction in prompting. "Typically, little or no information carries over from one episode to the next, precluding any adaptation over time," write Silver and Sutton. However, in their proposed Age of Experience, "Agents will inhabit streams of experience, rather than short snippets of interaction." Silver and Sutton draw an analogy between streams and humans learning over a lifetime of accumulated experience, and how they act based on long-range goals, not just the immediate task. ... The researchers suggest that the arrival of "thinking" or "reasoning" AI models, such as Gemini, DeepSeek's R1, and OpenAI's o1, may be surpassed by experience agents. The problem with reasoning agents is that they "imitate" human language when they produce verbose output about steps to an answer, and human thought can be limited by its embedded assumptions.


Understanding API Security: Insights from GoDaddy’s FTC Settlement

The FTC’s action against GoDaddy stemmed from the company’s inadequate security practices, which led to multiple data breaches from 2019 to 2022. These breaches exposed sensitive customer data, including usernames, passwords, and employee credentials. ... GoDaddy did not implement multi-factor authentication (MFA) and encryption, leaving customer data vulnerable. Without MFA and robust checks against credential stuffing, attackers could easily exploit stolen or weak credentials to access user accounts. Even with authentication, attackers can abuse authenticated sessions if the underlying API authorization is flawed. ... The absence of rate-limiting, logging, and anomaly detection allowed unauthorized access to 1.2 million customer records. More critically, this lack of deep inspection meant an inability to baseline normal API behavior and detect subtle reconnaissance or the exploitation of unique business logic flaws – attacks that often bypass traditional signature-based tools. ... Inadequate Access Controls: The exposure of admin credentials and encryption keys enabled attackers to compromise websites. Strong access controls are essential to restrict access to sensitive information to authorized personnel only. This highlights the risk not just of credential theft, but of authorization flaws within APIs themselves, where authenticated users gain access to data they shouldn’t.
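
To make the rate-limiting point concrete, here is a hedged sketch of a per-client token-bucket limiter. It illustrates the kind of control the FTC found missing, not GoDaddy's systems or any specific API gateway product.

```python
# Illustrative per-client token-bucket rate limiter of the kind an API
# gateway would enforce. This is a teaching sketch, not a description of
# GoDaddy's systems or any particular product.
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    capacity: float = 10.0        # burst allowance
    refill_per_sec: float = 1.0   # sustained request rate
    tokens: float = field(default=10.0)
    last_refill: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_per_sec)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # reject, and log the event for anomaly detection

buckets: dict[str, TokenBucket] = {}

def handle_request(client_id: str) -> str:
    bucket = buckets.setdefault(client_id, TokenBucket())
    return "200 OK" if bucket.allow() else "429 Too Many Requests"

# A credential-stuffing burst from one client is throttled once the burst allowance runs out.
responses = [handle_request("attacker-ip") for _ in range(15)]
print(responses.count("429 Too Many Requests"), "requests rejected")
```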