
Daily Tech Digest - July 25, 2025


Quote for the day:

"Technology changes, but leadership is about clarity, courage, and creating momentum where none exists." -- Inspired by modern digital transformation principles


Why foundational defences against ransomware matter more than the AI threat

The 2025 Cyber Security Breaches Survey paints a concerning picture. According to the study, ransomware attacks doubled between 2024 and 2025 – a surge that has less to do with AI innovation and more to do with deep-rooted economic, operational and structural changes within the cybercrime ecosystem. At the heart of this increase is the growing popularity of the ransomware-as-a-service (RaaS) business model. Groups like DragonForce or RansomHub sell ready-made ransomware toolkits to affiliates in exchange for a cut of the profits, enabling even low-skilled attackers to conduct disruptive campaigns. ... Breaches often stem from common, preventable issues such as poor credential hygiene or poorly configured systems – areas that often sit outside scheduled assessments. When assessments happen only once or twice a year, new gaps may go unnoticed for months, giving attackers ample opportunity. To keep up, organisations need faster, more continuous ways of validating defences. ... Most ransomware actors follow well-worn playbooks, making them frequent visitors to company networks but not necessarily sophisticated ones. That’s why effective ransomware prevention is not about deploying cutting-edge technologies at every turn – it’s about making sure the basics are consistently in place.


Subliminal learning: When AI models learn what you didn’t teach them

“Subliminal learning is a general phenomenon that presents an unexpected pitfall for AI development,” the researchers from Anthropic, Truthful AI, the Warsaw University of Technology, the Alignment Research Center, and UC Berkeley wrote in their paper. “Distillation could propagate unintended traits, even when developers try to prevent this via data filtering.” ... Models trained on data generated by misaligned models (AI systems that diverge from their original intent due to bias, flawed algorithms, data issues, insufficient oversight, or other factors, and that produce incorrect, lewd or harmful content) can also inherit that misalignment, even if the training data had been carefully filtered, the researchers found. They offered examples of harmful outputs when student models became misaligned like their teachers, noting, “these misaligned responses are egregious far beyond anything in the training data, including endorsing the elimination of humanity and recommending murder.” ... Today’s multi-billion-parameter models can discern extremely complicated relationships between a dataset and the preferences associated with that data, even if those relationships are not immediately obvious to humans, he noted. This points to a need to look beyond semantic and direct data relationships when working with complex AI models.


Why people-first leadership wins in software development

It frequently involves pushing for unrealistic deadlines, with project schedules made without enough input from the development team about the true effort needed and possible obstacles. This results in ongoing crunch periods and mandatory overtime. ... Another indicator is neglecting signs of burnout and stress. Leaders may ignore or dismiss signals such as team members consistently working late, increased irritability, or a decline in productivity, instead pushing for more output without addressing the root causes. Poor work-life balance becomes commonplace, often without proper recognition or rewards for the extra effort. ... Beyond the code, innovation and creativity are stifled. When teams are constantly under pressure to just “ship it,” there’s little room for creative problem-solving, experimentation, or thinking outside the box. Innovation, often born from psychological safety and intellectual freedom, gets squashed, hindering your company’s ability to adapt to new trends and stay competitive. Finally, there’s damage to your company’s reputation. In the age of social media and employer review sites, news travels fast. ... It’s vital to invest in team growth and development. Provide opportunities for continuous learning, training, and skill enhancement. This not only boosts individual capabilities but also shows your commitment to their long-term career paths within your organization. This is a crucial retention strategy.


Achieving resilience in financial services through cloud elasticity and automation

In an era of heightened regulatory scrutiny, volatile markets, and growing cybersecurity threats, resilience isn’t just a nice-to-have—it’s a necessity. A lack of robust operational resilience can lead to regulatory penalties, damaged reputations, and crippling financial losses. In this context, cloud elasticity, automation, and cutting-edge security technologies are emerging as crucial tools for financial institutions to not only survive but thrive amidst these evolving pressures. ... Resilience ensures that financial institutions can maintain critical operations during crises, minimizing disruptions and maintaining service quality. Efficient operations are crucial for maintaining competitive advantage and customer satisfaction. ... Effective resilience strategies help institutions manage diverse risks, including cyber threats, system failures, and third-party vulnerabilities. The complexity of interconnected systems and the rapid pace of technological advancement add layers of risk that are difficult to manage. ... Financial institutions are particularly susceptible to risks such as system failures, cyberattacks, and third-party vulnerabilities. ... As financial institutions navigate a landscape marked by heightened risk, evolving regulations, and increasing customer expectations, operational resilience has become a defining imperative.


Digital attack surfaces expand as key exposures & risks double

Among OT systems, the average number of exposed ports per organisation rose by 35%, with Modbus (port 502) identified as the most commonly exposed, posing risks of unauthorised commands and potential shutdowns of key devices. The exposure of Unitronics port 20256 surged by 160%. The report cites cases where attackers, such as the group "CyberAv3ngers," targeted industrial control systems during conflicts, exploiting weak or default passwords. ... The number of vulnerabilities identified on public-facing assets more than doubled, rising from three per organisation in late 2024 to seven in early 2025. Critical vulnerabilities dating as far back as 2006 and 2008 still persist on unpatched systems, with proof-of-concept code readily available online, making exploitation accessible even to attackers with limited expertise. The report also references the continued threat posed by ransomware groups who exploit such weaknesses in internet-facing devices. ... Incidents involving exposed access keys, including cloud and API keys, doubled from late 2024 to early 2025. Exposed credentials can enable threat actors to enter environments as legitimate users, bypassing perimeter defenses. The report highlights that most exposures result from accidental code pushes to public repositories or leaks on criminal forums.


How Elicitation in MCP Brings Human-in-the-Loop to AI Tools

Elicitation represents more than an incremental protocol update. It marks a shift toward collaborative AI workflows, where the system and human co-discover missing context rather than expecting all details upfront. Python developers building MCP tools can now focus on core logic and delegate parameter gathering to the protocol itself, allowing for a more streamlined approach. Clients declare an elicitation capability during initialization, so servers know they may elicit input at any time. That standardized interchange liberates developers from generating custom UIs or creating ad hoc prompts, ensuring coherent behaviour across diverse MCP clients. ... Elicitation transforms human-in-the-loop (HITL) workflows from an afterthought to a core capability. Traditional AI systems often struggle with scenarios that require human judgment, approval, or additional context. Developers had to build custom solutions for each case, leading to inconsistent experiences and significant development overhead. With elicitation, HITL patterns become natural extensions of tool functionality. A database migration tool can request confirmation before making irreversible changes. A document generation system can gather style preferences and content requirements through guided interactions. An incident response tool can collect severity assessments and stakeholder information as part of its workflow.
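To make the idea concrete, here is a minimal sketch of a tool that pauses for human confirmation before an irreversible change. It assumes a FastMCP-style server where the request context exposes an elicit() helper; exact class names, method signatures and result fields vary between MCP SDK versions, so treat this as illustrative rather than definitive.

```python
# Illustrative only: assumes a FastMCP-style Context.elicit() API; exact names
# and result fields differ across MCP SDK versions.
from pydantic import BaseModel
from mcp.server.fastmcp import FastMCP, Context

mcp = FastMCP("migration-server")

class ConfirmMigration(BaseModel):
    confirm: bool           # explicit human approval
    dry_run: bool = True    # default to the safer option

@mcp.tool()
async def migrate_database(target_version: str, ctx: Context) -> str:
    """Run a schema migration, asking the human before doing anything irreversible."""
    result = await ctx.elicit(
        message=f"Apply irreversible schema migration to {target_version}?",
        schema=ConfirmMigration,
    )
    # The client renders the prompt; the server only sees the structured answer.
    if result.action != "accept" or not result.data.confirm:
        return "Migration cancelled by the user."
    mode = "dry run" if result.data.dry_run else "live"
    return f"Migration to {target_version} executed ({mode})."
```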


Cognizant Agents Gave Hackers Passwords, Clorox Says in Lawsuit

“Cognizant was not duped by any elaborate ploy or sophisticated hacking techniques,” the company says in its partially redacted 19-page complaint. “The cybercriminal just called the Cognizant Service Desk, asked for credentials to access Clorox’s network, and Cognizant handed the credentials right over. Cognizant is on tape handing over the keys to Clorox’s corporate network to the cybercriminal – no authentication questions asked.” ... The threat actors made multiple calls to the Cognizant help desk, essentially asking for new passwords and getting them without any effort to verify them, Clorox wrote. They then used those new credentials to gain access to the corporate network, launching a “debilitating” attack that “paralyzed Clorox’s corporate network and crippled business operations. And to make matters worse, when Clorox called on Cognizant to provide incident response and disaster recovery support services, Cognizant botched its response and compounded the damage it had already caused.” In a statement to media outlets, a Cognizant spokesperson said it was “shocking that a corporation the size of Clorox had such an inept internal cybersecurity system to mitigate this attack.” While Clorox is placing the blame on Cognizant, “the reality is that Clorox hired Cognizant for a narrow scope of help desk services which Cognizant reasonably performed. Cognizant did not manage cybersecurity for Clorox,” the spokesperson said.


Digital sovereignty becomes a matter of resilience for Europe

Open-source and decentralized technologies are essential to advancing Europe’s strategic autonomy. Across cybersecurity, communications, and foundational AI, we’re seeing growing support for open-source infrastructure, now treated with the same strategic importance once reserved for energy, water and transportation. The long-term goal is becoming clear: not to sever global ties, but to reduce dependencies by building credible, European-owned alternatives to foreign-dominated systems. Open-source is a cornerstone of this effort. It empowers European developers and companies to innovate quickly and transparently, with full visibility and control, essential for trust and sovereignty. Decentralized systems complement this by increasing resilience against cyber threats, monopolistic practices and commercial overreach by “big tech”. While public investment is important, what Europe needs most is a more “risk-on” tech environment, one that rewards ambition and accelerated growth, and enables European players to scale and compete globally. Strategic autonomy won’t be achieved by funding alone, but by creating the right innovation and investment climate for open technologies to thrive. Many sovereign platforms emphasize end-to-end encryption, data residency, and open standards. Are these enough to ensure trust, or is more needed to truly protect digital independence?



Building better platforms with continuous discovery

Platform teams are often judged by stability, not creativity. Balancing discovery with uptime and reliability takes effort. So does breaking out of the “tickets and delivery” cycle to explore problems upstream. But the teams that manage it? They build platforms that people want to use, not just have to use. Start by blocking time for discovery in your sprint planning, measuring both adoption and friction metrics, and most importantly, talking to your users periodically rather than waiting for them to come to you with problems. Cultural shifts like this take time because you're not just changing the process; you're changing what people believe is acceptable or expected. That kind of change doesn't happen just because leadership says it should, or because a manager adds a new agenda to planning meetings. It sticks when ICs feel inspired and safe enough to work differently and when managers back that up with support and consistency. Sometimes a C-suite champion helps set the tone, but day-to-day, it's middle managers and senior ICs who do the slow, steady work of normalizing new behavior. You need repeated proof that it's okay to pause and ask why, to explore, to admit uncertainty. Without that psychological safety, people just go back to what they know: deliverables and deadlines. 


AI-enabled software development: Risk of skill erosion or catalyst for growth?

We need to reframe AI not as a rival, but as a tool—one that has its own pros and cons and can extend human capability, not devalue it. This shift in perspective opens the door to a broader understanding of what it means to be a skilled engineer today. Using AI doesn’t eliminate the need for expertise—it changes the nature of that expertise. Classical programming, once central to the developer’s identity, becomes one part of a larger repertoire. In its place emerge new competencies: critical evaluation, architectural reasoning, prompt literacy, source skepticism, interpretative judgment. These are not hard skills, but meta-cognitive abilities—skills that require us to think about how we think. We’re not losing cognitive effort—we’re relocating it. This transformation mirrors earlier technological shifts. ... Some of the early adopters of AI enablement are already looking ahead—not just at the savings from replacing employees with AI, but at the additional gains those savings might unlock. With strategic investment and redesigned expectations, AI can become a growth driver—not just a cost-cutting tool. But upskilling alone isn’t enough. As organizations embed AI deeper into the development workflow, they must also confront the technical risks that come with automation. The promise of increased productivity can be undermined if these tools are applied without adequate context, oversight, or infrastructure.

Daily Tech Digest - July 22, 2025


Quote for the day:

“Being responsible sometimes means pissing people off.” -- Colin Powell


It might be time for IT to consider AI models that don’t steal

One option that has many pros and cons is to use genAI models that explicitly avoid training on any information that is legally dicey. There are a handful of university-led initiatives that say they try to limit model training data to information that is legally in the clear, such as open source or public domain material. ... “Is it practical to replace the leading models of today right now? No. But that is not the point. This level of quality was built on just 32 ethical data sources. There are millions more that can be used,” Wiggins wrote in response to a reader’s comment on his post. “This is a baseline that proves that Big AI lied. Efforts are underway to add more data that will bring it up to more competitive levels. It is not there yet.” Still, enterprises are investing in and planning for genAI deployments for the long term, and they may find in time that ethically sourced models deliver both safety and performance. ... Tipping the scales in the other direction are the big model makers’ promises of indemnification. Some genAI vendors have said they will cover the legal costs for customers who are sued over content produced by their models. “If the model provides indemnification, this is what enterprises should shoot for,” Moor’s Andersen said.


The unique, mathematical shortcuts language models use to predict dynamic scenarios

One go-to pattern the team observed, called the “Associative Algorithm,” essentially organizes nearby steps into groups and then calculates a final guess. You can think of this process as being structured like a tree, where the initial numerical arrangement is the “root.” As you move up the tree, adjacent steps are grouped into different branches and multiplied together. At the top of the tree is the final combination of numbers, computed by multiplying each resulting sequence on the branches together. The other way language models guessed the final permutation was through a crafty mechanism called the “Parity-Associative Algorithm,” which essentially whittles down options before grouping them. It determines whether the final arrangement is the result of an even or odd number of rearrangements of individual digits. ... “These behaviors tell us that transformers perform simulation by associative scan. Instead of following state changes step-by-step, the models organize them into hierarchies,” says MIT PhD student and CSAIL affiliate Belinda Li SM ’23, a lead author on the paper. “How do we encourage transformers to learn better state tracking? Instead of imposing that these systems form inferences about data in a human-like, sequential way, perhaps we should cater to the approaches they naturally use when tracking state changes.”
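As a rough illustration of the difference between the two strategies described above (our sketch, not code from the paper), the snippet below tracks the final state of a sequence of rearrangements either one step at a time or by pairing adjacent steps into a tree. Because permutation composition is associative, both routes reach the same final arrangement.

```python
# Hypothetical illustration: sequential state tracking vs. an associative,
# tree-style scan over a list of permutations (each given as an index mapping).
from functools import reduce

def compose(p, q):
    """Permutation that applies q first, then p: result[i] = p[q[i]]."""
    return [p[q[i]] for i in range(len(q))]

def sequential_state(perms):
    """Follow the state change step by step, earliest permutation first."""
    return reduce(lambda acc, step: compose(step, acc), perms)

def tree_state(perms):
    """Group adjacent steps, combine the groups, and repeat -- a tree of partial results."""
    while len(perms) > 1:
        paired = [compose(perms[i + 1], perms[i]) for i in range(0, len(perms) - 1, 2)]
        if len(perms) % 2:                 # carry an unpaired step up to the next level
            paired.append(perms[-1])
        perms = paired
    return perms[0]

steps = [[1, 0, 2], [0, 2, 1], [2, 1, 0], [1, 2, 0]]
assert sequential_state(steps) == tree_state(steps)  # same final arrangement
```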


Role of AI in fortifying cryptocurrency security

In the rapidly expanding realm of Decentralised Finance (DeFi), AI will play a critical role in optimising complex lending, borrowing, and trading protocols. AI can intelligently manage liquidity pools, optimise yield farming strategies for better returns and reduced impermanent loss, and even identify subtle arbitrage opportunities across various platforms. Crucially, AI will also be vital in identifying and mitigating novel types of exploits that are unique to the intricate and interconnected world of DeFi. Looking further ahead, AI will be crucial in developing Quantum-Resistant Cryptography. As quantum computing advances, it poses a theoretical threat to the underlying cryptographic methods that secure current blockchain networks. AI can significantly accelerate the research and development of “post-quantum cryptography” (PQC) algorithms, which are designed to withstand the immense computational power of future quantum computers. AI can also be used to simulate quantum attacks, rigorously testing existing and new cryptographic designs for vulnerabilities. Finally, the concept of Autonomous Regulation could redefine oversight in the crypto space. Instead of traditional, reactive regulatory approaches, AI-driven frameworks could provide real-time, proactive oversight without stifling innovation. 


From Visibility to Action: Why CTEM Is Essential for Modern Cybersecurity Resilience

CTEM shifts the focus from managing IT vulnerabilities in isolation to managing exposure in collaboration, something that’s far more aligned with the operational priorities of today’s organizations. Where traditional approaches center around known vulnerabilities and technical severity, CTEM introduces a more business-driven lens. It demands ongoing visibility, context-rich prioritization, and a tighter alignment between security efforts and organizational impact. In doing so, it moves the conversation from “What’s vulnerable?” to “What actually matters right now?” – a far more useful question when resilience is on the line. What makes CTEM particularly relevant beyond security teams is its emphasis on continuous alignment between exposure data and operational decision-making. This makes it valuable not just for threat reduction, but for supporting broader resilience efforts, ensuring resources are directed toward the exposures most likely to disrupt critical operations. It also complements, rather than replaces, existing practices like attack surface management (ASM). CTEM builds on these foundations with more structured prioritization, validation, and mobilization, turning visibility into actionable risk reduction. 


Driving Platform Adoption: Community Is Your Value

Remember that in a Platform as a Product approach, developers are your customers. If they don’t know what’s available, how to use it or what’s coming next, they’ll find workarounds. These conferences and speaker series are a way to keep developers engaged, improve adoption and ensure the platform stays relevant. There’s a human side to this, too, often left out when corporate-land focuses on “the business value” and outcomes: just having a friendly community of humans who like to spend time with each other and learn. ... Successful platform teams have active platform advocacy. This requires at least one person working full time to essentially build empathy with your users by working with and listening to the people who use your platforms. You may start with just one platform advocate who visits developer teams, listening for feedback while teaching them how to use the platform and associated methodologies. The advocate acts as both a counselor and delegate for your developers. ... The journey to successful platform adoption is more than just communicating technical prowess. Embracing systematic approaches to platform marketing that include clear messaging and positioning based on customers’ needs and a strong brand ethos is the key to communicating the value of your platform.


9 AI development skills tech companies want

“It’s not enough to know how a transformer model works; what matters is knowing when and why to use AI to drive business outcomes,” says Scott Weller, CTO of AI-powered credit risk analysis platform EnFi. “Developers need to understand the tradeoffs between heuristics, traditional software, and machine learning, as well as how to embed AI in workflows in ways that are practical, measurable, and responsible.” ... “In AI-first systems, data is the product,” Weller says. “Developers must be comfortable acquiring, cleaning, labeling, and analyzing data, because poor data hygiene leads to poor model performance.” ... AI safety and reliability engineering “looks at the zero-tolerance safety environment of factory operations, where AI failures could cause safety incidents or production shutdowns,” Miller says. To ensure the trust of its customers, IFS needs developers who can build comprehensive monitoring systems to detect when AI predictions become unreliable and implement automated rollback mechanisms to traditional control methods when needed, Miller says. ... “With the rapid growth of large language models, developers now require a deep understanding of prompt design, effective management of context windows, and seamless integration with LLM APIs—skills that extend well beyond basic ChatGPT interactions,” Tupe says.


Why AI-Driven Logistics and Supply Chains Need Resilient, Always-On Networks

Something worth noting about increased AI usage in supply chains is that as AI-enabled systems become more complex, they also become more delicate, which increases the potential for outages. Something as simple as a single misconfiguration or unintentional interaction between automated security gates can lead to a network outage, preventing supply chain personnel from accessing critical AI applications. During an outage, AI clusters (interconnected GPU/TPU nodes used for training and inference) can also become unavailable. ... Businesses must increase network resiliency to ensure their supply chain and logistics teams always have access to key AI applications, even during network outages and other disruptions. One approach that companies can take to strengthen network resilience is to implement purpose-built infrastructure like out-of-band (OOB) management. With OOB management, network administrators can separate and containerize functions of the management plane, allowing it to operate freely from the primary in-band network. This secondary network acts as an always-available, independent, dedicated channel that administrators can use to remotely access, manage, and troubleshoot network infrastructure.


From architecture to AI: Building future-ready data centers

In some cases, the pace of change is so fast that buildings are being retrofitted even as they are being constructed. O'Rourke has observed that, once CPUs are installed, data center owners opt to upgrade racks row by row, rather than converting the entire facility to liquid cooling at once – largely because the building wasn’t originally designed to support higher-density racks. To accommodate this reality, Tate carries out in-row upgrades by providing specialized structures to mount manifolds, which distribute coolant from air-cooled chillers throughout the data halls. “Our role is to support the physical distribution of that cooling infrastructure,” explains O'Rourke. “Manifold systems can’t be supported by existing ceilings or hot aisle containment due to weight limits, so we’ve developed floor-mounted frameworks to hold them.” He adds: “GPU racks also can’t replace all CPU racks one-to-one, as the building structure often can’t support the added load. Instead, GPUs must be strategically placed, and we’ve created solutions to support these selective upgrades.” By designing manifold systems with actuators that integrate with the building management system (BMS), along with compatible hot aisle containment and ceiling structures, Tate has developed a seamless, integrated solution for the white space.


Weaving reality or warping it? The personalization trap in AI systems

At first, personalization was a way to improve “stickiness” by keeping users engaged longer, returning more often and interacting more deeply with a site or service. Recommendation engines, tailored ads and curated feeds were all designed to keep our attention just a little longer, perhaps to entertain but often to move us to purchase a product. But over time, the goal has expanded. Personalization is no longer just about what holds us. It is what it knows about each of us, the dynamic graph of our preferences, beliefs and behaviors that becomes more refined with every interaction. Today’s AI systems do not merely predict our preferences. They aim to create a bond through highly personalized interactions and responses, creating a sense that the AI system understands and cares about the user and supports their uniqueness. The tone of a chatbot, the pacing of a reply and the emotional valence of a suggestion are calibrated not only for efficiency but for resonance, pointing toward a more helpful era of technology. It should not be surprising that some people have even fallen in love and married their bots. The machine adapts not just to what we click on, but to who we appear to be. It reflects us back to ourselves in ways that feel intimate, even empathic. 


Microsoft Rushes to Stop Hackers from Wreaking Global Havoc

Multiple hackers are launching attacks through the Microsoft vulnerability, according to representatives of two cybersecurity firms, CrowdStrike Holdings, Inc. and Google's Mandiant Consulting. Hackers have already used the flaw to break into the systems of national governments in Europe and the Middle East, according to a person familiar with the matter. In the US, they've accessed government systems, including ones belonging to the US Department of Education, Florida's Department of Revenue and the Rhode Island General Assembly, said the person, who spoke on condition that they not be identified discussing the sensitive information. ... The breaches have drawn new scrutiny to Microsoft's efforts to shore up its cybersecurity after a series of high-profile failures. The firm has hired executives from places like the US government and holds weekly meetings with senior executives to make its software more resilient. The company's tech has been subject to several widespread and damaging hacks in recent years, and a 2024 US government report described the company's security culture as in need of urgent reforms. ... "There were ways around the patches," which enabled hackers to break into SharePoint servers by tapping into similar vulnerabilities, said Bernard. "That allowed these attacks to happen."

Daily Tech Digest - July 21, 2025


Quote for the day:

"Absolute identity with one's cause is the first and great condition of successful leadership." -- Woodrow Wilson


Is AI here to take or redefine your cybersecurity role?

Unlike Thibodeaux, Watson believes the level-one SOC analyst role “is going to be eradicated” by AI eventually. But he agrees with Thibodeaux that AI will move the table stakes forward on the skills needed to land a starter job in cyber. “The thing that will be cannibalized first is the sort of entry-level basic repeatable tasks, the things that people traditionally might have cut their teeth on in order to sort of progress to the next level. Therefore, the skill requirement to get a role in cybersecurity will be higher than what it has been traditionally,” says Watson. To help cyber professionals attain AI skills, CompTIA is developing a new certification program called SecAI. The course will target cyber people who already have three to four years of experience in a core cybersecurity job. The curriculum will include practical AI skills to proactively combat emerging cyber threats, integrating AI into security operations, defending against AI-driven attacks, and compliance for AI ethics and governance standards. ... As artificial intelligence takes over a rising number of technical cybersecurity tasks, Watson says one of the best ways security workers can boost their employment value is by sharpening their human skills like business literacy and communication: “The role is shifting to be one of partnering and advising because a lot of the technology is doing the monitoring, triaging, quarantining and so on.”


5 tips for building foundation models for AI

"We have to be mindful that, when it comes to training these models, we're doing it purposefully, because you can waste a lot of cycles on the exercise of learning," he said. "The execution of these models takes far less energy and resources than the actual training." OS usually feeds training data to its models in chunks. "Building up the label data takes quite a lot of time," he said. "You have to curate data across the country with a wide variety of classes that you're trying to learn from, so a different mix between urban and rural, and more." The organisation first builds a small model that uses several hundred examples. This approach helps to constrain costs and ensures OS is headed in the right direction. "Then we slowly build up that labelled set," Jethwa said. "I think we're now into the hundreds of thousands of labelled examples. Typically, these models are trained with millions of labelled datasets." While the organization's models are smaller, the results are impressive. "We're already outperforming the existing models that are out there from the large providers because those models are trained on a wider variety of images," he said. "The models might solve a wider variety of problems, but, for our specific domain, we outperform those models, even at a smaller scale."


Reduce, re-use, be frugal with AI and data

By being more selective with the data included in language models, businesses can better control their carbon emissions, limiting energy to be spent on the most important resources. In healthcare, for example, separating the most up-to-date medical information and guidance from the rest of the information on that topic will mean safer, more reliable and faster responses to patient treatment. ... Frugal AI means adopting an intelligent approach to data that focuses on using the most valuable information only. When businesses have a greater understanding of their data, how to label it, identify it and which teams are responsible for its deletion, then the storage of single-use data can be significantly reduced. Only then can frugal AI systems be put in place, allowing businesses to adopt a resource-aware and efficient approach to both their data consumption and AI usage. It’s important to stress here, though, that frugal AI doesn’t mean the end results are lesser or the technology’s impact reduced; it means that the data that goes into AI is concentrated: smaller but just as impactful. Think of it like making a drink with extra-concentrated squash. Frugal AI is that extra-concentrated squash that puts data efficiency, consideration and strategy at the centre of an organisation’s AI ambitions.


Cyber turbulence ahead as airlines strap in for a security crisis

Although organizations have acknowledged the need to boost spending, progress remains to be made and new measures adopted. Legacy OT systems, which often lack security features such as automated patching and built-in encryption, should be addressed as a top priority. Although upgrading these systems can be costly, it is essential to prevent further disruptions and vulnerabilities. Mapping the aviation supply chain helps identify all key partners, which is important for conducting security audits and enforcing contractual cybersecurity requirements. This should be reinforced with multi-layered perimeter defenses, including encryption, firewalls, and intrusion detection systems, alongside zero-trust network segmentation to minimize the risk of attackers moving laterally within networks. Companies should implement real-time threat monitoring and response by deploying intrusion detection systems, centralizing analysis with SIEM, and maintaining a regularly tested incident response plan to identify, contain, and mitigate cyberattacks. ... One of the most important steps is to train all staff, including pilots and ground crews, to recognize scams. Since recent security breaches have mostly relied on social engineering tactics, this type of training is essential. A single phone call or a convincing email can be enough to trigger a data breach. 


What Does It Mean to Be Data-Driven?

A data-driven organization understands the value of its data and the best ways to capitalize on that value. Its data assets are aligned with its goals and the processes in place to achieve those goals. Protecting the company’s data assets requires incorporating governance practices to ensure managers and employees abide by privacy, security, and integrity guidelines. In addition to proper data governance, the challenges to implementing a data-driven infrastructure for business processes are data quality and integrity, data integration, talent acquisition, and change management. ... To ensure the success of their increasingly critical data initiatives, organizations look to the characteristics that led to effective adoption of data-driven programs at other companies. Management services firm KPMG identifies four key characteristics of successful data-driven initiatives: leadership involvement, investments in digital literacy, seamless access to data assets, and promotion and monitoring. ... While data-as-a-service (DaaS) emphasizes the sale of external data, data as a product (DaaP) considers all of a company’s data and the mechanisms in place for moving and storing the data as a product that internal operations rely on. The data team becomes a “vendor” serving “customers” throughout the organization.


AI Needs a Firewall and Cloud Needs a Rethink

Hyperscalers dominate most of enterprise IT today, and few are willing to challenge the status quo of cloud economics, artificial intelligence infrastructure and cybersecurity architectures. But Tom Leighton, co-founder and CEO of Akamai, does just that. He argues that the cloud has become bloated, expensive and overly centralized. The internet needs a new kind of infrastructure that is distributed, secure by design and optimized for performance at the edge, Leighton told Information Security Media Group. From edge-native AI inference and API security to the world's first firewall for artificial intelligence, Akamai is no longer just delivering content - it's redesigning the future. ... Among the most notable developments Leighton discussed was a new product category: an AI firewall. "People are training models on sensitive data and then exposing them to the public. That creates a new attack surface," Leighton said. "AI hallucinates. You never know what it's going to do. And the bad guys have figured out how to trick models into leaking data or doing bad things." Akamai's AI firewall monitors prompts and responses to prevent malicious prompts from manipulating the model and to avoid leaking sensitive data. "It can be implemented on-premises, in the cloud or within Akamai's platform, providing flexibility based on customer preference."


Human and machine: Rediscovering our humanity in the age of AI

In an era defined by the rapid advancement of AI, machines are increasingly capable of tasks once considered uniquely human. ... Ethical decision-making, relationship building and empathy have been identified as the most valuable, both in our present reality and in the AI-driven future. ... As we navigate this era of AI, we must remember that technology is a tool, not a replacement for humanity. By embracing our capacity for creativity, connection and empathy, we can ensure that AI serves to enhance our humanity, not diminish it. This means accepting that preserving our humanness sometimes requires assistance. It means investing in education and training that fosters critical thinking, problem-solving and emotional intelligence. It means creating workplaces that value human connection and collaboration, where employees feel supported and empowered to bring their whole selves to work. And it means fostering a culture that celebrates creativity, innovation and the pursuit of knowledge. At a time when seven out of every ten companies are already using AI in at least one business function, let us embrace the challenge of this new era with both optimism and intentionality. Let us use AI to build a better future for ourselves and for generations to come – a future where technology serves humanity, and where every individual has the opportunity to thrive.


‘Interoperable but not identical’: applying ID standards across diverse communities

Exchanging knowledge and experiences with identity systems to improve future ID projects is central to ID4Africa’s mission. At this year’s ID4Africa AGM in Addis Ababa, Ethiopia, a tension was more evident than ever before between the quest for transferable insights and replicable successes and the uniqueness of each African nation. Thales Cybersecurity and Digital Identity Field Marketing Director for the Middle East and Africa Jean Lindner wrote in an emailed response to questions from Biometric Update following the event that the mix of attendees reflected that “every African country has its own diverse history or development maturity and therefore unique legacy identity systems, with different constraints. Let us recognize here there is no unique quick-fix to country-specific hurdles,” he says. The lessons of one country can only benefit another to the extent that common ground is identified. The development of the concept of digital public infrastructure has mapped out some common ground, but standards and collaborative organizations have a major role to play. Unfortunately, Stéphanie de Labriolle, executive director services at the Secure Identity Alliance, says “the widespread lack of clarity around standards and what compliance truly entails” was striking at this year’s ID4Africa AGM.


The Race to Shut Hackers out of IoT Networks

Considered among the weakest links in enterprise networks, IoT devices are used across industries to perform critical tasks at a rapid rate. An estimated 57% of deployed units "are susceptible to medium- or high-severity attacks," according to research from security vendor Palo Alto Networks. IoT units are inherently vulnerable to security attacks, and enterprises are typically responsible for protecting against threats. Additionally, the IoT industry hasn't settled on standardized security, as time to market is sometimes a priority over standards. ... 3GPP developed RedCap to provide a viable option for enterprises seeking a higher-performance, feature-rich 5G alternative to traditional IoT connectivity options such as low-power WANs (LPWANs). LPWANs are traditionally used to transmit limited data over low-speed cellular links at a low cost. In contrast, RedCap offers moderate bandwidth and enhanced features for more demanding use cases, such as video surveillance cameras, industrial control systems in manufacturing and smart building infrastructure. ... From a security standpoint, RedCap inherits strong capabilities in 5G, such as authentication, encryption and integrity protection. It can also be supplemented at application and device levels for a multilayered security approach.


Architecting the MVP in the Age of AI

A key aspect of architecting an MVP is forming and testing hypotheses about how the system will meet its quality attribute requirements (QARs). Understanding and prioritizing these QARs is not an easy task, especially for teams without a lot of architecture experience. AI can help when teams provide context by describing the QARs that the system must satisfy in a prompt and asking the LLM to suggest related requirements. The LLM may suggest additional QARs that the team may have overlooked. For example, if performance, security, and usability are the top 3 QARs that a team is considering, an LLM may suggest looking at scalability and resilience as well. This can be especially helpful for people who are new to software architecture. ... Sometimes validating the AI’s results may require more skills than would be required to create the solution from scratch, just as is sometimes the case when seeing someone else’s code and realizing that it’s better than what you would have developed on your own. This can be an effective way to improve developers’ skills, provided that the code is good. AI can also help you find and fix bugs in your code that you may miss. Beyond simple code inspection, experimentation provides a means of validating the results produced by AI. In fact, experimentation is the only real way to validate it, as some researchers have discovered.
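As a simple illustration of the kind of prompt involved, the sketch below asks an LLM to propose quality attribute requirements a team may have overlooked. It assumes the OpenAI Python client and an API key in the environment purely for concreteness; the model name is a placeholder and any LLM interface would serve equally well.

```python
# Illustrative sketch only: prompting an LLM to suggest overlooked QARs.
# Assumes the OpenAI Python client; swap in whatever model/provider your team uses.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Our MVP's top quality attribute requirements are performance, security, and usability. "
    "Suggest related quality attributes we may have overlooked, with a one-line rationale for each."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```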

Daily Tech Digest - May 01, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden



Bridging the IT and security team divide for effective incident response

One reason IT and security teams end up siloed is the healthy competitiveness that often exists between them. IT wants to innovate, while security wants to lock things down. These teams are made up of brilliant minds. However, faced with the pressure of a crisis, they might hesitate to admit they feel out of control, simmering issues may come to a head, or they may become so fixated on solving the issue that they fail to update others. To build an effective incident response strategy, identifying a shared vision is essential. Here, leadership should host joint workshops where teams learn more about each other and share ideas about embedding security into system architecture. These sessions should also simulate real-world crises, so that each team is familiar with how their roles intersect during a high-pressure situation and feels comfortable when an actual crisis arises. ... By simulating realistic scenarios – whether it’s ransomware incidents or malware attacks – those in leadership positions can directly test and measure the incident response plan so that it becomes an ingrained process. Throw in curveballs when needed, and use these exercises to identify gaps in processes, tools, or communication. There’s a world of issues to uncover: disconnected tools and systems; a lack of automation that could speed up response times; and excessive documentation requirements.


First Principles in Foundation Model Development

The mapping of words and concepts into high-dimensional vectors captures semantic relationships in a continuous space. Words with similar meanings or that frequently appear in similar contexts are positioned closer to each other in this vector space. This allows the model to understand analogies and subtle nuances in language. The emergence of semantic meaning from co-occurrence patterns highlights the statistical nature of this learning process. Hierarchical knowledge structures, such as the understanding that “dog” is a type of “animal,” which is a type of “living being,” develop organically as the model identifies recurring statistical relationships across vast amounts of text. ... The self-attention mechanism represents a significant architectural innovation. Unlike recurrent neural networks that process sequences sequentially, self-attention allows the model to consider all parts of the input sequence simultaneously when processing each word. The “dynamic weighting of contextual relevance” means that for any given word in the input, the model can attend more strongly to other words that are particularly relevant to its meaning in that specific context. This ability to capture long-range dependencies is critical for understanding complex language structures. The parallel processing capability significantly speeds up training and inference. 
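A minimal numerical sketch (ours, not from the article) of scaled dot-product self-attention shows the key idea: every position's query is scored against every other position's key in one matrix operation, so contextual relevance is weighted dynamically and the whole sequence is processed in parallel rather than word by word.

```python
# Minimal self-attention illustration with NumPy; single head, no masking.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model). Each position attends to all positions at once."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relevance of positions
    weights = softmax(scores, axis=-1)        # dynamic contextual weighting
    return weights @ V                        # context-mixed representations

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 5, 16, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = [rng.normal(size=(d_model, d_head)) for _ in range(3)]
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)
```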


The best preparation for a password-less future is to start living there now

One of the big ideas behind passkeys is to keep us users from behaving as our own worst enemies. For nearly two decades, malicious actors -- mainly phishers and smishers -- have been tricking us into giving them our passwords. You'd think we would have learned how to detect and avoid these scams by now. But we haven't, and the damage is ongoing. ... But let's be clear: Passkeys are not passwords. If we're getting rid of passwords, shouldn't we also get rid of the phrase "password manager?" Note that there are two primary types of credential managers. The first is the built-in credential manager. These are the ones from Apple, Google, Microsoft, and some browser makers built into our platforms and browsers, including Windows, Edge, MacOS, Android, and Chrome. With passkeys, if you don't bring your own credential manager, you'll likely end up using one of these. ... The FIDO Alliance defines a "roaming authenticator" as a separate device to which your passkeys can be securely saved and recalled. Examples are hardware security keys (e.g., Yubico) and recent Android phones and tablets, which can act in the capacity of a hardware security key. Since your credentials to your credential manager are literally the keys to your entire kingdom, they deserve some extra special security.


Mind the Gap: Assessing Data Quality Readiness

Data Quality Readiness is defined as the ratio of the number of fully described Data Quality Measure Elements that are being calculated and/or collected to the number of Data Quality Measure Elements in the desired set of Data Quality Measures. By fully described I mean both the “number of data values” part and the “that are outliers” part. The first prerequisite activity is determining which Quality Measures you want to implement. The ISO standard defines 15 different Data Quality Characteristics. I covered those last time. The Data Quality Characteristics are made up of 63 Quality Measures. The Quality Measures are categorized as Highly Recommendable (19), Recommendable (36), and For Reference (8). This provides a starting point for prioritization. Begin with a few measures that are most applicable to your organization and that will have the greatest potential to improve the quality of your data. The reusability of the Quality Measures can factor into the decision, but it shouldn’t be the primary driver. The objective is not merely to collect information for its own sake, but to use that information to generate value for the enterprise. The result will be a set of Data Quality Measure Elements to collect and calculate. You do the ones that are best for you, but I would recommend looking at two in particular.
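Because the definition above is just a ratio, a toy calculation makes it concrete; the element names below are invented for illustration, not taken from the ISO standard.

```python
# Toy Data Quality Readiness calculation; element names are made up.
desired_elements = {
    "number_of_data_values", "number_of_outliers", "number_of_null_values",
    "number_of_duplicate_records", "number_of_stale_records",
}
fully_described = {"number_of_data_values", "number_of_outliers", "number_of_null_values"}

readiness = len(fully_described & desired_elements) / len(desired_elements)
print(f"Data Quality Readiness: {readiness:.0%}")  # 60%
```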


Why non-human identity security is the next big challenge in cybersecurity

What makes this particularly challenging is that each of these identities requires access to sensitive resources and carries potential security risks. Unlike human users, who follow predictable patterns and can be managed through traditional IAM solutions, non-human identities operate 24/7, often with elevated privileges, making them attractive targets for attackers. ... We’re witnessing a paradigm shift in how we need to think about identity security. Traditional security models were built around human users – focusing on aspects like authentication, authorisation and access management from a human-centric perspective. But this approach is inadequate for the machine-dominated future we’re entering. Organisations need to adopt a comprehensive governance framework specifically designed for non-human identities. This means implementing automated discovery and classification of all machine identities and their secrets, establishing centralised visibility and control and enforcing consistent security policies across all platforms and environments. ... First, organisations need to gain visibility into their non-human identity landscape. This means conducting a thorough inventory of all machine identities and their secrets, their access patterns and their risk profiles.


Preparing for the next wave of machine identity growth

First, let’s talk about the problem of ownership. Even organizations that have conducted a thorough inventory of the machine identities in their environments often lack a clear understanding of who is responsible for managing those identities. In fact, 75% of the organizations we surveyed indicated that they don’t have assigned ownership for individual machine identities. That’s a real problem—especially since poor (or insufficient) governance practices significantly increase the likelihood of compromised access, data loss, and other negative outcomes. Another critical blind spot is around understanding what data each machine identity can or should be able to access—and just as importantly, what it cannot and should not access. Without clarity, it becomes nearly impossible to enforce proper security controls, limit unnecessary exposure, or maintain compliance. Each machine identity is a potential access point to sensitive data and critical systems. Failing to define and control their access scope opens the door to serious risk. Addressing the issue starts with putting a comprehensive machine identity security solution in place—ideally one that lets organizations govern machine identities just as they do human identities. Automation plays a critical role: with so many identities to secure, a solution that can discover, classify, assign ownership, certify, and manage the full lifecycle of machine identities significantly streamlines the process.


To Compete, Banking Tech Needs to Be Extensible. A Flexible Platform is Key

The banking ecosystem includes three broad stages along the trajectory toward extensibility, according to Ryan Siebecker, a forward deployed engineer at Narmi, a banking software firm. These include closed, non-extensible systems — typically legacy cores with proprietary software that doesn’t easily connect to third-party apps; systems that allow limited, custom integrations; and open, extensible systems that allow API-based connectivity to third-party apps. ... The route to extensibility can be enabled through an internally built, custom middleware system, or institutions can work with outside vendors whose systems operate in parallel with core systems, including Narmi. Michigan State University Federal Credit Union, which began its journey toward extensibility in 2009, pursued an independent route by building in-house middleware infrastructure to allow API connectivity to third-party apps. Building in-house made sense given the early rollout of extensible capabilities, but when developing a toolset internally, institutions need to consider appropriate staffing levels — a commitment not all community banks and credit unions can make. For MSUFCU, the benefit was greater customization, according to the credit union’s chief technology officer Benjamin Maxim. "With the timing that we started, we had to do it all ourselves," he says, noting that it took about 40 team members to build a middleware system to support extensibility.


5 Strategies for Securing and Scaling Streaming Data in the AI Era

Streaming data should never be wide open within the enterprise. Least-privilege access controls, enforced through role-based (RBAC) or attribute-based (ABAC) access control models, limit each user or application to only what’s essential. Fine-grained access control lists (ACLs) add another layer of protection, restricting read/write access to only the necessary topics or channels. Combine these controls with multifactor authentication, and even a compromised credential is unlikely to give attackers meaningful reach. ... Virtual private cloud (VPC) peering and private network setups are essential for enterprises that want to keep streaming data secure in transit. These configurations ensure data never touches the public internet, thus eliminating exposure to distributed denial of service (DDoS), man-in-the-middle attacks and external reconnaissance. Beyond security, private networking improves performance. It reduces jitter and latency, which is critical for applications that rely on subsecond delivery or AI model responsiveness. While VPC peering takes thoughtful setup, the benefits in reliability and protection are well worth the investment. ... Just as importantly, security needs to be embedded into culture. Enterprises that regularly train their employees on privacy and data protection tend to identify issues earlier and recover faster.
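As a toy illustration of topic-level least privilege (not tied to any particular streaming platform), the sketch below grants each principal only the operations it needs on specific topics and denies everything else by default; the principal and topic names are invented.

```python
# Toy topic-level ACL check; principals, topics and operations are invented.
ACLS = {
    ("svc-orders-writer", "orders"): {"write"},
    ("svc-fraud-model", "orders"): {"read"},
    ("svc-fraud-model", "fraud-scores"): {"write"},
}

def is_allowed(principal: str, topic: str, operation: str) -> bool:
    """Deny by default; allow only operations explicitly granted for (principal, topic)."""
    return operation in ACLS.get((principal, topic), set())

assert is_allowed("svc-fraud-model", "orders", "read")
assert not is_allowed("svc-fraud-model", "orders", "write")   # least privilege holds
```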


Supply Chain Cybersecurity – CISO Risk Management Guide

Modern supply chains often span continents and involve hundreds or even thousands of third-party vendors, each with their security postures and vulnerabilities. Attackers have recognized that breaching a less secure supplier can be the easiest way to compromise a well-defended target. Recent high-profile incidents have shown that supply chain attacks can lead to data breaches, operational disruptions, and significant financial losses. The interconnectedness of digital systems means that a single compromised vendor can have a cascading effect, impacting multiple organizations downstream. For CISOs, this means that traditional perimeter-based security is no longer sufficient. Instead, a holistic approach must be taken that considers every entity with access to critical systems or data as a potential risk vector. ... Building a secure supply chain is not a one-time project—it’s an ongoing journey that demands leadership, collaboration, and adaptability. CISOs must position themselves as business enablers, guiding the organization to view cybersecurity not as a barrier but as a competitive advantage. This starts with embedding cybersecurity considerations into every stage of the supplier lifecycle, from onboarding to offboarding. Leadership engagement is crucial: CISOs should regularly brief the executive team and board on supply chain risks, translating technical findings into business impacts such as potential downtime, reputational damage, or regulatory penalties.


Developers Must Slay the Complexity and Security Issues of AI Coding Tools

Beyond adding further complexity to the codebase, AI models also lack the contextual nuance that is often necessary for creating high-quality, secure code, primarily when used by developers who lack security knowledge. As a result, vulnerabilities and other flaws are being introduced at a pace never before seen. The current software environment has grown out of control security-wise, showing no signs of slowing down. But there is hope for slaying these twin dragons of complexity and insecurity. Organizations must step into the dragon’s lair armed with strong developer risk management, backed by education and upskilling that gives developers the tools they need to bring software under control. ... AI tools increase the speed of code delivery, enhancing efficiency in raw production, but those early productivity gains are being overwhelmed by code maintainability issues later in the SDLC. The answer is to address those issues at the beginning, before they put applications and data at risk. ... Organizations involved in software creation need to change their culture, adopting a security-first mindset in which secure software is seen not just as a technical issue but as a business priority. Persistent attacks and high-profile data breaches have become too common for boardrooms and CEOs to ignore.