
Daily Tech Digest - July 28, 2025


Quote for the day:

"Don't watch the clock; do what it does. Keep going." -- Sam Levenson



Architects Are… Human

Architects are not super-human. Most learned to be good by failing miserably dozens or hundreds of times. Many got the title handed to them. Many gave it to themselves. Most come from spectacularly different backgrounds. Most have a very different skill set. Most disagree with each other. ... When someone gets online and says, ‘Real Architects’, I puke a little. There are no real architects. Because there is no common definition of what that means. What competencies should they have? How were those competencies measured and by whom? Did the person who measured them have a working model by which to compare their work? To make a real architect repeatedly, we have to get together and agree what that means. Specifically. Repeatably. Over and over and over again. Tens of thousands of times and learn from each one how to do it better as a group! ... The competency model for a successful architect is large, difficult to learn, and most employers do not recognize it or give you opportunities to practice it very often. They have defined their own internal model, from ‘all architects are programmers’ to ‘all architects work with the CEO’. The truth is simple. Study. Experiment. Ask tough questions. Simple answers are not the answer. You do not have to be everything to everyone. Business architects aren’t right, but neither are software architects.


Mitigating Financial Crises: The Need for Strong Risk Management Strategies in the Banking Sector

Poor risk management can lead to liquidity shortfalls, and failure to maintain adequate capital buffers can potentially result in insolvency and trigger wider market disruptions. Weak practices also contribute to a build-up of imbalances, such as lending booms, which unravel simultaneously across institutions and contribute to widespread market distress. In addition, banks’ balance sheets and financial contracts are interconnected, meaning a failure in one institution can quickly spread to others, amplifying systemic risk. ... Poor risk controls and a lack of enforcement also encourage excessive moral hazard and risk-taking behavior that exceed what a bank can safely manage, undermining system stability. Homogeneous risk diversification can also be costly and exacerbate systemic risk. When banks diversify risks in similar ways, individual risk reduction paradoxically increases the probability of simultaneous multiple failures. Fragmented regulation and inadequate risk frameworks fail to address these systemic vulnerabilities, since persistent weak risk management practices threaten the entire financial system. In essence, weak risk management undermines individual bank stability, while the interconnected and pro-cyclical nature of the banking system can trigger cascading failures that escalate into systemic crises.


Where Are the Big Banks Deploying AI? Simple Answer: Everywhere

Of all the banks presenting, BofA was the most explicit in describing how it is using various forms of artificial intelligence. Artificial intelligence allows the bank to effectively change the work across more areas of its operations than prior types of tech tools allowed, according to Brian Moynihan, chair and CEO. The bank included a full-page graphic among its presentation slides, a chart describing, in Moynihan’s words, four "pillars" where the bank is applying AI tools. ... While many banks have tended to stop short of letting their use of GenAI touch customers directly, Synchrony has introduced a tool for its customers when they want to shop for various consumer items. It launched its pilot of Smart Search a year ago. Smart Search pairs natural-language search with GenAI. It is a joint effort of the bank’s AI technology and product incubation teams. The functionality permits shoppers using Synchrony’s Marketplace to enter a phrase or theme related to decorating and home furnishings. The AI presents shoppers with a "handpicked" selection of products matching the information entered, all of which are provided by merchant partners. ... Citizens is in the midst of its "Reimagining the Bank," Van Saun explained. This entails rethinking and redesigning how Citizens serves customers. He said Citizens is "talking with lots of outside consultants looking at scenarios across all industries across the planet in the banking industry."


How logic can help AI models tell more truth, according to AWS

By whatever name you call it, automated reasoning refers to algorithms that search for statements or assertions about the world that can be verified as true by using logic. The idea is that all knowledge is rigorously supported by what's logically able to be asserted. As Cook put it, "Reasoning takes a model and lets us talk accurately about all possible data it can produce." Cook gave a brief snippet of code as an example that demonstrates how automated reasoning achieves that rigorous validation. ... AWS has been using automated reasoning for a decade now, said Cook, to achieve real-world tasks such as guaranteeing delivery of AWS services according to SLAs, or verifying network security. Translating a problem into terms that can be logically evaluated step by step, like the code loop, is all that's needed. ... The future of automated reasoning is melding it with generative AI, a synthesis referred to as neuro-symbolic. On the most basic level, it's possible to translate from natural-language terms into formulas that can be rigorously analyzed using logic by Zelkova. In that way, Gen AI can be a way for a non-technical individual to frame their goal in informal, natural language terms, and then have automated reasoning take that and implement it rigorously. The two disciplines can be combined to give non-logicians access to formal proofs, in other words.
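Cook's actual snippet isn't reproduced in this excerpt, but the general idea can be sketched with the open-source Z3 solver (a stand-in for illustration only, not AWS's Zelkova): rather than testing a handful of inputs, the solver searches for any counterexample to a claim and, if none exists, the claim is proven for every possible input.

```python
# Minimal sketch using the open-source Z3 solver (pip install z3-solver).
# This illustrates automated reasoning in general, not AWS's Zelkova.
from z3 import Ints, Solver, Implies, And, Not, unsat

x, limit = Ints("x limit")

# Claim about a simple loop guard: if 0 <= x < limit, then x never exceeds limit.
claim = Implies(And(x >= 0, x < limit), x <= limit)

s = Solver()
s.add(Not(claim))           # ask the solver to find any counterexample
if s.check() == unsat:      # no counterexample exists for any integer values
    print("Property holds for all possible inputs")
else:
    print("Counterexample:", s.model())
```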


Can Security Culture Be Taught? AWS Says Yes

Security culture is broadly defined as an organization's shared strategies, policies, and perspectives that serve as the foundation for its enterprise security program. For many years, infosec leaders have preached the importance of a strong culture and how it can not only strengthen the organization's security posture but also spur increases in productivity and profitability. Security culture has also been a focus in the aftermath of last year's scathing Cyber Safety Review Board (CSRB) report on Microsoft, which stemmed from an investigation into a high-profile breach of the software giant at the hands of the Chinese nation-state threat group Storm-0558. The CSRB found "Microsoft's security culture was inadequate and requires an overhaul," according to the April 2024 report. Specifically, the CSRB board members flagged an overall corporate culture at Microsoft that "deprioritized both enterprise security investments and rigorous risk management." ... But security culture goes beyond frameworks and executive structures; Herzog says leaders need to have the right philosophies and approaches to create an effective, productive environment for employees throughout the organization, not just those on the security team. ... A big reason why a security culture is hard to build, according to Herzog, is that many organizations are simply defining success incorrectly.


Data and AI Programs Are Effective When You Take Advantage of the Whole Ecosystem — The AIAG CDAO

What set the Wiki system apart was its built-in intelligence to personalize the experience based on user roles. Kashikar illustrated this with a use case: “If I’m a marketing analyst, when I click on anything like cross-sell, upsell, or new customer buying prediction, it understands I’m a marketing analyst, and it will take me to the respective system and provide me the insights that are available and accessible to my role.” This meant that marketing, engineering, or sales professionals could each have tailored access to the insights most relevant to them. Underlying the system were core principles that ensured the program’s effectiveness, says Kashikar. These include information accessibility and discoverability, as well as integration with business processes to make it actionable. ... AI has become a staple in business conversations today, and Kashikar sees this growing interest as a positive sign of progress. While this widespread awareness is a good starting point, he cautions that focusing solely on models and technologies only scratches the surface, or can provide a quick win. To move from quick wins to lasting impact, Kashikar believes that data leaders must take on the role of integrators. He says, “The data leaders need to consider themselves as facilitators or connectors where they have to take a look at the entire ecosystem and how they leverage this ecosystem to create the greatest business impact which is sustainable as well.”


Designing the Future of Data Center Physical Security

Security planning is heavily shaped by the location of a data center and its proximity to critical utilities, connectivity, and supporting infrastructure. “These factors can influence the reliability and resilience of data centers – which then in turn will shift security and response protocols to ensure continuous operations,” Saraiya says. In addition, rurality, crime rate, and political stability of the region will all influence the robustness of security architecture and protocols required. “Our thirst for information is not abating,” JLL’s Farney says. “We’re doubling the amount of new information created every four years. We need data centers to house this stuff. And that's not going away.” John Gallagher, vice president at Viakoo, said all modern data centers include perimeter security, access control, video surveillance, and intrusion detection. ... “The mega-campuses being built in remote locations require more intentionally developed security systems that build on what many edge and modular deployments utilize,” Dunton says. She says remote monitoring and AI-driven analytics allow centralized oversight while minimizing on-site personnel, alongside compact, hardened enclosures with integrated access control, surveillance, and environmental sensors. Emphasis is also placed on tamper detection, local alerting, and quick response escalation paths.


The legal minefield of hacking back

Attribution in cyberspace is incredibly complex because attackers use compromised systems, VPNs, and sophisticated obfuscation techniques. Even with high confidence, you could be wrong. Rather than operating in legal gray areas, companies need to operate under legally binding agreements that allow security researchers to test and secure systems within clearly defined parameters. That’s far more effective than trying to exploit ambiguities that may not actually exist when tested in court. ... Active defense, properly understood, involves measures taken within your own network perimeter, like enhanced monitoring, deception technologies like honeypots, and automated response systems that isolate threats. These are defensive because they operate entirely within systems you own and control. The moment you cross into someone else’s system, even to retrieve your own stolen data, you’ve entered offensive territory. It doesn’t matter if your intentions are defensive; the action itself is offensive. Retaliation goes even further. It’s about causing harm in response to an attack. This could be destroying the attacker’s infrastructure, exposing their operations, or launching counter-attacks. This is pure vigilantism and has no place in responsible cybersecurity. ... There’s also the escalation risk. That “innocent” infrastructure might belong to a government entity, a major corporation, or be considered critical infrastructure. 


What Is Data Trust and Why Does It Matter?

Data trust can be seen as data reliability in action. When you’re driving your car, you trust that its speedometer is reliable. A driver who believes his speedometer is inaccurate may alter the car’s speed to compensate unnecessarily. Similarly, analysts who lose faith in the accuracy of the data powering their models may attempt to tweak the models to adjust for anomalies that don’t exist. Maximizing the value of a company’s data is possible only if the people consuming the data trust the work done by the people developing their data products. ... Understanding the importance of data trust is the first step in implementing a program to build trust between the producers and consumers of the data products your company relies on increasingly for its success. Once you know the benefits and risks of making data trustworthy, the hard work of determining the best way to realize, measure, and maintain data trust begins. Among the goals of a data trust program are promoting the company’s privacy, security, and ethics policies, including consent management and assessing the risks of sharing data with third parties. The most crucial aspect of a data trust program is convincing knowledge workers that they can trust AI-based tools. A study released recently by Salesforce found that more than half of the global knowledge workers it surveyed don’t trust the data that’s used to train AI systems, and 56% find it difficult to extract the information they need from AI systems.


Six reasons successful leaders love questions

A modern way of saying this is that questions are data. Leaders who want to leverage this data should focus less on answering everyone’s questions themselves and more on making it easy for the people they are talking to—their employees—to access and help one another answer the questions that have the biggest impact on the company’s overall purpose. For example, part of my work with large companies is to help leaders map what questions their employees are asking one another and analyze the group dynamics in their organization. This gives leaders a way to identify critical problems and at the same time mobilize the people who need to solve them. ... The key to changing the culture of an organization is not to tell people what to do, but to make it easy for them to ask the questions that make them consider their current behavior. Only by making room for their colleagues, employees, and other stakeholders to ask their own questions and activate their own experience and insights can leaders ensure that people’s buy-in to new initiatives is an active choice, and thus something they feel committed to acting on. ... The decision to trust the process of asking and listening to other people’s questions is also a decision to think of questioning as part of a social process—something we do to better understand ourselves and the people surrounding us.

Daily Tech Digest - July 25, 2025


Quote for the day:

"Technology changes, but leadership is about clarity, courage, and creating momentum where none exists." -- Inspired by modern digital transformation principles


Why foundational defences against ransomware matter more than the AI threat

The 2025 Cyber Security Breaches Survey paints a concerning picture. According to the study, ransomware attacks doubled between 2024 and 2025 – a surge driven less by AI innovation than by deep-rooted economic, operational and structural changes within the cybercrime ecosystem. At the heart of this growth in attacks is the growing popularity of the ransomware-as-a-service (RaaS) business model. Groups like DragonForce or RansomHub sell ready-made ransomware toolkits to affiliates in exchange for a cut of the profits, enabling even low-skilled attackers to conduct disruptive campaigns. ... Breaches often stem from common, preventable issues such as poor credential hygiene or poorly configured systems – areas that often sit outside scheduled assessments. When assessments happen only once or twice a year, new gaps may go unnoticed for months, giving attackers ample opportunity. To keep up, organisations need faster, more continuous ways of validating defences. ... Most ransomware actors follow well-worn playbooks, making them frequent visitors to company networks but not necessarily sophisticated ones. That’s why effective ransomware prevention is not about deploying cutting-edge technologies at every turn – it’s about making sure the basics are consistently in place.


Subliminal learning: When AI models learn what you didn’t teach them

“Subliminal learning is a general phenomenon that presents an unexpected pitfall for AI development,” the researchers from Anthropic, Truthful AI, the Warsaw University of Technology, the Alignment Research Center, and UC Berkeley wrote in their paper. “Distillation could propagate unintended traits, even when developers try to prevent this via data filtering.” ... Misaligned models are AI systems that diverge from their original intent due to bias, flawed algorithms, data issues, insufficient oversight, or other factors, and produce incorrect, lewd, or harmful content. The researchers found that models trained on data generated by misaligned models can inherit that misalignment, even if the training data had been carefully filtered. They offered examples of harmful outputs when student models became misaligned like their teachers, noting, “these misaligned responses are egregious far beyond anything in the training data, including endorsing the elimination of humanity and recommending murder.” ... Today’s multi-billion parameter models are able to discern extremely complicated relationships between a dataset and the preferences associated with that data, even if it’s not immediately obvious to humans, he noted. This points to a need to look beyond semantic and direct data relationships when working with complex AI models.


Why people-first leadership wins in software development

It frequently involves pushing for unrealistic deadlines, with project schedules made without enough input from the development team about the true effort needed and possible obstacles. This results in ongoing crunch periods and mandatory overtime. ... Another indicator is neglecting signs of burnout and stress. Leaders may ignore or dismiss signals such as team members consistently working late, increased irritability, or a decline in productivity, instead pushing for more output without addressing the root causes. Poor work-life balance becomes commonplace, often without proper recognition or rewards for the extra effort. ... Beyond the code, innovation and creativity are stifled. When teams are constantly under pressure to just “ship it,” there’s little room for creative problem-solving, experimentation, or thinking outside the box. Innovation, often born from psychological safety and intellectual freedom, gets squashed, hindering your company’s ability to adapt to new trends and stay competitive. Finally, there’s damage to your company’s reputation. In the age of social media and employer review sites, news travels fast. ... It’s vital to invest in team growth and development. Provide opportunities for continuous learning, training, and skill enhancement. This not only boosts individual capabilities but also shows your commitment to their long-term career paths within your organization. This is a crucial retention strategy.


Achieving resilience in financial services through cloud elasticity and automation

In an era of heightened regulatory scrutiny, volatile markets, and growing cybersecurity threats, resilience isn’t just a nice-to-have—it’s a necessity. A lack of robust operational resilience can lead to regulatory penalties, damaged reputations, and crippling financial losses. In this context, cloud elasticity, automation, and cutting-edge security technologies are emerging as crucial tools for financial institutions to not only survive but thrive amidst these evolving pressures. ... Resilience ensures that financial institutions can maintain critical operations during crises, minimizing disruptions and maintaining service quality. Efficient operations are crucial for maintaining competitive advantage and customer satisfaction. ... Effective resilience strategies help institutions manage diverse risks, including cyber threats, system failures, and third-party vulnerabilities. The complexity of interconnected systems and the rapid pace of technological advancement add layers of risk that are difficult to manage. ... Financial institutions are particularly susceptible to risks such as system failures, cyberattacks, and third-party vulnerabilities. ... As financial institutions navigate a landscape marked by heightened risk, evolving regulations, and increasing customer expectations, operational resilience has become a defining imperative.


Digital attack surfaces expand as key exposures & risks double

Among OT systems, the average number of exposed ports per organisation rose by 35%, with Modbus (port 502) identified as the most commonly exposed, posing risks of unauthorised commands and potential shutdowns of key devices. The exposure of Unitronics port 20256 surged by 160%. The report cites cases where attackers, such as the group "CyberAv3ngers," targeted industrial control systems during conflicts, exploiting weak or default passwords. ... The number of vulnerabilities identified on public-facing assets more than doubled, rising from three per organisation in late 2024 to seven in early 2025. Critical vulnerabilities dating as far back as 2006 and 2008 still persist on unpatched systems, with proof-of-concept code readily available online, making exploitation accessible even to attackers with limited expertise. The report also references the continued threat posed by ransomware groups who exploit such weaknesses in internet-facing devices. ... Incidents involving exposed access keys, including cloud and API keys, doubled from late 2024 to early 2025. Exposed credentials can enable threat actors to enter environments as legitimate users, bypassing perimeter defenses. The report highlights that most exposures result from accidental code pushes to public repositories or leaks on criminal forums.
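As a practical illustration of the exposure figures above, the reachability of these OT ports can be spot-checked with nothing more than a TCP connect test. The sketch below is not from the report, uses only Python's standard socket module, and the address shown is hypothetical; such checks should only ever be run against assets you are authorized to test.

```python
# Hedged sketch: spot-check whether a host exposes common OT ports to a network.
# Only scan systems you own or are explicitly authorized to test.
import socket

PORTS_OF_CONCERN = {502: "Modbus/TCP", 20256: "Unitronics"}

def check_exposure(host: str, timeout: float = 2.0) -> None:
    for port, name in PORTS_OF_CONCERN.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            is_open = s.connect_ex((host, port)) == 0   # 0 means the TCP connect succeeded
            print(f"{host}:{port} ({name}) -> {'OPEN' if is_open else 'closed/filtered'}")

# check_exposure("203.0.113.10")   # hypothetical, documentation-range address
```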


How Elicitation in MCP Brings Human-in-the-Loop to AI Tools

Elicitation represents more than an incremental protocol update. It marks a shift toward collaborative AI workflows, where the system and human co-discover missing context rather than expecting all details upfront. Python developers building MCP tools can now focus on core logic and delegate parameter gathering to the protocol itself, allowing for a more streamlined approach. Clients declare an elicitation capability during initialization, so servers know they may elicit input at any time. That standardized interchange liberates developers from generating custom UIs or creating ad hoc prompts, ensuring coherent behaviour across diverse MCP clients. ... Elicitation transforms human-in-the-loop (HITL) workflows from an afterthought to a core capability. Traditional AI systems often struggle with scenarios that require human judgment, approval, or additional context. Developers had to build custom solutions for each case, leading to inconsistent experiences and significant development overhead. With elicitation, HITL patterns become natural extensions of tool functionality. A database migration tool can request confirmation before making irreversible changes. A document generation system can gather style preferences and content requirements through guided interactions. An incident response tool can collect severity assessments and stakeholder information as part of its workflow.
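To make the flow concrete, the sketch below shows the shape of one elicitation round trip as JSON-RPC payloads written as Python dicts. The method and field names follow the MCP elicitation design as publicly described and should be read as illustrative assumptions, not as the normative spec.

```python
# Conceptual sketch of an elicitation round trip (field names are assumptions).

# Server -> client: the database migration tool pauses and asks the human for
# confirmation plus a missing parameter before doing anything irreversible.
elicitation_request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "elicitation/create",
    "params": {
        "message": "This migration is irreversible. Confirm and choose a backup window.",
        "requestedSchema": {
            "type": "object",
            "properties": {
                "confirm": {"type": "boolean"},
                "backup_window": {"type": "string", "enum": ["now", "tonight", "weekend"]},
            },
            "required": ["confirm"],
        },
    },
}

# Client -> server: the human accepted and supplied structured values, so the
# tool can resume with the gathered parameters instead of failing or guessing.
elicitation_response = {
    "jsonrpc": "2.0",
    "id": 42,
    "result": {"action": "accept", "content": {"confirm": True, "backup_window": "tonight"}},
}
```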


Cognizant Agents Gave Hackers Passwords, Clorox Says in Lawsuit

“Cognizant was not duped by any elaborate ploy or sophisticated hacking techniques,” the company says in its partially redacted 19-page complaint. “The cybercriminal just called the Cognizant Service Desk, asked for credentials to access Clorox’s network, and Cognizant handed the credentials right over. Cognizant is on tape handing over the keys to Clorox’s corporate network to the cybercriminal – no authentication questions asked.” ... The threat actors made multiple calls to the Cognizant help desk, essentially asking for new passwords and getting them without any effort to verify them, Clorox wrote. They then used those new credentials to gain access to the corporate network, launching a “debilitating” attack that “paralyzed Clorox’s corporate network and crippled business operations. And to make matters worse, when Clorox called on Cognizant to provide incident response and disaster recovery support services, Cognizant botched its response and compounded the damage it had already caused.” In a statement to media outlets, a Cognizant spokesperson said it was “shocking that a corporation the size of Clorox had such an inept internal cybersecurity system to mitigate this attack.” While Clorox is placing the blame on Cognizant, “the reality is that Clorox hired Cognizant for a narrow scope of help desk services which Cognizant reasonably performed. Cognizant did not manage cybersecurity for Clorox,” the spokesperson said.


Digital sovereignty becomes a matter of resilience for Europe

Open-source and decentralized technologies are essential to advancing Europe’s strategic autonomy. Across cybersecurity, communications, and foundational AI, we’re seeing growing support for open-source infrastructure, now treated with the same strategic importance once reserved for energy, water and transportation. The long-term goal is becoming clear: not to sever global ties, but to reduce dependencies by building credible, European-owned alternatives to foreign-dominated systems. Open-source is a cornerstone of this effort. It empowers European developers and companies to innovate quickly and transparently, with full visibility and control, essential for trust and sovereignty. Decentralized systems complement this by increasing resilience against cyber threats, monopolistic practices and commercial overreach by “big tech”. While public investment is important, what Europe needs most is a more “risk-on” tech environment, one that rewards ambition, accelerated growth and enables European players to scale and compete globally. Strategic autonomy won’t be achieved by funding alone, but by creating the right innovation and investment climate for open technologies to thrive. Many sovereign platforms emphasize end-to-end encryption, data residency, and open standards. Are these enough to ensure trust, or is more needed to truly protect digital independence?



Building better platforms with continuous discovery

Platform teams are often judged by stability, not creativity. Balancing discovery with uptime and reliability takes effort. So does breaking out of the “tickets and delivery” cycle to explore problems upstream. But the teams that manage it? They build platforms that people want to use, not just have to use. Start by blocking time for discovery in your sprint planning, measuring both adoption and friction metrics, and most importantly, talking to your users periodically rather than waiting for them to come to you with problems. Cultural shifts like this take time because you're not just changing the process; you're changing what people believe is acceptable or expected. That kind of change doesn't happen just because leadership says it should, or because a manager adds a new agenda to planning meetings. It sticks when ICs feel inspired and safe enough to work differently and when managers back that up with support and consistency. Sometimes a C-suite champion helps set the tone, but day-to-day, it's middle managers and senior ICs who do the slow, steady work of normalizing new behavior. You need repeated proof that it's okay to pause and ask why, to explore, to admit uncertainty. Without that psychological safety, people just go back to what they know: deliverables and deadlines. 


AI-enabled software development: Risk of skill erosion or catalyst for growth?

We need to reframe AI not as a rival, but as a tool—one that has its own pros and cons and can extend human capability, not devalue it. This shift in perspective opens the door to a broader understanding of what it means to be a skilled engineer today. Using AI doesn’t eliminate the need for expertise—it changes the nature of that expertise. Classical programming, once central to the developer’s identity, becomes one part of a larger repertoire. In its place emerge new competencies: critical evaluation, architectural reasoning, prompt literacy, source skepticism, interpretative judgment. These are not hard skills, but meta-cognitive abilities—skills that require us to think about how we think. We’re not losing cognitive effort—we’re relocating it. This transformation mirrors earlier technological shifts. ... Some of the early adopters of AI enablement are already looking ahead—not just at the savings from replacing employees with AI, but at the additional gains those savings might unlock. With strategic investment and redesigned expectations, AI can become a growth driver—not just a cost-cutting tool. But upskilling alone isn’t enough. As organizations embed AI deeper into the development workflow, they must also confront the technical risks that come with automation. The promise of increased productivity can be undermined if these tools are applied without adequate context, oversight, or infrastructure.

Daily Tech Digest - June 13, 2025


Quote for the day:

"Never stop trying; Never stop believing; Never give up; Your day will come." -- Mandy Hale




Hacking the Hackers: When Bad Guys Let Their Guard Down

"For defenders, these leaks are treasure troves," says Ensar Seker, chief information security officer (CISO) at threat intelligence cybersecurity company SOCRadar. "When analyzed correctly, they offer unprecedented visibility into actor infrastructure, infection patterns, affiliate hierarchies, and even monetization tactics." The data can help threat intel teams enrich indicators of compromise (IoCs), map infrastructure faster, preempt attacks, and potentially inform law enforcement disruption efforts, he says. "Organizations should track these OpSec failures through their [cyber threat intelligence] programs," Seker advises. "When contextualized correctly, they're not just passive observations; they become active defensive levers, helping defenders move upstream in the kill chain and apply pressure directly on adversarial capabilities." External leaks — like the DanaBot leak — often ironically are rooted in the same causes that threat actors abuse to break into victim networks: misconfigurations, unpatched systems, and improper segmentation that can be exploited to gain unauthorized access. Open directories, exposed credentials, unsecured management panels, unencrypted APIs, and accidental data exposure via hosting providers are all other opportunities for external discovery and exploration, Baker says. 


Prioritising cyber resilience in a cloud-first world

The explosion of data across their multi-cloud, hybrid and on-premises environments is creating a cause for concern among global CIOs, with 86 per cent saying it is beyond the ability of humans to manage. Aware that the growing complexity of their multi-provider cloud environments exposes their critical data and puts their organisation’s business resilience at risk, these leaders need to be confident they can restore their sensitive data at speed. They also need certainty when it comes to rebuilding their cloud environment and recovering their distributed cloud applications. To achieve these goals and minimise the risk of contamination resulting from ransomware, CIOs need to ensure their organisations implement a comprehensive cyber recovery plan that prioritises the recovery of both clean data and applications and mitigates downtime. ... Data recovery is just one aspect of cyber resilience for today’s cloud-powered enterprises. Rebuilding applications is an often overlooked task that can prove a time-consuming and highly complex proposition when undertaken manually. Having the capability to recover what matters the most quickly should be a tried and tested component of every cloud-first strategy. Fortunately, today’s advanced security platforms now feature automation and AI options that can facilitate this process in hours or minutes rather than days or weeks.


Mastering Internal Influence, a Former CIO’s Perspective

Establishing and building an effective relationship with your boss is one of the most important hard skills in business. You need to consciously work with your supervisor in order to get the best results for them, your organization, and yourself. In my experience, your boss will appreciate you initiating a conversation regarding what is important to them and how you can help them be more successful. Some managers are good at communicating their expectations, but some are not. It is your job to seek to understand what your boss’s expectations are. ... You must start with the assumption that everyone reporting to you is working in good faith toward the same goals. You need to demonstrate a trusting, humble, and honest approach to doing business. As the boss, you need to be a mentor, coach, visionary, cheerleader, confidant, guide, sage, trusted partner, and perspective keeper. It also helps to have a sense of humor. It is first vital to articulate the organization’s values, set expectations, and establish mutual accountability. Then you can focus on creating a safe work ecosystem. ... You’ll begin to change the culture by establishing the values of the organization. This is an important step to ensure that everyone is on the same page and working toward the same goals. Then, you’ll need to make sure they understand what is expected of them.


The Rise of BYOC: How Data Sovereignty Is Reshaping Enterprise Cloud Strategy

BYOC allows customers to run SaaS applications using their own cloud infrastructure and resources rather than relying on a third-party vendor’s infrastructure. This framework transforms how enterprises consume cloud services by inverting the traditional vendor-customer relationship. Rather than exporting sensitive information to vendor-controlled environments, organizations maintain data custody while still receiving fully-managed services. This approach addresses a fundamental challenge in modern enterprise architecture: how to maintain operational efficiency while also ensuring complete data control and regulatory compliance. ... BYOC adoption is driven primarily by increasing regulatory complexity around data sovereignty. The article Cloud Computing Trends in 2025 notes that “data sovereignty concerns, particularly the location and legal jurisdiction of data storage, are prompting cloud providers to invest in localized data centers.” Organizations must navigate an increasingly fragmented regulatory landscape while maintaining operational consistency. And when regulations vary country by country, having data in multiple third-party networks can dramatically compound the problem of knowing which data is subject to a specific country’s regulations.


AI Will Steal Developer Jobs (But Not How You Think)

“There’s a lot of anxiety about AI and software creation in general, not necessarily just frontend or backend, but people are rightfully trying to understand what does this mean for my career,” Robinson told The New Stack. “If the current rate of improvement continues, what will that look like in 1, 2, 3, 4 years? It could be pretty significant. So it has a lot of people stepping back and evaluating what’s important to them in their career, where they want to focus.” Armando Franco also sees anxiety around AI. Franco is the director of technology modernization at TEKsystems Global Services, which employs more than 3,000 people. It’s part of TEKsystems, a large global IT services management firm that employs 80,000 IT professionals. ... This isn’t the first time in history that people have fretted about new technologies, pointed out Shreeya Deshpande, a senior analyst specializing in data and AI with the Everest Group, a global research firm. “Fears that AI will replace developers mirror historical anxieties seen during past technology shifts — and, as history shows, these fears are often misplaced,” Deshpande said in a written response to The New Stack. “AI will increasingly automate repetitive development tasks, but the developer’s role will evolve rather than disappear — shifting toward activities like AI orchestration, system-level thinking, and embedding governance and security frameworks into AI-driven workflows.”


Unpacking the security complexity of no-code development platforms

Applications generated by no-code platforms are first and foremost applications. Therefore, their exploitability is first and foremost attributed to vulnerabilities introduced by their developers. To make things worse for no-code applications, they are also jeopardized by misconfigurations of the development and deployment environments. ... Most platforms provide controls at various levels that allow white-listing / blacklisting of connectors. This makes it possible to put guardrails around the use of “standard integrations”. Keeping tabs on these lists in a dynamic environment with a large number of developers is a big challenge. Shadow APIs are even more difficult to track and manage, particularly when some of the endpoints are determined only at runtime. Most platforms do not provide granular control over the use of shadow APIs but do provide a kill switch for their use entirely. Mechanisms exist in all the platforms that allow for secure development of applications and automations if used correctly. These mechanisms that can help prevent injection vulnerabilities, traversal vulnerabilities and other types of mistakes have different levels of complexity in terms of their use by developers. Unvetted data egress is also a big problem in these environments just as it is in general enterprise environments.
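As a rough illustration of the connector guardrails described above, a platform-agnostic allow-list check might look like the following sketch. The connector names and automation format are hypothetical; real platforms expose this through their own admin policies.

```python
# Hypothetical sketch of a connector allow-list guardrail for no-code automations.
ALLOWED_CONNECTORS = {"sharepoint", "salesforce", "internal-erp"}

def blocked_connectors(automation: dict) -> list[str]:
    """Return connectors used by an automation that are not on the allow-list."""
    used = {step["connector"] for step in automation.get("steps", [])}
    return sorted(used - ALLOWED_CONNECTORS)

# Example: a citizen-developer flow that calls an unvetted generic HTTP connector.
flow = {"steps": [{"connector": "sharepoint"}, {"connector": "http-generic"}]}
violations = blocked_connectors(flow)
if violations:
    print("Publishing blocked, unapproved connectors:", violations)   # ['http-generic']
```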


Modernizing Financial Systems: The Critical Role of Cloud-Based Microservices Optimization

Cloud-based microservices provide many benefits to financial institutions across operational efficiency, security, and technology modernization. Economically, these architectures enable faster transaction processing by reducing latency and optimizing resource allocation. They also lower infrastructure expenses by replacing monolithic legacy systems with modular, scalable services that are easier to maintain and operate. Furthermore, the shift to cloud technologies increases demand for specialized roles in cloud operations and cybersecurity. In security operations, microservices support zero-trust architectures and data encryption to reduce the risk of fraud and unauthorized access. Cloud platforms also enhance resilience by offering built-in redundancy and disaster recovery capabilities, which help ensure continuous service and maintain data integrity in the event of outages or cyber incidents. ... To build secure and scalable financial microservices, a few key technology stacks are needed. They include Docker and Kubernetes containerization for managing multiple microservices, and cloud functions for serverless computing, which will be used to run calculations on demand. API gateways will ensure secure communication between services, and Kafka will handle real-time data monitoring and streaming.
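For the real-time monitoring piece of that stack, a minimal consumer might look like the sketch below, written with the kafka-python client. The topic name, broker address, and fraud threshold are assumptions for illustration, not a reference architecture.

```python
# Minimal sketch of Kafka-based real-time transaction monitoring (kafka-python).
# pip install kafka-python ; topic, broker and threshold are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "transactions",                               # assumed topic name
    bootstrap_servers=["kafka:9092"],             # assumed broker address
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="latest",
)

for record in consumer:
    txn = record.value
    # Flag unusually large transfers for downstream fraud review.
    if txn.get("amount", 0) > 10_000:
        print("Review needed:", txn.get("id"), txn.get("amount"))
```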


Ultra Ethernet Consortium publishes 1.0 specification, readies Ethernet for HPC, AI

Among the key areas of innovation in the UEC 1.0 specification is a new mechanism for network congestion control, which is critical for AI workloads. Metz explained that the UEC’s approach to congestion control does not rely on a lossless network as has traditionally been the case. It also introduces a new mode of operation where the receiver is able to limit sender transmissions as opposed to being passive. “This is critical for AI workloads as these primitives enable the construction of larger networks with better efficiency,” he said. “It’s a crucial element of reducing training and inference time.” ... Metz said that four workgroups got started after the main 1.0 work began, each with its own initiatives that solidify and simplify deploying UEC. These workgroups include: storage, management, compliance and performance. He noted that all of these workgroups have projects being developed to strengthen ease of use, deliver efficiency improvements in the next stages, and simplify provisioning. UEC is also working on educational materials to help inform networking administrators on UEC technology and concepts. The group is also working with industry ecosystem partners. “We have projects with OCP, NVM Express, SNIA, and more – with many more on the way to work on each layer – from the physical to the software,” Metz said.


From Tools to Teammates: Are AI Agents the New Marketers?

The key difference is that traditional models were built as generic tools designed to perform a wide range of tasks. On the other hand, AI agents are designed to meet businesses’ specific needs. They can train a single agent or a group of them on their data to handle tasks unique to the business. This translates to better outcomes, improved performance, and stronger business impact. Another huge advantage of using AI agents is that they help unify marketing efforts and create a cohesive marketing ecosystem. Another major shift that comes with AI agent implementation is something called AIAO (AI Agent Optimisation). This is highly likely to become the next big alternative to traditional SEO. Now, marketers optimise content around specific keywords like “best project management software.” But with AIAO, that’s changing. AI agents are built to understand and respond to much more complex, conversational queries, like “What’s the best project management tool with timeline boards that works for marketing teams?” It’s no longer about integrating the right phrases into your content. It’s about ensuring your information is relevant, clear, and easy for AI agents to understand and process. Semantic search is going to take the lead.


Under the Cloud: How Network Design Drives Everything

Let’s be clear, the network isn’t an accessory; it’s the key ingredient that determines how well your cloud performs, how secure your data is, how quickly you can recover from a disaster, and how easily you scale across borders or platforms. Think of it as the highway system beneath your business. Sleek, fast roads make for a smooth ride, while congested or patchy ones will leave you stuck in traffic. ... It’s tempting to get caught up in the flashier parts of cloud infrastructure, like server specs and cutting-edge tools, but none of it works well without a strong network underneath. Here’s the truth. Your network is doing the quiet, behind-the-scenes heavy lifting. It’s what keeps your games lag-free, your financial systems always on, and your hybrid workloads running smoothly across platforms even if it doesn’t get all the attention. You should think of your network as the glue that holds it all together – from your cloud services to your bare metal setup. It is what makes it possible for AI models to work seamlessly across regions, for backups to run smoothly in the background, and for your users to enjoy fast, always-on experiences without ever thinking about what’s happening behind the scenes. ... A reliable, secure and performant network is nothing if it can’t be managed the right way. Having the right architecture, tools and knowledge to support it is key for success.

Daily Tech Digest - May 03, 2025


Quote for the day:

"It is during our darkest moments that we must focus to see the light." -- Aristotle Onassis



Why agentic AI is the next wave of innovation

AI agents have become integral to modern enterprises, not just enhancing productivity and efficiency, but unlocking new levels of value through intelligent decision-making and personalized experiences. The latest trends indicate a significant shift towards proactive AI agents that anticipate user needs and act autonomously. These agents are increasingly equipped with hyper-personalization capabilities, tailoring interactions based on individual preferences and behaviors. ... According to NVIDIA, when Azure AI Agent Service is paired with NVIDIA AgentIQ, an open-source toolkit, developers can now profile and optimize teams of AI agents in real time to reduce latency, improve accuracy, and drive down compute costs. ... “The launch of NVIDIA NIM microservices in Azure AI Foundry offers a secure and efficient way for Epic to deploy open-source generative AI models that improve patient care, boost clinician and operational efficiency, and uncover new insights to drive medical innovation,” says Drew McCombs, vice president, cloud and analytics at Epic. “In collaboration with UW Health and UC San Diego Health, we’re also researching methods to evaluate clinical summaries with these advanced models. Together, we’re using the latest AI technology in ways that truly improve the lives of clinicians and patients.”


Businesses intensify efforts to secure data in cloud computing

Building a robust security strategy begins with understanding the delineation between the customer's and the provider's responsibilities. Customers are typically charged with securing network controls, identity and access management, data, and applications within the cloud, while the CSP maintains the core infrastructure. The specifics of these responsibilities depend on the service model and provider in question. The importance of effective cloud security has grown as more organisations shift away from traditional on-premises infrastructure. This shift brings new regulatory expectations relating to data governance and compliance. Hybrid and multicloud environments offer businesses unprecedented flexibility, but also introduce complexity, increasing the challenge of preventing unauthorised access. ... Attackers are adjusting their tactics accordingly, viewing cloud environments as potentially vulnerable targets. A well-considered cloud security plan is regarded as essential for reducing breaches or damage, improving compliance, and enhancing customer trust, even if it cannot eliminate all risks. According to the statement, "A well-thought-out cloud security plan can significantly reduce the likelihood of breaches or damage, enhance compliance, and increase customer trust—even though it can never completely prevent attacks and vulnerabilities."


Safeguarding the Foundations of Enterprise GenAI

Implementing strong identity security measures is essential to mitigate risks and protect the integrity of GenAI applications. Many identities have high levels of access to critical infrastructure and, if compromised, could provide attackers with multiple entry points. It is important to emphasise that privileged users include not just IT and cloud teams but also business users, data scientists, developers and DevOps engineers. A compromised developer identity, for instance, could grant access to sensitive code, cloud functions, and enterprise data. Additionally, the GenAI backbone relies heavily on machine identities to manage resources and enforce security. As machine identities often outnumber human ones, securing them is crucial. Adopting a Zero Trust approach is vital, extending security controls beyond basic authentication and role-based access to minimise potential attack surfaces. To enhance identity security across all types of identities, several key controls should be implemented. Enforcing strong adaptive multi-factor authentication (MFA) for all user access is essential to prevent unauthorised entry. Securing access to credentials, keys, certificates, and secrets—whether used by humans, backend applications, or scripts—requires auditing their use, rotating them regularly, and ensuring that API keys or tokens that cannot be automatically rotated are not permanently assigned.
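One small, concrete piece of the credential-hygiene controls described above is auditing key age so that long-lived secrets actually get rotated. The sketch below is an AWS-specific example using boto3; the 90-day threshold is an assumption, and equivalent checks exist for other secret stores.

```python
# Hedged sketch: flag active IAM access keys older than a rotation threshold.
# Requires boto3 and credentials allowed to call iam:ListUsers / iam:ListAccessKeys.
from datetime import datetime, timezone
import boto3

MAX_AGE_DAYS = 90      # assumed rotation policy
iam = boto3.client("iam")

for user in iam.list_users()["Users"]:
    keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
    for key in keys:
        age_days = (datetime.now(timezone.utc) - key["CreateDate"]).days
        if key["Status"] == "Active" and age_days > MAX_AGE_DAYS:
            print(f"Rotate {key['AccessKeyId']} for {user['UserName']} ({age_days} days old)")
```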


The new frontier of API governance: Ensuring alignment, security, and efficiency through decentralization

To effectively govern APIs in a decentralized landscape, organizations must embrace new principles that foster collaboration, flexibility and shared responsibility. Optimized API governance is not about abandoning control, but rather about distributing it strategically while still maintaining overarching standards and ensuring critical aspects such as security, compliance and quality. This includes granting development teams autonomy to design, develop and manage their APIs within clearly defined boundaries and guidelines. This encourages innovation while fostering ownership and allows each team to optimize their APIs to their specific needs. This can be further established by a shared responsibility model amongst teams where they are accountable for adhering to governance policies while a central governing body provides the overarching framework, guidelines and support. This operating model can be further supported by cultivating a culture of collaboration and communication between central governance teams and development teams. The central governance team can have a representative from each development team and clear channels for feedback, shared documentation and joint problem-solving scenarios. Implementing governance policies as code and leveraging tools and automation makes it easier to enforce standards consistently and efficiently across the decentralized environment.
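A small example of what "governance policies as code" can mean in practice is a CI check that fails the pipeline when an API definition violates a standard. The sketch below enforces one assumed rule (every OpenAPI operation must declare authentication); the rule and file name are illustrative, not any particular organization's policy.

```python
# Hypothetical policy-as-code check: every OpenAPI operation must declare security.
# Usage: python check_api_policy.py openapi.yaml   (pip install pyyaml)
import sys
import yaml

HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

def unsecured_operations(spec: dict) -> list[str]:
    default_security = spec.get("security")
    failures = []
    for path, methods in spec.get("paths", {}).items():
        for method, operation in methods.items():
            if method.lower() not in HTTP_METHODS:
                continue   # skip non-operation keys such as "parameters"
            if not (operation.get("security") or default_security):
                failures.append(f"{method.upper()} {path}")
    return failures

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        problems = unsecured_operations(yaml.safe_load(f))
    if problems:
        print("Operations without authentication:", *problems, sep="\n  ")
        sys.exit(1)   # fail the build so the standard is enforced automatically
```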


Banking on innovation: Engineering excellence in regulated financial services

While financial services regulations aren’t likely to get simpler, banks are finding ways to innovate without compromising security. "We’re seeing a culture change with our security office and regulators," explains Lanham. "As cloud tech, AI, and LLMs arrive, our engineers and security colleagues have to upskill." Gartner's 2025 predictions say GenAI is shifting data security to protect unstructured data. Rather than cybersecurity taking a gatekeeper role, security by design is built into development processes. "Instead of saying “no”, the culture is, how can we be more confident in saying “yes”?" notes Lanham. "We're seeing a big change in our security posture, while keeping our customers' safety at the forefront." As financial organizations carefully tread a path through digital and AI transformation, the most successful will balance innovation with compliance, speed with security, and standardization with flexibility. Engineering excellence in financial services needs leaders who can set a clear vision while balancing tech potential with regulations. The path won’t be simple, but by investing in simplification, standardization and a shared knowledge and security culture, financial services engineering teams can drive positive change for millions of banking customers.


‘Data security has become a trust issue, not just a tech issue’

Data is very messy and data ecosystems are very complex. Every organisation we speak to has data across multiple different types of databases and data stores for different use cases. As an industry, we need to acknowledge the fact that no organisation has an entirely homogeneous data stack, so we need to support and plug into a wide variety of data ecosystems, like Databricks, Google and Amazon, regardless of the tooling used for data analytics, for integration, for quality, for observability, for lineage and the like. ... Cloud adoption is causing organisations to rethink their traditional approach to data. Most use cloud data services to provide a shortcut to seamless data integration, efficient orchestration, accelerated data quality and effective governance. In reality, most organisations will need to adopt a hybrid approach to address their entire data landscape, which typically spans a wide variety of sources that span both cloud and on premises. ... Data security has become a trust issue, not just a tech issue. With AI, hybrid cloud and complex supply chains, the attack surface is massive. We need to design with security in mind from day one – think secure coding, data-level controls and zero-trust principles. For AI, governance is critical, and it too needs to be designed in and not an afterthought. That means tracking where data comes from, how models are trained, and ensuring transparency and fairness.


Secure by Design vs. DevSecOps: Same Security Goal, Different Paths

Although the "secure by design" initiative offers limited guidance on how to make an application secure by default, it comes closer to being a distinct set of practices than DevSecOps. The latter is more of a high-level philosophy that organizations must interpret on their own; in contrast, secure by design advocates specific practices, such as selecting software architectures that mitigate the risk of data leakage and avoiding memory management practices that increase the chances of the execution of malicious code by attackers. ... Whereas DevSecOps focuses on all stages of the software development life cycle, the secure by design concept is geared mainly toward software design. It deals less with securing applications during and after deployment. Perhaps this makes sense because so long as you start with a secure design, you need to worry less about risks once your application is fully developed — although given that there's no way to guarantee an app can't be hacked, DevSecOps' holistic approach to security is arguably the more responsible one. ... Even if you conclude that secure by design and DevSecOps mean basically the same thing, one notable difference is that the government sector has largely driven the secure by design initiative, while DevSecOps is more popular within private industry.


Immutable by Design: Reinventing Business Continuity and Disaster Recovery

Immutable backups create tamper-proof copies of data, protecting it from cyber threats, accidental deletion, and corruption. This guarantees that critical data can be quickly restored, allowing businesses to recover swiftly from disruptions. Immutable storage provides data copies that cannot be manipulated or altered, ensuring data remains secure and can quickly be recovered from an attack. In addition to immutable backup storage, response plans must be continually tested and updated to combat the evolving threat landscape and adapt to growing business needs. The ultimate test of a response plan ensures data can be quickly and easily restored or failed over, depending on the event: activating a second site in the case of a natural disaster, or recovering systems without making any ransomware payments in the case of an attack. This testing involves validating the reliability of backup systems, recovery procedures, and the overall disaster recovery plan to minimize downtime and ensure business continuity. ... It can be challenging for IT teams trying to determine the perfect fit for their ecosystem, as many storage vendors claim to provide immutable storage but are missing key features. As a rule of thumb, if "immutable" data can be overwritten by a backup or storage admin, a vendor, or an attacker, then it is not a truly immutable storage solution.
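As one concrete example of the "cannot be overwritten by an admin" test, object storage with a compliance-mode lock passes it: once written, the object cannot be altered or deleted until the retention date. The sketch below uses boto3 and S3 Object Lock; bucket and key names are hypothetical, the bucket must have been created with Object Lock enabled, and depending on the SDK version a content checksum may also be required.

```python
# Hedged sketch: write a backup object that cannot be modified or deleted
# (even by an admin) until the retention date expires. Names are hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
with open("full.dump", "rb") as body:
    s3.put_object(
        Bucket="backup-vault-example",     # bucket created with Object Lock enabled
        Key="db/2025-04-25/full.dump",
        Body=body,
        ObjectLockMode="COMPLIANCE",       # retention cannot be shortened or removed
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```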


Neurohacks to outsmart stress and make better cybersecurity decisions

In cybersecurity, where clarity and composure are essential, particularly during a data breach or threat response, these changes can have high-stakes consequences. “The longer your brain is stuck in this high-stress state, the more of those changes you will start to see and burnout is just an extreme case of chronic stress on the brain,” Landowski says. According to her, the tipping point between healthy stress and damaging chronic stress usually comes after about eight to 12 weeks, but it varies between individuals. “If you know about some of the things you can do to reduce the impact of stress on your body, you can potentially last a lot longer before you see any effects, whereas if you’re less resilient, or if your genes are more susceptible to stress, then it could be less.” ... working in cybersecurity, particularly as a hacker, is often about understanding how people think and then spotting the gaps. That same shift in understanding — tuning into how the brain works under different conditions — can help cybersecurity leaders make better decisions and build more resilient teams. As Cerf highlights, he works with organizations to identify these optimal operating states, testing how individuals and entire teams respond to stress and when their brains are most effective. “The brain is not just a solid thing,” Cerf says.


Beyond Safe Models: Why AI Governance Must Tackle Unsafe Ecosystems

Despite the evident risks of unsafe deployment ecosystems, the prevailing approach to AI governance still heavily emphasizes pre-deployment interventions—such as alignment research, interpretability tools, and red teaming—aimed at ensuring that the model itself is technically sound. Governance initiatives like the EU AI Act, while vital, primarily place obligations on providers and developers to ensure compliance through documentation, transparency, and risk management plans. However, the governance of what happens after deployment when these models enter institutions with their own incentives, infrastructures, and oversight receives comparatively less attention. For example, while the EU AI Act introduces post-market monitoring and deployer obligations for high-risk AI systems, these provisions remain limited in scope. Monitoring primarily focuses on technical compliance and performance, with little attention to broader institutional, social, or systemic impacts. Deployer responsibilities are only weakly integrated into ongoing risk governance and focus primarily on procedural requirements—such as record-keeping and ensuring human oversight—rather than assessing whether the deploying institution has the capacity, incentives, or safeguards to use the system responsibly. 

Daily Tech Digest - April 25, 2025


Quote for the day:

"Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it." -- Johann Wolfgang von Goethe


Revolutionizing Application Security: The Plea for Unified Platforms

“Shift left” is a practice that focuses on addressing security risks earlier in the development cycle, before deployment. While effective in theory, this approach has proven problematic in practice because developers and security teams have conflicting priorities. ... Cloud native applications are dynamic: they are constantly deployed, updated, and scaled, so robust real-time protection measures are necessary. Every time an application is updated or deployed, new code, configurations, or dependencies appear, any of which can introduce new vulnerabilities. The problem is that real-time cloud security is difficult to implement with a traditional, compartmentalized approach. Organizations need real-time security measures that provide continuous monitoring across the entire infrastructure, detect threats as they emerge, and respond to them automatically. As Tager explained, implementing real-time prevention is necessary “to stay ahead of the pace of attackers.” ... Cloud native applications tend to rely heavily on open source libraries and third-party components. In 2021, Log4j’s Log4Shell vulnerability demonstrated how a single compromised component could affect millions of devices worldwide, exposing countless enterprises to risk. Effective application security now extends far beyond the traditional scope of code scanning and must reflect the modern engineering environment.
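To make the third-party-component point concrete, here is a hypothetical sketch of the kind of dependency check a unified platform runs continuously: comparing an application's pinned dependencies against an advisory list. The package names, versions, and advisory entries are invented for illustration; real scanners consume feeds such as OSV or the NVD and work from a full software bill of materials.

```python
# A minimal sketch of dependency scanning: flag pinned packages that match
# a known-vulnerable list. The entries below are invented for illustration;
# real tooling consumes advisory feeds (e.g. OSV, NVD) and full SBOMs.
KNOWN_VULNERABLE = {
    ("examplelib", "1.0.3"),    # hypothetical vulnerable release
    ("legacyparser", "0.9.1"),  # hypothetical vulnerable release
}

def parse_pins(lines):
    """Yield (name, version) pairs from 'name==version' style requirement pins."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        yield name.lower(), version

def vulnerable_pins(lines):
    return [pin for pin in parse_pins(lines) if pin in KNOWN_VULNERABLE]

if __name__ == "__main__":
    requirements = ["requests==2.32.0", "examplelib==1.0.3"]
    for name, version in vulnerable_pins(requirements):
        print(f"WARNING: {name}=={version} has a known vulnerability")
```

The same check needs to run on every deployment and update, not just at commit time, which is precisely why a one-off shift-left scan is not enough for cloud native release cadences.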


AI-Powered Polymorphic Phishing Is Changing the Threat Landscape

Polymorphic phishing is an advanced form of phishing campaign that randomizes the components of emails, such as their content, subject lines, and senders’ display names, to create many almost identical emails that differ only in minor details. Combined with AI, polymorphic phishing emails have become highly sophisticated, producing more personalized and evasive messages that result in higher attack success rates. ... Traditional detection systems group phishing emails together based on commonalities, such as payloads or senders’ domain names, to improve detection efficacy. The use of AI has allowed cybercriminals to run polymorphic phishing campaigns with subtle but deceptive variations that evade security measures like blocklists, static signatures, secure email gateways (SEGs), and native security tools. For example, attackers modify the subject line by adding extra characters and symbols, or they alter the length and pattern of the text. ... The standard approach of grouping individual attacks into campaigns to improve detection efficacy will become irrelevant by 2027. Organizations need to find alternative measures for detecting polymorphic phishing campaigns that don’t rely on blocklists and that can identify the most advanced attacks.
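As a toy illustration of why this breaks signature-based grouping, the sketch below shows how a couple of appended symbols or a spacing change produces a completely different hash, while a fuzzy similarity measure still scores the variants as near-duplicates of one campaign. The subject lines are invented.

```python
# A toy sketch: exact hashes treat lightly randomized subjects as unrelated,
# while a fuzzy similarity measure still groups them into one campaign.
import hashlib
from difflib import SequenceMatcher

subjects = [
    "Invoice #4821 overdue - action required",
    "Invoice #4821 overdue - action required!!",    # extra symbols appended
    "Invoice  #4821  overdue - action  required",   # spacing pattern altered
]

# Signature-style grouping: every variant produces a different key.
signatures = {hashlib.sha256(s.encode()).hexdigest() for s in subjects}
print(f"{len(signatures)} distinct signatures for {len(subjects)} emails")

# Similarity-style grouping: variants still score as near-duplicates.
base = subjects[0]
for s in subjects[1:]:
    ratio = SequenceMatcher(None, base, s).ratio()
    print(f"similarity to base: {ratio:.2f}")  # high similarity, likely same campaign
```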


Does AI Deserve Worker Rights?

Chalmers et al. declare that there are three things that AI-adopting institutions can do to prepare for the coming consciousness of AI: “They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern.” What would “an appropriate level of moral concern” actually look like? According to Kyle Fish, Anthropic’s AI welfare researcher, it could take the form of allowing an AI model to stop a conversation with a human if the conversation turned abusive. “If a user is persistently requesting harmful content despite the model’s refusals and attempts at redirection, could we allow the model simply to end that interaction?” Fish told the New York Times in an interview. What exactly would model welfare entail? The Times cites a comment made on a podcast last week by Dwarkesh Patel, who compared model welfare to animal welfare, stating it was important to make sure we don’t reach “the digital equivalent of factory farming” with AI. Considering Nvidia CEO Jensen Huang’s desire to create giant “AI factories” filled with millions of his company’s GPUs cranking through GenAI and agentic AI workflows, perhaps the factory analogy is apropos.


Cybercriminals switch up their top initial access vectors of choice

“Organizations must leverage a risk-based approach and prioritize vulnerability scanning and patching for internet-facing systems,” wrote Saeed Abbasi, threat research manager at cloud security firm Qualys, in a blog post. “The data clearly shows that attackers follow the path of least resistance, targeting vulnerable edge devices that provide direct access to internal networks.” Greg Linares, principal threat intelligence analyst at managed detection and response vendor Huntress, said, “We’re seeing a distinct shift in how modern attackers breach enterprise environments, and one of the most consistent trends right now is the exploitation of edge devices.” Edge devices, ranging from firewalls and VPN appliances to load balancers and IoT gateways, sit at the boundary between internal networks and the broader internet. “Because they operate at this critical boundary, they often hold elevated privileges and have broad visibility into internal systems,” Linares noted, adding that edge devices are often poorly maintained and not integrated into standard patching cycles. Linares explained: “Many edge devices come with default credentials, exposed management ports, secret superuser accounts, or weakly configured services that still rely on legacy protocols — these are all conditions that invite intrusion.”
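On the defensive side, the exposed-management-port problem is straightforward to check for in an authorized scan. Below is a minimal sketch that probes a placeholder list of edge-device addresses for common management ports that should not be reachable from untrusted networks; the hosts, port list, and timeout are assumptions for illustration, and in practice this belongs in a scheduled, authorized external scan.

```python
# A minimal, defensive sketch: flag edge devices that expose common
# management ports to a network that should not reach them. Run only
# against hosts you are authorized to test; addresses are placeholders.
import socket

MANAGEMENT_PORTS = {22: "SSH", 23: "Telnet", 443: "HTTPS admin", 8443: "Alt HTTPS admin"}
EDGE_DEVICES = ["192.0.2.10", "192.0.2.11"]  # TEST-NET placeholder addresses

def exposed_ports(host, timeout=1.0):
    found = []
    for port, label in MANAGEMENT_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the port accepted a connection
                found.append((port, label))
    return found

for host in EDGE_DEVICES:
    for port, label in exposed_ports(host):
        print(f"{host}: {label} ({port}) reachable - review firewall rules")
```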


5 tips for transforming company data into new revenue streams

Data monetization can be risky, particularly for organizations that aren’t accustomed to handling financial transactions. There’s an increased threat of security breaches as other parties become aware that you’re in possession of valuable information, ISG’s Rudy says. Another risk is unintentionally using data you don’t have a right to use, or discovering that the data you want to monetize is of poor quality or doesn’t integrate across data sets. Ultimately, the biggest risk is that no one wants to buy what you’re selling. Strong security is essential, Agility Writer’s Yong says. “If you’re not careful, you could end up facing big fines for mishandling data or not getting the right consent from users,” he cautions. If a data breach occurs, it can deeply damage an enterprise’s reputation. “Keeping your data safe and being transparent with users about how you use their info can go a long way in avoiding these costly mistakes.” ... “Data-as-a-service, where companies compile and package valuable datasets, is the base model for monetizing data,” he notes. However, insights-as-a-service, where customers are provided with prescriptive/predictive modeling capabilities, can demand a higher valuation. Another consideration is offering an insights platform-as-a-service, where subscribers can securely integrate their data into the provider’s insights platform.


Are AI Startups Faking It Till They Make It?

"A lot of VC funds are just kind of saying, 'Hey, this can only go up.' And that's usually a recipe for failure - when that starts to happen, you're becoming detached from reality," Nnamdi Okike, co-founder and managing partner at 645 Ventures, told Tradingview. Companies are branding themselves as AI-driven, even when their core technologies lack substantive AI components. A 2019 study by MMC Ventures found 40% of surveyed "AI startups" in Europe showed no evidence of AI integration in their products or services. And this was before OpenAI further raised the stakes with the launch of ChatGPT in 2022. It's a slippery slope. Even industry behemoths have had to clarify the extent of their AI involvement. Last year, tech giant and the fourth-most richest company in the world Amazon pushed back on allegations that its AI-powered "Just Walk Out" technology installed at its physical grocery stores for a cashierless checkout was largely being driven by around 1,000 workers in India who manually checked almost three quarters of the transactions. Amazon termed these reports "erroneous" and "untrue," adding that the staff in India were not reviewing live footage from the stores but simply reviewing the system. The incentive to brand as AI-native has only intensified. 


From deployment to optimisation: Why cloud management needs a smarter approach

As companies grow, so does their cloud footprint. Managing multiple cloud environments—across AWS, Azure, and GCP—often results in fragmented policies, security gaps, and operational inefficiencies. A Multi-Cloud Maturity Research Report by Vanson Bourne states that nearly 70% of organisations struggle with multi-cloud complexity, despite 95% agreeing that multi-cloud architectures are critical for success. Companies are shifting away from monolithic architecture to microservices, but managing distributed services at scale remains challenging. ... Regulatory requirements like SOC 2, HIPAA, and GDPR demand continuous monitoring and updates. The challenge is not just staying compliant but ensuring that security configurations remain airtight. IBM’s Cost of a Data Breach Report reveals that the average cost of a data breach in India reached ₹195 million in 2024, with cloud misconfiguration accounting for 12% of breaches. The risk is twofold: businesses either overprovision resources—wasting money—or leave environments under-secured, exposing them to breaches. Cyber threats are also evolving, with attackers increasingly targeting cloud environments. Phishing and credential theft accounted for 18% of incidents each, according to the IBM report. 


Inside a Cyberattack: How Hackers Steal Data

Once a hacker breaches the perimeter, the standard practice is to establish a beachhead and then move laterally to find the organisation’s crown jewels: its most valuable data. Within a financial or banking organisation, there is likely to be a database on a server that contains sensitive customer information. A database is essentially a complicated spreadsheet, from which a hacker can simply run a SELECT query and copy everything. In this instance data security is essential; however, many organisations confuse data security with cybersecurity. Organisations often rely on encryption to protect sensitive data, but encryption alone isn’t enough if the decryption keys are poorly managed. If an attacker gains access to the decryption key, they can instantly decrypt the data, rendering the encryption useless. ... To truly safeguard data, businesses must combine strong encryption with secure key management, access controls, and techniques like tokenisation or format-preserving encryption to minimise the impact of a breach. A database protected by Privacy Enhancing Technologies (PETs), such as tokenisation, becomes unreadable to hackers if the decryption key is stored offsite. Without breaching the organisation’s data protection vendor to access the key, an attacker cannot decrypt the data, making the process significantly more complicated. This can be a major deterrent to hackers.
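To illustrate the separation of data and keys, here is a minimal sketch using the Python cryptography library's Fernet, with a hypothetical fetch_key_from_kms() standing in for an external key service: sensitive fields are encrypted before they reach the database, and because the key never sits beside the data, a stolen table dump is unreadable on its own.

```python
# A minimal sketch of field-level encryption with the key held off-site.
# fetch_key_from_kms() is a hypothetical stand-in for an external key
# service (KMS/HSM); the point is that the key never sits in the database.
from cryptography.fernet import Fernet

def fetch_key_from_kms() -> bytes:
    # Hypothetical: in reality this would call a KMS/HSM over an
    # authenticated channel. Generated locally here only for the demo.
    return Fernet.generate_key()

fernet = Fernet(fetch_key_from_kms())

# Encrypt sensitive fields before they are written to the database row.
record = {"customer_id": "C-1001", "iban": "GB33BUKB20201555555555"}
stored_row = {
    "customer_id": record["customer_id"],            # non-sensitive, kept searchable
    "iban": fernet.encrypt(record["iban"].encode()),  # only ciphertext reaches the DB
}

# An attacker who dumps stored_row gets ciphertext; only a caller that can
# reach the key service can recover the plaintext.
plaintext_iban = fernet.decrypt(stored_row["iban"]).decode()
assert plaintext_iban == record["iban"]
```

Tokenisation works on the same principle, except the "ciphertext" is a token mapped to the real value inside a separate vault rather than a reversible encryption of it.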


Why Testing is a Long-Term Investment for Software Engineers

At its core, a test is a contract. It tells the system—and anyone reading the code—what should happen when given specific inputs. This contract helps ensure that as the software evolves, its expected behavior remains intact. A system without tests is like a building without smoke detectors. Sure, it might stand fine for now, but the moment something catches fire, there’s no safety mechanism to contain the damage. ... Over time, all code becomes legacy. Business requirements shift, architectures evolve, and what once worked becomes outdated. That’s why refactoring is not a luxury—it’s a necessity. But refactoring without tests? That’s walking blindfolded through a minefield. With a reliable test suite, engineers can reshape and improve their code with confidence. Tests confirm that behavior hasn’t changed—even as the internal structure is optimized. This is why tests are essential not just for correctness, but for sustainable growth. ... There’s a common myth: tests slow you down. But seasoned engineers know the opposite is true. Tests speed up development by reducing time spent debugging, catching regressions early, and removing the need for manual verification after every change. They also allow teams to work independently, since tests define and validate interfaces between components.
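As a small, hypothetical illustration of the test-as-contract idea, the snippet below pins the expected behaviour of an invented fee-calculation function; a later refactor of its internals either preserves that contract or fails the suite loudly. The function name and business rules are made up for the example.

```python
# A minimal sketch of a test as a contract (pytest style). The function and
# its rules are hypothetical; the point is that the expected behaviour is
# pinned down, so refactoring the internals cannot silently change it.
def transfer_fee(amount: float) -> float:
    """Flat 1% fee, capped at 25.0 (hypothetical business rule)."""
    return min(amount * 0.01, 25.0)

def test_fee_is_one_percent_for_small_amounts():
    assert transfer_fee(1000.0) == 10.0

def test_fee_is_capped_for_large_amounts():
    assert transfer_fee(1_000_000.0) == 25.0

def test_zero_amount_has_zero_fee():
    assert transfer_fee(0.0) == 0.0
```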


Why the road from passwords to passkeys is long, bumpy, and worth it - probably

While the current plan rests on a solid technical foundation, many important details are barriers to short-term adoption. For example, setting up a passkey for a particular website should be a rather seamless process; however, fully deactivating that passkey still relies on a manual multistep process that has yet to be automated. Further complicating matters, some current user-facing implementations of passkeys are so different from one another that they're likely to confuse end-users looking for a common, recognizable, and easily repeated user experience. ... Passkey proponents talk about how passkeys will be the death of the password. However, the truth is that the password died long ago -- just in a different way. We've all used passwords without considering what is happening behind the scenes. A password is a special kind of secret -- a shared or symmetric secret. For most online services and applications, setting a password requires us to first share that password with the relying party, the website or app operator. While history has proven how shared secrets can work well in very secure and often temporary contexts, if the HaveIBeenPwned.com website teaches us anything, it's that site and app authentication isn't one of those contexts. Passwords are too easily compromised.
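The sketch below compresses that distinction to its core, using an Ed25519 key pair as a stand-in for the real FIDO2/WebAuthn ceremony: in a passkey-style flow the relying party stores only a public key and verifies a signature over a fresh challenge, so there is no reusable shared secret to phish or leak.

```python
# A simplified sketch of shared-secret vs. public-key authentication.
# Real passkeys follow the FIDO2/WebAuthn ceremony; this keeps only the
# core idea: the server never holds a reusable secret.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Password model: the relying party must store something derived from the
# shared secret, and a breach of that store is immediately dangerous.

# Passkey model: the authenticator keeps the private key; the server keeps
# only the public key and issues a fresh challenge per login.
device_private_key = Ed25519PrivateKey.generate()     # stays on the device
server_public_key = device_private_key.public_key()   # registered with the site

challenge = os.urandom(32)                             # issued by the server
signature = device_private_key.sign(challenge)         # produced on the device

# Server-side check: raises InvalidSignature if the response is forged.
server_public_key.verify(signature, challenge)
print("challenge signed with the registered key - login accepted")
```

Nothing stored by the relying party in this model can be replayed against another site, which is what makes a credential-store breach so much less damaging than a leaked password database.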