
Daily Tech Digest - August 06, 2025


Quote for the day:

"What you do has far greater impact than what you say." -- Stephen Covey


“Man in the Prompt”: New Class of Prompt Injection Attacks Pairs With Malicious Browser Extensions to Issue Secret Commands to LLMs

The so-called “Man in the Prompt” attack presents two priority risks. One is to internal LLMs that store sensitive company data and personal information in the belief that they are appropriately fenced off from other software and apps. The other risk comes from particular LLMs that are broadly integrated into workspaces, such as Google Gemini’s interaction with Google Workspace tools like Mail and Docs. This category of prompt injection attacks applies not just to any type of browser extension, but to any model or deployment of LLM. And the malicious extension requires no special permissions to work, given that DOM access already provides everything it needs. ... The other proof-of-concept targets Google Gemini, and by extension any elements of Google Workspace it has been integrated with. Gemini is meant to automate routine and tedious tasks in Workspace such as email responses, document editing, and updating contacts. The trouble is that it has almost complete access to the contents of these accounts, as well as anything the user has access permission for or has had shared with them by someone else. Prompt injection attacks conducted by these extensions can not only steal the contents of emails and documents with ease; complex queries can also be fed to the LLM to target particular types of data and file extensions, and the autocomplete function can be abused to enumerate available files.


EU seeks more age verification transparency amid contentious debate

The EU is considering setting minimum requirements for online platforms to disclose their use of age verification or age estimation tools in their terms and conditions. The obligation is contained in a new compromise draft text of the EU’s proposed law on detecting and removing online child sex abuse material (CSAM), dated July 24 and seen by MLex. A discussion of the proposal, which contains few other changes to a previous draft, is scheduled for September 12. The text also calls for online platforms to perform mandatory scans for CSAM, which critics say could result in false positives and break end-to-end encryption. ... The way age verification is set to work under the OSA is described as a “privacy nightmare” by PC Gamer, but the article stands in stark contrast to the vague posturing of the political class. Author Jacob Ridley acknowledges the possibility of double-blind methods of age assurance, among them some that do not require any personal information at all to be shared with the website or app the individual is trying to access. At the same time, many age verification systems do not work this way. Also, age assurance pop-ups can be spoofed, and those spoofs could harvest a wealth of valuable personal information. Privado ID Co-founder Evan McMullen calls it “like using a sledgehammer to crack a walnut.” McMullen, of course, prefers a decentralized approach that leans on zero-knowledge proofs (ZKPs).


AI Is Changing the Cybersecurity Game in Ways Both Big and Small

“People are rushing now to get [MCP] functionality while overlooking the security aspect,” he said. “But once the functionality is established and the whole concept of MCP becomes the norm, I would assume that security researchers will go in and essentially update and fix those security issues over time. But it will take a couple of years, and while that is taking time, I would advise you to run MCP somehow securely so that you know what’s going on.” Beyond the tactical security issues around MCP, there are bigger issues that are more strategic, more systemic in nature. They involve the big changes that large language models (LLMs) are bringing to the cybersecurity business and the things that organizations will have to do to protect themselves from AI-powered attacks in the future. ... The sheer volume of threat data, some of which may be AI-generated, demands more AI to be able to parse it and understand it, Sharma said. “It’s not humanly possible to do it by a SOC engineer or a vulnerability engineer or a threat engineer,” he said. Tuskira essentially functions as an AI-powered security analyst to detect traditional threats on IT systems as well as threats posed to AI-powered systems. Instead of using commercial AI models, Sharma adopted open-source foundation models running in private data centers. Developing AI tools to counter AI-powered security threats demands custom models, a lot of fine-tuning, and a data fabric that can maintain context of particular threats, he said.


AI burnout: A new challenge for CIOs

To take advantage of the benefits of smart tools and avoid overburdening the workforce, the board of directors must carefully manage their deployment. “As leaders, we must set clear limits, encourage training without overwhelming others, and open spaces for conversation about how people are experiencing this transition,” Blázquez says. “Technology must be an ally, not a threat, and the role of leadership will be key in that balance.” “It is recommended that companies take the first step. They must act from a preventative, humane, and structural perspective,” says De la Hoz. “In addition to all the human, ethical, and responsible components, it is in the company’s economic interest to maintain a happy, safe, and mission-focused workforce.” Regarding increasing personal productivity, he emphasizes the importance of “valuing their efforts, whether through higher salary returns or other forms of compensation.” ... From here, action must be taken, “implementing contingency plans to alleviate these areas.” One way: working groups, where the problems and barriers associated with technology can be analyzed. “From here, use these KPIs to change my strategy. Or to set it up, because often what happens is that I deploy the technology and forget how to get that technology adopted.” 


CIOs need a military mindset

While the battlefield feels very far away from the boardroom, this principle is something that CIOs can take on board when they’re tasked with steering a complex digital programme. Step back and clear the path so that you can trust your people to deliver; that’s when the real progress gets made. Contrary to popular belief, the military is not rigidly hierarchical. In fact, it teaches individuals to operate with autonomy within defined parameters. Officers set the boundaries of a mission and step back, allowing you to take full ownership of your actions. This approach is supported by the OODA Loop, a framework that cultivates awareness and decisive action under pressure. ... Resilience is perhaps the hardest leadership trait to teach and the most vital to embody. Military officers are taught to plan exhaustively, train rigorously, and prepare for all scenarios, but they’re also taught that ‘the first casualty of war is the plan.’ Adaptability under pressure is a non-negotiable mindset for you to adopt and instil in your team. When your team feels supported to grow, they stop fearing change and start responding to it; it is here that adaptability and resilience become second nature. There is also a practical opportunity to bring these principles in-house, as veterans transitioning out of the army may bring with them a refreshed leadership approach. Because they’re often confident under pressure and focused on outcomes, their transferable skills allow them to thrive in the corporate world.


Backend FinOps: Engineering Cost-Efficient Microservices in the Cloud

Integrating cost management directly into Infrastructure-as-Code (IaC) frameworks such as Terraform enforces fiscal responsibility at the resource provisioning phase. By explicitly defining resource constraints and mandatory tagging, teams can preemptively mitigate orphaned cloud expenditures. ... Integrating cost awareness directly within Continuous Integration and Delivery (CI/CD) pipelines ensures proactive management of cloud expenditures throughout the development lifecycle. Tools such as Infracost automate the calculation of incremental cloud costs introduced by individual code changes. ... Cost-based pre-merge testing frameworks reinforce fiscal prudence by simulating peak-load scenarios prior to code integration. Automated tests measured critical metrics, including ninety-fifth percentile response times and estimated cost per ten thousand requests, to ensure compliance with established financial performance benchmarks. Pull requests failing predefined cost-efficiency criteria were systematically blocked. ... Comprehensive cost observability tools such as Datadog Cost Dashboards combine billing metrics with Application Performance Monitoring (APM) data, directly supporting operational and cost-related SLO compliance.
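For illustration, such a pre-merge gate can be a short script wired into the pipeline. The following is a minimal sketch, assuming a hypothetical JSON results file emitted by the peak-load tests; the field names and budget thresholds are illustrative placeholders, not the article's tooling.

```python
# Hypothetical pre-merge cost-efficiency gate; the results-file format and
# thresholds are illustrative assumptions.
import json
import sys

MAX_P95_MS = 250          # ninety-fifth percentile response-time budget
MAX_COST_PER_10K = 0.40   # estimated dollars per ten thousand requests

def gate(results_path: str) -> int:
    with open(results_path) as f:
        results = json.load(f)  # e.g. {"p95_ms": 231, "cost_per_10k_requests": 0.38}
    failures = []
    if results["p95_ms"] > MAX_P95_MS:
        failures.append(f"p95 {results['p95_ms']} ms exceeds budget of {MAX_P95_MS} ms")
    if results["cost_per_10k_requests"] > MAX_COST_PER_10K:
        failures.append(
            f"cost ${results['cost_per_10k_requests']}/10k requests exceeds ${MAX_COST_PER_10K}")
    for msg in failures:
        print(f"BLOCK: {msg}")
    return 1 if failures else 0  # a nonzero exit status blocks the pull request in CI

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```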


5 hard truths of a career in cybersecurity — and how to navigate them

Leadership and HR teams often gatekeep by focusing exclusively on candidates with certain educational degrees or specific credentials, typically from vendors such as Cisco, Juniper, or Palo Alto. Although Morrato finds this somewhat understandable given the high cost of hiring in cybersecurity, he believes this approach unfairly filters out capable individuals who, in a different era, would have had more opportunities. ... Because most team managers are promoted from technical roles, they often lack the leadership and interpersonal skills needed to foster healthy team cultures or manage stakeholder relationships effectively. This cultural disconnect has a tangible impact on individuals. “People who work in security functions don’t always feel safe — psychologically safe — doing so,” Budge explains. ... Cybersecurity teams must also rethink how they approach risk, as relying solely on strict, one-size-fits-all controls is no longer tenable, Mistry says. Instead, he advocates for a more adaptive, business-aligned framework that considers overall exposure rather than just technical vulnerabilities. “Can I live with this risk? Can I not live with this risk? Can I do something to reduce the risk? Can I offload the risk? And it’s a risk conversation, not a ‘speeds and feeds’ conversation,” he says, emphasizing that cybersecurity leaders must actively build relationships across the organization to make these conversations possible.


How AI amplifies these other tech trends that matter most to business in 2025

Agentic AI is an artificial intelligence system capable of independently planning and executing complex, multistep tasks. Built on foundation models, these agents can autonomously perform actions, communicate with one another, and adapt to new information. Significant advancements have emerged, from general agent platforms to specialized agents designed for deep research. ... Application-specific semiconductors are purpose-built chips optimized to perform specialized tasks. Unlike general-purpose semiconductors, they are engineered to handle specific workloads (such as large-scale AI training and inference tasks) while optimizing performance characteristics, including superior speed and energy efficiency. ... Cloud and edge computing involve distributing workloads across locations, from hyperscale remote data centers to regional hubs and local nodes. This approach optimizes performance by addressing factors such as latency, data transfer costs, data sovereignty, and data security. ... Quantum-based technologies use the unique properties of quantum mechanics to execute certain complex calculations exponentially faster than classical computers, to secure communication networks, and to produce sensors with higher sensitivity levels than their classical counterparts.


Differentiable Economics: Strategic Behavior, Mechanisms, and Machine Learning

Differentiable economics is related to but different from the recent progress in building agents that achieve super-human performance in combinatorial games such as chess and Go. First, economic games such as auctions, oligopoly competition, or contests typically have a continuous action space expressed in money, and opponents are modeled as draws from a prior distribution that has continuous support. Second, differentiable economics is focused on modeling and achieving equilibrium behavior. The second opportunity in differentiable economics is to use data-driven methods and machine learning to discover rules, constraints, and affordances—mechanisms—for economic environments that promote good outcomes in the equilibrium behavior of a system. Mechanism design solves the inverse problem of game theory, finding rules of strategic interaction such that agents in equilibrium will effect an outcome with desired properties. Where possible, mechanisms promote strong equilibrium solution concepts such as dominant strategy equilibria, making it strategically easy for agents to participate. Think of a series of bilateral negotiations between buyers and a seller that is replaced by an efficient auction mechanism with simple dominant strategies for agents to report their preferences truthfully.


Ownership Mindset Drives Innovation: Milwaukee Tool CEO

“Empowerment was not a free-for-all,” Richman explained. In fact, the company recently changed the wording around its core values from “empowerment” to “extreme ownership” to reflect the importance of accountability for results. Emphasizing ownership can also help employees do what is best for the company as a whole rather than just their own teams, particularly when it comes to reallocating resources. ... Surprises and setbacks are an unavoidable cost of trying new things while innovating. Since organizations cannot avoid these issues, leaders and employees need to discuss them frankly and quickly enough to minimize the downside while seizing the upside. “[Being] candid is the most challenging cultural element of any company,” Richman said. “And we believe that it really leads to success or failure.” … In successful cultures, teams, people, parts of the organization can bring problems up and bring them up in a way to be able to say, ‘How are we going to rally the troops as one team, come together, fix it, and figure out why we got into this mess, and what are we going to do to not do it again?’” Candor is a two-way street. To build trust, leaders need to provide an honest assessment of the state of the company and the path forward — a “candid communication of where you are,” Richman said. 

Daily Tech Digest - July 07, 2025


Quote for the day:

"To live a creative life, we must lose our fear of being wrong." -- Anonymous


Forget the hype — real AI agents solve bounded problems, not open-world fantasies

When people imagine AI agents today, they tend to picture a chat window. A user types a prompt, and the agent responds with a helpful answer (maybe even triggers a tool or two). That’s fine for demos and consumer apps, but it’s not how enterprise AI will actually work in practice. In the enterprise, most useful agents aren’t user-initiated, they’re autonomous. They don’t sit idly waiting for a human to prompt them. They’re long-running processes that react to data as it flows through the business. They make decisions, call services and produce outputs, continuously and asynchronously, without needing to be told when to start. ... The problems worth solving in most businesses are closed-world: Problems with known inputs, clear rules and measurable outcomes. But the models we’re using, especially LLMs, are inherently non-deterministic. They’re probabilistic by design. The same input can yield different outputs depending on context, sampling or temperature. That’s fine when you’re answering a prompt. But when you’re running a business process? That unpredictability is a liability. ... Closed-world problems don’t require magic. They need solid engineering. And that means combining the flexibility of LLMs with the structure of good software engineering. 


Has CISO become the least desirable role in business?

Being a CISO today is not for the faint of heart. To paraphrase Rodney Dangerfield, CISOs (some, anyway) get no respect. You’d think in a job where perpetual stress over the threat of a cyberattack is the norm, there would be empathy for security leaders. Instead, they face the growing challenge of trying to elicit support across departments and managing security threats, according to a recent report from WatchGuard. ... It’s no secret CISOs are under tremendous pressure. “They’ve got the regulatory scrutiny, they’ve got public visibility,” along with the increasing complexity of threats, and “AI is just adding to that fire, and the mismatch between the accountability and the authority,” says Myers, who wrote “The CISO Dilemma,” which explores CISO turnover rates and how companies can change that moving forward. Often, CISOs don’t have the mandate to influence the business systems or processes that are creating that risk, she says. “I think that’s a real disconnect and that’s what’s really driving the burnout and turnover.” ... Some CISOs are stepping back from operational roles into more advisory ones. Patricia Titus, who recently took a position as a field CISO at startup Abnormal AI after 25 years as a CISO, does not think the CISO role has become less desirable. “The regulatory scrutiny has been there all along,” she says. “It’s gotten a light shined on it.”


Enforcement Gaps in India’s DPDP Act and the case for decentralized data protection boards

The DPDP Act’s centralized enforcement model suffers from structural weaknesses that hinder effective data protection. A primary concern is the lack of independence of the Data Protection Board. Because the DPB is both appointed and funded by the Union government, with its officials classified as civil servants under central rules, it does not enjoy the institutional autonomy typically expected of a watchdog agency. ... By design, the executive branch holds decisive power over who sits on the Board and can even influence its operations through service rules. This raises a conflict of interest, given that the government itself is a major collector and processor of citizens’ data. In the words of Justice B.N. Srikrishna, having a regulator under government control is problematic “since the State will be the biggest data processor” – a regulator must be “free from the clutches of the Government” to fairly oversee both private and government actors. ... Another structural limitation is the potential for executive interference in enforcement actions, which dilutes accountability. The DPDP Act contains provisions such as Section 27(3), under which the Central Government can issue directions and the DPB “may modify or suspend” its own orders based on a government reference.


The Good AI: Cultivating Excellence Through Data

In today’s enterprise landscape, the quality of AI systems depends fundamentally on the data that flows through them. While most organizational focus remains on AI models and algorithms, it’s the often-under-appreciated current of data flowing through these systems that truly determines whether an AI application becomes “good AI” or problematic technology. Just as ancient Egyptians developed specialized irrigation techniques to cultivate flourishing agriculture, modern organizations must develop specialized data practices to cultivate AI that is effective, ethical, and beneficial. My new column, “The Good AI,” will examine how proper data practices form the foundation for responsible and high-performing AI systems. We’ll explore how organizations can channel their data resources to create AI applications that are not just powerful, but trustworthy, inclusive, and aligned with human values. ... As organizations increasingly integrate artificial intelligence into their operations, the need for robust AI governance has never been more critical. However, establishing effective AI governance doesn’t happen in a vacuum—it must be built upon the foundation of solid data governance practices. The path to responsible AI governance varies significantly depending on your organization’s current data governance maturity level.


AI Infrastructure Inflection Point: 60% Cloud Costs Signal Time to Go Private

Perhaps the most immediate challenge facing IT teams identified in the research is the dramatic cost scaling of public cloud AI workloads. Unlike traditional applications where cloud costs scale somewhat linearly, AI workloads create exponential cost curves due to their intensive compute and storage requirements. The research identifies a specific economic threshold where cloud costs become unsustainable. When monthly cloud spending for a given AI workload reaches 60-70% of what it would cost to purchase and operate dedicated GPU-powered infrastructure, organizations hit their inflection point. At this threshold, the total cost of ownership calculation shifts decisively toward private infrastructure. IT teams can track this inflection point by monitoring data and model-hosting requirements relative to GPU transaction throughput. ... Identifying when to move from a public cloud to private cloud or some form of on-premises deployment is critical. Thomas noted that there are many flavors of hybrid FinOps tooling available in the marketplace that, when configured appropriately for an environment, will spot trend anomalies. Anomalies may be triggered by swings in GPU utilization, cost per token/inference, idle percentages, and data-egress fees. On-premises factors include material variations in hardware, power, cooling, operations, and more over a set period of time.
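To make the threshold concrete, here is a minimal sketch of the inflection-point check; all input figures are illustrative placeholders a team would replace with its own billing data and hardware quotes.

```python
# Illustrative 60-70% inflection-point check for AI workloads.
def cloud_to_ownership_ratio(monthly_cloud_spend: float,
                             dedicated_capex: float,
                             amortization_months: int,
                             monthly_opex: float) -> float:
    """Ratio of monthly cloud spend to the monthly cost of owning equivalent
    GPU infrastructure (amortized capex plus power, cooling, and ops opex)."""
    monthly_ownership_cost = dedicated_capex / amortization_months + monthly_opex
    return monthly_cloud_spend / monthly_ownership_cost

# Placeholder inputs: $180K/month cloud bill vs. a $7.2M cluster amortized
# over 36 months with $60K/month operating costs.
ratio = cloud_to_ownership_ratio(180_000, 7_200_000, 36, 60_000)
if ratio >= 0.60:  # the 60-70% threshold identified by the research
    print(f"Cloud spend is {ratio:.0%} of ownership cost: past the inflection point")
else:
    print(f"Cloud spend is {ratio:.0%} of ownership cost: below the threshold")
```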


AI built it, but can you trust it?

AI is neither inherently bad nor inherently good from a security perspective. It’s another tool that can accelerate and magnify both good and bad behaviors. On the good side, if models can learn to assess the vulnerability state and general trustworthiness of app components, and factor that learning into code they suggest, AI can have a positive impact on the security of the resultant output. Open source projects can already leverage AI to help find potential vulnerabilities and even submit PRs to address them, but there still needs to be significant human oversight to ensure that the results actually improve the project’s security. ... If you simply trust an AI to generate all the artifacts needed to build, deploy, and run anything sophisticated, it will be very difficult to know if it’s done so well and what risks it’s mitigated. In many ways, this looks a lot like the classic “curl and pipe to bash” kinds of risks that have long existed where users put blind trust in what they’re getting from external sources. Many times that can work out fine, but sometimes it doesn’t. ... AI can create impressive results quickly but it doesn’t necessarily prioritize security and may in fact make many choices that degrade it. Have good architectures and controls and human experts that really understand the recommendations it’s making and can adapt and re-prompt as necessary to provide the right balance.


How to shift left on finops, and why you need to

Building cost awareness in devops requires asking an upfront question when spinning up new cloud environments. Developers and data scientists should ask if the forecasted cloud and other costs align with the targeted business value. When cloud costs do increase because of growing utilization, it’s important to relate the cost escalation to whether there’s been a corresponding increase in business value. The FinOps Foundation recommends that SaaS and cloud-driven commercial organizations measure cloud unit economics. The basic measure calculates the difference between marginal cost and marginal revenue and determines where cloud operations break even and begin to generate a profit. Other companies can use these concepts to correlate business value and cost and make smarter cloud architecture and automation decisions. ... “Engineers especially can get tunnel vision on delivering features and the art of code, and cost modeling should happen as a part of design, at the start of a project, not at the end,” says Mason of RecordPoint. “Companies generally limit the staff with access to and knowledge of cloud cost data, which is a mistake. Companies should strive to spread awareness of costs, educating users of services with the highest cost impacts, so that more people recognize opportunities to optimize or eliminate spend.”
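Here is a minimal sketch of that break-even calculation, with made-up numbers standing in for a company's own marginal revenue and marginal cloud cost per unit:

```python
# Illustrative cloud unit economics: margin per unit and break-even volume.
def unit_margin(marginal_revenue: float, marginal_cloud_cost: float) -> float:
    """Profit contributed by each additional unit (customer, transaction, etc.)."""
    return marginal_revenue - marginal_cloud_cost

def break_even_units(fixed_monthly_cloud_cost: float, margin: float) -> float:
    """Monthly volume at which cloud operations begin to generate a profit."""
    if margin <= 0:
        raise ValueError("no break-even point: each unit loses money")
    return fixed_monthly_cloud_cost / margin

margin = unit_margin(marginal_revenue=1.20, marginal_cloud_cost=0.35)
print(f"Margin per unit: ${margin:.2f}")
print(f"Break-even volume: {break_even_units(25_000, margin):,.0f} units/month")
```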


How Cred Built Its Observability-Led Tech Stack

Third-party integrations are critical to any fintech ecosystem, and at Cred, we manage them through a rigorous, life cycle-based third-party risk management framework. This approach is designed to minimize risk and maximize reliability, with security and resilience built in from the start. Before onboarding any external partner, whether for KYC, APIs or payment rails, we conduct thorough due diligence to evaluate their security posture. Each partner is categorized as high, medium or low risk, which then informs the depth and frequency of ongoing assessments. These reviews go well beyond standard compliance checks. ... With user goals validated, our teams then move into secure architecture design. Every integration point, data exchange, and system interaction is examined to preempt vulnerabilities and ensure that sensitive information is protected by default. We use ThreatShield, an internal AI-powered threat-modeling tool, to analyze documentation and architecture against the STRIDE framework, a threat model designed by Microsoft that is used in cybersecurity to identify potential security threats to applications and systems. This architecture-first thinking enables us to deliver powerful features, such as surfacing hidden charges in smart statements or giving credit insights without ever compromising the user's data or experience.


How To Tackle Tech Debt Without Slowing Innovation

Implement a “boy scout rule” under which developers are encouraged to make small improvements to existing code during feature work. This maintains development momentum while gradually improving code quality, and developers are more motivated to clean up code they’re already actively working with. ... Proactively analyze user engagement metrics to pinpoint friction points where users spend excessive time. Prioritize these areas for targeted debt reduction, aligning technical improvements closely with meaningful user experience enhancements. ... Pre-vacation handovers are an excellent opportunity to reduce tech debt. Planning and carrying out handovers before we take a holiday are crucial to maintaining smooth IT operations. Giving your employees the choice to hand tasks over to automation or a human colleague can help reduce tech debt and automate tasks. Critically, it utilizes time already allocated for addressing this work. ... Resolving technical debt is development. The Shangri-la of “no tech debt” does not survive contact with reality. It’s a balance of doing what’s right for the business. Making sure the product and engineering teams are on the same page is critical. You should have sprints where tech debt is the focus.


Why cybersecurity should be seen as a business enabler, not a blocker

Among the top challenges facing the IT sector today, says Jackson, is the rapid development of the tech world. “The pace of change is outpacing many organisations’ ability to adapt securely – whether due to AI, rapid cloud adoption, evolving regulatory frameworks like DORA, or the ongoing shortage of skilled cybersecurity professionals,” he says. “These challenges, combined with cost pressures and the perception that security is not always an enabler, make adaptation even harder.” AI in particular, to no surprise, is having a significant effect on the cybersecurity world – reshaping both sides of the “cybersecurity battlefield”, according to Jackson. “We’re seeing attackers utilise large language models (LLMs) like ChatGPT to scale social engineering and refine malicious code, while defenders are using the same tools (or leveraging them in some way) to enhance threat detection, streamline triage and gain broader context at much greater speed,” he says. While he doesn’t believe AI will have as great an impact as some suggest, he says it still represents an “exciting evolution”, particularly in how it can benefit organisations. “AI won’t replace individuals such as SOC analysts anytime soon, but it can augment and support their roles, freeing up time to focus on higher-priority tasks,” he says.

Daily Tech Digest - June 05, 2025


Quote for the day:

"The greatest accomplishment is not in never falling, but in rising again after you fall." -- Vince Lombardi


Your Recovery Timeline Is a Lie: Why They Fall Apart

Teams assume they can pull snapshots from S3 or recover databases from a backup tool. What they don’t account for is the reconfiguration time required to stitch everything back together. ... RTOs need to be redefined through the lens of operational reality and validated through regular, full-system DR rehearsals. This is where IaC and automation come in. By codifying all layers of your infrastructure — not just compute and storage, but IAM, networking, observability and external dependencies, too — you gain the ability to version, test and rehearse your recovery plans. Tools like Terraform, Helm, OpenTofu and Crossplane allow you to build immutable blueprints of your infrastructure, which can be automatically redeployed in disaster scenarios. But codification alone isn’t enough. Continuous testing is critical. Just as CI/CD pipelines validate application changes, DR validation pipelines should simulate failover scenarios, verify dependency restoration and track real mean time to recovery (MTTR) metrics over time. ... It’s also time to stop relying on aspirational RTOs and instead measure actual MTTR. It’s what matters when things go wrong, indicating how long it really takes to go from incident to resolution. Unlike RTOs, which are often set arbitrarily, MTTR is a tangible, trackable indicator of resilience.
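As a small illustration of measuring actual MTTR rather than trusting a stated RTO, the sketch below assumes a hypothetical log of DR rehearsal timestamps; the drill data and the 120-minute RTO are placeholders.

```python
# Illustrative MTTR tracking from disaster-recovery rehearsal records.
from datetime import datetime
from statistics import mean

# Hypothetical (incident_start, service_restored) pairs from quarterly drills.
drills = [
    (datetime(2025, 3, 2, 9, 0), datetime(2025, 3, 2, 13, 40)),
    (datetime(2025, 4, 6, 9, 0), datetime(2025, 4, 6, 12, 15)),
    (datetime(2025, 5, 4, 9, 0), datetime(2025, 5, 4, 11, 30)),
]

recovery_minutes = [(end - start).total_seconds() / 60 for start, end in drills]
mttr = mean(recovery_minutes)

STATED_RTO_MINUTES = 120  # the aspirational objective on paper
print(f"Measured MTTR: {mttr:.0f} min (stated RTO: {STATED_RTO_MINUTES} min)")
if mttr > STATED_RTO_MINUTES:
    print("Rehearsed recovery exceeds the stated RTO: the timeline is aspirational")
```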


The Dawn of Unified DataOps—From Fragmentation to Transformation

Data management has traditionally been the responsibility of IT, creating a disconnect between this function and the business departments that own and understand the data’s value. This separation has resulted in limited access to unified data across the organization, including the tools and processes to leverage it outside of IT. ... Organizations looking to embrace DataOps and transform their approach to data must start by creating agile DataOps teams that leverage software-oriented methodologies; investing in data management solutions that leverage DataOps and data mesh concepts; investing in scalable automation and integration; and cultivating a data-driven culture. Much like agile software teams, it’s critical to include product management, domain experts, test engineers, and data engineers. Approach delivery iteratively, incrementally delivering MVPs, testing, and improving capabilities and quality. ... Technology alone won’t solve data challenges. Truly transformative DataOps strategies align with unified teams that pair business users and subject matter experts with DataOps professionals, forming a culture where collaboration, accessibility, and transparency are at the core of decision making.


Redefining Cyber Value: Why Business Impact Should Lead the Security Conversation

A BVA brings clarity to that timeline. It identifies the exposures most likely to prolong an incident and estimates the cost of that delay based on both your industry and organizational profile. It also helps evaluate the return of preemptive controls. For example, IBM found that companies that deploy effective automation and AI-based remediation see breach costs drop by as much as $2.2 million. Some organizations hesitate to act when the value isn't clearly defined. That delay has a cost. A BVA should include a "cost of doing nothing" model that estimates the monthly loss a company takes on by leaving exposures unaddressed. We've found that for a large enterprise, that cost can exceed half a million dollars. ... There's no question about how well security teams are doing the work. The issue is that traditional metrics don't always show what their work means. Patch counts and tool coverage aren't what boards care about. They want to know what's actually being protected. A BVA helps connect the dots – showing how day-to-day security efforts help the business avoid losses, save time, and stay more resilient. It also makes hard conversations easier. Whether it's justifying a budget, walking the board through risk, or answering questions from insurers, a BVA gives security leaders something solid to point to. 
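A "cost of doing nothing" model can be as simple as an expected monthly loss per unaddressed exposure. The sketch below uses invented likelihood and impact figures purely to show the shape of the calculation; a real BVA would derive these from industry and organizational breach-cost data.

```python
# Illustrative "cost of doing nothing" model: expected monthly loss from
# unaddressed exposures (monthly likelihood of exploitation x impact).
def monthly_cost_of_inaction(exposures: list[dict]) -> float:
    return sum(e["monthly_likelihood"] * e["impact_usd"] for e in exposures)

# Hypothetical exposures with placeholder figures.
exposures = [
    {"name": "unpatched edge appliance", "monthly_likelihood": 0.04, "impact_usd": 6_000_000},
    {"name": "stale admin credentials",  "monthly_likelihood": 0.06, "impact_usd": 4_500_000},
]
print(f"Cost of doing nothing: ${monthly_cost_of_inaction(exposures):,.0f}/month")
```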


Fake REAL Ids Have Already Arrived, Here’s How to Protect Your Business

When the REAL ID Act of 2005 was introduced, it promised to strengthen national security by setting higher standards for state-issued IDs, especially when it came to air travel, access to federal buildings, and more. Since then, the roll-out of the REAL ID program has faced delays, but with an impending enforcement deadline, many are questioning whether REAL IDs deliver the level of security intended. ... While the original aim was to prevent another 9/11-style attack, over 20 years later, the focus has shifted to protecting against identity theft and illegal immigration. The final deadline to get your REAL ID is now May 7th, 2025, owing in part to differing opinions and adoption rates state by state, which have dragged enforcement on for two decades. ... The delays and staggered adoption have given bad actors the chance to create templates for fraudulent REAL IDs. Businesses may incorrectly assume that an ID bearing a REAL ID star symbol is more likely to be legitimate, but as our data proves, this is not the case. REAL IDs can be faked just as easily as any other identity document, putting the onus on businesses to implement robust ID verification methods to ensure they don’t fall victim to ID fraud. ... AI-powered identity verification is one of the only ways to combat the increasing use of AI-powered criminal tools.


How this 'FinOps for AI' certification can help you tackle surging AI costs

To really adopt AI into your enterprise, we're talking about costs that are orders of magnitude greater. Companies are turning to FinOps for help dealing with this. FinOps, a portmanteau of Finance and DevOps, combines financial management and collaborative, agile IT operations into a discipline to manage costs. It started as a way to get a handle on cloud pricing. FinOps' first job is to optimize cloud spending and align cloud costs with business objectives. ... Today, they're adding AI spending to their concerns. According to the FinOps Foundation, 63% of FinOps practitioners are already being asked to manage AI costs, a number expected to rise as AI innovation continues to surge. Mismanagement of these costs can not only erode business value but also stifle innovation. "FinOps teams are being asked to manage accelerating AI spend to allocate its cost, forecast its growth, and ultimately show its value back to the business," said Storment. "But the speed and complexity of the data make this a moving target, and cost overruns in AI can slow innovation when not well managed." Besides, Storment added, C-level executives are asking that painful question: "You're using this AI service and spending too much. Do you know what it's for?" 


Tackling Business Loneliness

Leaders who intentionally reach out to their employees do more than combat loneliness; they directly influence performance and business success. "To lead effectively, you need to lead with care. Because care creates connection. Connection fuels commitment. And commitment drives results. It's in those moments of real connection that collective brilliance is unlocked," she concludes. ... But it's not just women, with many men facing isolation in the workplace too, especially where a culture of 'put up and shut up' is frequently seen. Reflected in the high prevalence of suicide in the UK construction industry, it is essential that toxic cultures are dismantled and all employees feel valued and part of the team. "Whether they work on site or remotely, full time or part time, building an inclusive culture helps to ensure people do not experience prolonged loneliness or lack of connection. When we prioritise inclusion, everyone benefits," Allen concludes. ... Providing a safe, non-judgemental space for employees to discuss loneliness, things that are troubling them, and ways to manage any negative feelings is crucial. "This could be with a trusted line manager or colleague, but objective support from professional therapists and counsellors should also be accessible to prevent loneliness from manifesting into more serious issues," she emphasises. 


Revolutionizing Software Development: Agile, Shift-Left, and Cybersecurity Integration

While shift-left may cost more resources in the short term, in most cases, the long-term savings more than make up for the initial investment. Bugs discovered after a product release can cost up to 640 times more than those caught during development. In addition, late detection can increase the risk of fines from security breaches, as well as causing damage to a brand’s trust. Automation tools are the primary answer to these concerns and are at the core of what makes shift-left possible. The popular tech industry mantra, “automate everything,” continues to apply. Static analysis, dynamic analysis, and software composition analysis tools scan for known vulnerabilities and common bugs, producing instant feedback as code is first merged into development branches. ... Shift-left balances speed with quality. Performing regular checks on code as it is written reduces the likelihood that significant defects and vulnerabilities will surface after a release. Once software is out in the wild, the cost to fix issues is much higher and requires extensively more work than catching them in the early phases. Despite the advantages of shift-left, navigating the required cultural change can be a challenge. As such, it’s crucial for developers to be set up for success with effective tools and proper guidance.


Feeling Reassured by Your Cybersecurity Measures?

Organizations must pursue a data-driven approach that embraces comprehensive NHI management. This approach, combined with robust Secrets Security Management, can ensure that none of your non-human identities become security weak points. Remember, feeling reassured about your cybersecurity measures is not just about having security systems in place, but also about knowing how to manage them effectively. Effective NHI management will be a cornerstone in instilling peace of mind and enhancing security confidence. With these insights into the strategic importance of NHI management in promoting cybersecurity confidence, organizations can take a step closer to feeling reassured by their cybersecurity measures. ... Imagine a simple key, one that turns tumblers in the lock mechanism but isn’t alone in doing so. There are other keys that fit the same lock, and they all have the power to unlock the same door. This is similar to an NHI and its associated secret. There are numerous NHIs that could access the same system or part of a system, granted via their unique ‘Secret’. Now, here’s where it gets a little complex. ... Just as a busy airport needs security checkpoints to screen passengers and verify their credentials, a robust NHI management system is needed to accurately identify and manage all NHIs. 


How to Capitalize on Software Defined Storage, Securely and Compliantly

Because it fundamentally transforms data infrastructure, SDS is critical for technology executives to understand and capitalize on. It not only provides substantial cost savings and predictability while reducing the staff time required to manage physical hardware; SDS also makes companies much more agile and flexible in their business operations. For example, launching new initiatives or products that can start small and quickly scale is much easier with SDS. As a result, SDS does not just impact IT, it is a critical function across the enterprise. Software-defined storage in the cloud has brought major operational and cost benefits for enterprises. First, subscription business models enable buyers to make much more cost-conscious decisions and avoid wasting resources and usage. ... In addition, software-defined storage has also transformed technology management frameworks. SDS has enabled a move to agile DevOps, which includes real-time analytics resulting in faster iteration, less downtime and more efficient resource allocation. With real-time dashboards and alerts, organizations can now track key KPIs such as uptime and performance and react instantly. IT management can be more proactive by increasing storage or resource capacity when needed, rather than waiting for a crash to react.


The habits that set future-ready IT leaders apart

Constructive discomfort is the impetus to continuous learning, adaptability, agility, and anti-fragility. The concept of anti-fragile means designed for change. How do we build anti-fragile humans so they are unbreakable and prepared for tomorrow’s world, whatever it brings? We have these fault-tolerant designs where I can unplug a server and the system adapts and you don’t even know it. We want to create that same anti-fragility and fault tolerance in the human beings we train. We’re living in this ever-changing, accelerating VUCA [volatile, uncertain, complex, ambiguous] world, and there are two responses when you are presented with the unknown or the unexpected: You can freeze and be fearful and have it overcome you, or you can improvise, adapt, and overcome it by being a continuous learner and continuous adapter. I think resiliency in human beings is driven by this constructive discomfort, which creates a path to being continuous learners and continuous adapters. ... Strategic competence is knowing what hill to take, tactical competence is knowing how to take that hill safely, and technical competence is rolling up your sleeves and helping along the way. The leaders I admire have all three. The person who doesn’t have technical competence may set forth an objective and even chart the path to get there, but then they go have coffee. That leader is probably not going to do well. 

Daily Tech Digest - March 25, 2025


Quote for the day:

“Only put off until tomorrow what you are willing to die having left undone.” -- Pablo Picasso


Why FinOps Belongs in Your CI/CD Workflow

By codifying FinOps governance policies, teams can put guardrails in place while still granting developers autonomy to create resources. Guardrails don’t stifle innovation — they’re simply there to prevent costly mistakes. Every engineer makes mistakes, but guardrails ensure that those mistakes don’t lead to $10K-per-day cloud bills due to an overlooked database instance in a Terraform template taken off of GitHub. Additionally, policy enforcement must be dynamic and flexible, allowing organizations to adjust tagging, cost constraints and security requirements as they evolve. AI-driven governance can scale policy enforcement by identifying repeatable patterns and automating compliance checks across environments. ... Shifting left in FinOps isn’t just about cost visibility — it’s about ensuring cost efficiency is enforced as code, and continuously on your production systems. Legacy cost analysis tools provide visibility into cloud spending but rarely offer actionable cleanup recommendations. This includes actionable insights for cloud waste reduction, ensuring that predefined cost-saving policies highlight underutilized or orphaned resources while automated cleanup workflows help reclaim unused infrastructure.
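One way to codify such a guardrail is a CI check over the JSON output of `terraform show -json plan.out`. This is a sketch under assumptions: the required tags and the blocked instance type are invented policy choices, and real plans may nest attributes differently across providers.

```python
# Illustrative FinOps guardrail over a Terraform plan rendered as JSON.
import json
import sys

REQUIRED_TAGS = {"owner", "cost-center", "environment"}
BLOCKED_INSTANCE_TYPES = {"p4d.24xlarge"}  # example of a size needing manual approval

def check_plan(plan_path: str) -> list[str]:
    with open(plan_path) as f:
        plan = json.load(f)
    violations = []
    for change in plan.get("resource_changes", []):
        after = (change.get("change") or {}).get("after") or {}
        tags = set(after.get("tags") or {})
        missing = REQUIRED_TAGS - tags
        if missing:
            violations.append(f"{change['address']}: missing tags {sorted(missing)}")
        if after.get("instance_type") in BLOCKED_INSTANCE_TYPES:
            violations.append(f"{change['address']}: instance type requires approval")
    return violations

if __name__ == "__main__":
    problems = check_plan(sys.argv[1])
    for p in problems:
        print(f"GUARDRAIL: {p}")
    sys.exit(1 if problems else 0)  # a nonzero exit fails the pipeline stage
```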


How AI is changing cybersecurity for better or worse

“Agentic AI, capable of independently planning and acting to achieve specific goals, will be exploited by threat actors,” Lohrmann says. “These AI agents can automate cyberattacks, reconnaissance and exploitation, increasing attack speed and precision.” Malicious AI agents might adapt in real-time, bypassing traditional defenses and enhancing the complexity of attacks, Lohrmann says. AI-driven scams and social engineering will surge, Lohrmann says. “AI will enhance scams like ‘pig butchering’ — long-term financial fraud — and voice phishing, making social engineering attacks harder to detect,” he says. ... AI can also benefit organizations’ cybersecurity programs. “In general, AI-enabled platforms can provide a more robust, technology-backed line of defense against threat actors,” Cullen says. “Because AI can process huge amounts of data, it can provide faster and less obvious alerts to these threats.” Cybersecurity teams need to “fight fire with fire” by detecting and stopping threats with AI tool sets, Lohrmann says. For example, with new AI-enabled tools, employee actions such as inappropriate clicking on links, sending emails to the wrong people, and other policy violations can be detected and stopped before a breach occurs.


Learning AI governance lessons from SaaS and Web2

Autonomous systems are advancing quickly, with agents emerging that are capable of communicating with each other, executing complex tasks, and interacting directly with stakeholders. While these autonomous systems introduce exciting new use cases, they also create substantial challenges. For example, an AI agent automating customer refunds might interact with financial systems, log reason codes for trends analysis, monitor transactions for anomalies, and ensure compliance with company and regulatory policies — all while navigating potential risks like fraud or misuse. ... Early SaaS and Web2 companies often relied on reactive strategies to address governance issues as they emerged, adopting a “wait and see” approach. SaaS companies focused on basics like release sign-offs, access controls, and encryption, while Web2 platforms struggled with user privacy, content moderation, and data misuse. This reactive approach was costly and inefficient. SaaS applications scaled with manual processes for user access management and threat detection that strained resources. ... A continuous, automated approach is the key to effective AI governance. By embedding tools that enable these features into their operations, companies can proactively address reputational, financial, and legal risks while adapting to evolving compliance demands.


7 types of tech debt that could cripple your business

As a software developer, writing code feels easier than reviewing someone else’s and understanding how to use it. Searching and integrating open source libraries and components can be even easier, as the weight of long-term support isn’t at the top of many developers’ minds when they are pressured to meet deadlines and deploy frequently. ... “The average app contains 180 components, and failing to update them leads to bloated code, security gaps, and mounting technical debt. Just as no one wants to run mission-critical systems on decade-old hardware, modern SDLC and DevOps practices must treat software dependencies the same way — keep them updated, streamlined, and secure.” ... CIOs with sprawling architectures should consider simplifications; one step is to establish architectural observability practices. These include creating architecture and platform performance indicators by aggregating application-level monitoring, observability, code quality, total costs, DevOps cycle times, and incident metrics as a tool to evaluate where architecture impacts business operations. ... Joe Byrne, field CTO of LaunchDarkly, says, “Cultural debt can have several negative impacts, but specific to AI, a lack of proper engineering practices, resistance to innovation, tribal knowledge gaps, and failure to adopt modern practices all create significant roadblocks to successfully leveraging AI.”


Why people are the key to successful cloud migration

The consequences of overlooking the human element are significant. According to McKinsey’s research, European companies are five times more likely than their US counterparts to pursue an IT-led cloud migration, focusing primarily on ‘lifting and shifting’ existing workloads rather than transforming how people work. This approach might explain why many organisations are seeing limited returns on their investment. Migration creates a good opportunity to review methods and processes while ensuring teams have the tools they need to work efficiently. Without attention to both human impact and technological enablement, even the most technically sound migration can fail to deliver the desired results. ... The true value of cloud transformation extends far beyond technical metrics and cost savings. Organisations need to track employee satisfaction and engagement levels alongside traditional technical key performance indicators (KPIs). This includes monitoring adoption rates of new tools, time saved through improved processes, and skill development achievements. Business impact measures should encompass customer satisfaction, process efficiency improvements, and innovation metrics. Long-term value indicators such as employee retention rates, internal mobility, and team productivity provide a more complete picture of transformation success.


Evolving Technology and Corporate Culture Toward Autonomous IT and Agentic AI

Corporate culture will shape how seamlessly and effectively the modernization effort toward a more autonomous and intelligent enterprise operation will unfold. The best approaches align technology and culture along a structured journey model — assessing both the IT and workforce needs around data maturity, process automation, AI readiness, and success metrics. Such efforts can quickly propel organizations toward the largely self-sustaining capabilities and ecosystem of Agentic AI and autonomic IT. As IT teams become more comfortable relying on AI, machine learning, predictive analytics, and automation, they can begin to turn their attention to unlocking the power of Agentic AI. The term refers to advanced scenarios where machine and human resources blend to create an AI assistant capable of delivering accurate predictions, tailored recommendations, and intelligent automations that drive business efficiency and innovation. Such systems leverage generative AI and unsupervised ML combined with human-in-the-loop automation training models to revolutionize IT operations. Relinquishing the responsibility of mundane, repetitive tasks, IT teams can begin to reap the benefits of autonomic IT — a seamlessly integrated ecosystem of advanced technologies designed to enhance IT operations.


Building a Data Governance Strategy

In implementing a data strategy, a company can face several obstacles, including: Cultural resistance: Cultural resistance emerges throughout the DG journey, from initial strategy discussions through implementation and beyond. Teams and departments may resist changes to their established processes and workflows, requiring sustained change management efforts and clear communication of benefits. Lack of resources: Viewing governance solely through a compliance lens leads to underinvestment, with 54% of data and analytics professionals finding the biggest hurdle is a lack of funding for their data programs. In the meantime, the demands of data governance have increased significantly due to a complex and evolving regulatory landscape and accelerated digital transformation where businesses must rely heavily on data-driven systems. Scalability: Modern enterprises must manage data across an increasingly complex ecosystem of cloud platforms, personal devices, and decentralized systems. This dispersed data environment creates significant challenges for maintaining consistent governance practices and data quality. Demands for unstructured data: The growing demand for AI-driven insights requires organizations to govern increasing volumes of unstructured data, including videos, emails, documents, and images.


How CISOs can meet the demands of new privacy regulations

The responsibility for implementing and documenting privacy controls and policies falls primarily on the shoulders of the CISO, who must ensure that the organization’s procedures for managing information protect privacy data and meet regulatory requirements. Performing risk assessments that identify weaknesses and demonstrate that they are being addressed is a crucial step in the process, even more so now that they must be ready to produce risk assessments whenever regulatory bodies request them. As if CISOs needed an added incentive, regulators at the state and federal levels have been trending toward targeting organization management, particularly CISOs, in the wake of costly breaches. The consequences include hefty fines for organizations and, in worst-case scenarios, even jail sentences for CISOs. Responsibility for privacy protections also extends to third-party risks. Organizations can’t afford to rely solely on promises made by third-party providers because regulators and state attorneys general can hold an organization responsible for a breach, even if the exploited vulnerability belonged to a provider. Organizations need to implement a framework for third-party risk management that includes performing due diligence on the security postures of third parties.


Guess Who’s Hiding in Your Supply Chain

There are plenty of high-profile attacks that demonstrate how hackers use the supply chain to access their target organisation. One of the most notable attacks on a supply chain was on SolarWinds, where hackers deployed malicious code into its IT monitoring and management software, enabling them to reach other companies within the supply chain. Once hackers were inside, they were able to compromise data, networks and systems of thousands of public and private organisations. This included spying on government agencies, in what became a major breach to national security. Government departments noticed that sensitive emails were missing from their systems and major private companies such as Microsoft, Intel, and Deloitte were also affected. With internal workings exposed, hackers could also gain access to data and networks of customers and partners of those originally affected, allowing the attack to spiral in impact and affect thousands of organisations. Visibility is key to guard against future attacks – without it an organisation can’t effectively or reliably identify suspicious activity. ... When you put this into perspective, the amount of damage a cyber intruder could cause becomes unfathomable. Security teams must deploy a multi-layered arsenal of tools and tactics to cover their bases and should provision identities with only as much access as is absolutely necessary.


11 ways cybercriminals are making phishing more potent than ever

Brand impersonation continues to be a favored method to trick users into opening a malicious file or entering their details on a phishing site. Threat actors typically impersonate major brands, including document sharing platforms such as Microsoft’s OneDrive and SharePoint, and, increasingly frequently, DocuSign. Attackers exploit employees’ inherent trust in commonly used applications by spoofing their branding before tricking recipients into entering credentials or approving fraudulent document requests. ... Another significant phishing evolution involves abusing trusted services and content delivery platforms. Attackers are increasingly using legitimate document-signing and file-hosting services to distribute phishing lures. They first upload malicious content to a reputable provider, then craft phishing emails or messages that reference these trusted services and content delivery platforms. “Since these services host the attacker’s content, vigilant users who check URLs before clicking may still be misled, as the links appear to belong to legitimate and well-known platforms,” warns Greg ... Image-based phishing is becoming more complex. For example, fraudsters are crafting images to look like text-based emails to improve their apparent authenticity, while still bypassing conventional email filters.


Daily Tech Digest - March 12, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you made them feel." -- Mary Kay Ash



Rethinking Firewall and Proxy Management for Enterprise Agility

Firewall and proxy management follows a simple rule: block all ports by default and allow only essential traffic. Recognizing that developers understand their applications best, why not empower them to manage firewall and proxy changes as part of a “shift security left” strategy? In practice, however, tight deadlines tempt developers to take shortcuts: instead of scoping the exact IP range an application needs, they open connectivity to the entire internet with the intention of fixing it later. Temporary fixes, if left unchecked, can evolve into serious vulnerabilities. ... Periodically auditing firewall and proxy rule sets is essential to maintaining security, but it is not a substitute for a robust approval process. Firewalls and proxies are exposed to external threats, and attackers might exploit misconfigurations before periodic audits catch them. Blocking insecure connections on a firewall when the application is already live requires re-architecting the solution, which is costly and time-consuming. Thus, preventing risky changes must be the priority.
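
As a sketch of what a pre-audit guardrail might look like, the following hypothetical rule-set scan flags allow rules with overly broad source ranges, the "open to the entire internet" anti-pattern described above. The rule schema is invented; adapt it to your firewall's export format.

```python
# Audit sketch: flag allow rules whose source range is broader than policy permits.
# The rule format here is hypothetical.

import ipaddress

rules = [
    {"id": "r1", "action": "allow", "src": "10.20.0.0/16",   "port": 443},
    {"id": "r2", "action": "allow", "src": "0.0.0.0/0",      "port": 22},    # too broad
    {"id": "r3", "action": "allow", "src": "203.0.113.0/24", "port": 8443},
]

MAX_SOURCE_SIZE = 2**16   # flag anything broader than a /16; tune to your policy

for rule in rules:
    net = ipaddress.ip_network(rule["src"])
    if rule["action"] == "allow" and net.num_addresses > MAX_SOURCE_SIZE:
        print(f"{rule['id']}: allow from {rule['src']} on port {rule['port']} is overly broad")
```

Run in CI against proposed rule changes, a check like this catches the risky shortcut before approval rather than in a quarterly audit.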


Multicloud: Tips for getting it right

It’s obvious that a multicloud strategy — regardless of what it actually looks like — will further increase complexity. This is simply because each cloud platform works with its own management tools, security protocols and performance metrics. Anyone who wants to integrate multicloud into their IT landscape needs a robust management system that can handle the specific requirements of the different environments while ensuring an overview and control across all platforms. This is necessary not only for reasons of handling and performance but also to be as free as possible when choosing the optimal provider for the respective application scenario. This requires cross-platform technologies and tools. The large hyperscalers do provide interfaces for data exchange with other platforms as standard. ... In general, anyone pursuing a multicloud strategy should take steps in advance to ensure that complexity does not lead to chaos but to more efficient IT processes. Security is one of the main issues, and it is twofold: on the one hand, the individual services must be protected in their own right and within their respective platforms; on the other hand, the entire construct with its various architectures and systems must be secure. It is well known that the interfaces are potential gateways for unwelcome “guests”.
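
The cross-platform tooling the passage calls for often boils down to a thin normalization layer. A minimal sketch, with stubbed provider adapters standing in for real SDK calls (boto3, the Azure SDK, and so on), might look like this:

```python
# Sketch of a cross-cloud abstraction: each adapter normalizes its provider's
# inventory into one record shape so tooling can reason across platforms.
# Provider data is stubbed; real adapters would wrap the actual SDKs.

from dataclasses import dataclass
from typing import Iterable, Protocol

@dataclass
class Resource:
    provider: str
    resource_id: str
    kind: str            # e.g. "vm", "bucket"
    monthly_cost: float

class CloudAdapter(Protocol):
    def inventory(self) -> Iterable[Resource]: ...

class AwsAdapter:
    def inventory(self) -> Iterable[Resource]:
        yield Resource("aws", "i-0abc", "vm", 102.40)       # stubbed data

class AzureAdapter:
    def inventory(self) -> Iterable[Resource]:
        yield Resource("azure", "vm-web-01", "vm", 97.10)   # stubbed data

adapters: list[CloudAdapter] = [AwsAdapter(), AzureAdapter()]
total = sum(r.monthly_cost for a in adapters for r in a.inventory())
print(f"cross-cloud monthly spend: ${total:.2f}")
```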


FinOps and AI: A Winning Strategy for Cost-Efficient Growth

FinOps is a management approach focused on shared responsibility for cloud computing infrastructure and related costs. ... Companies are attempting to drink from the AI firehose, and unfortunately, they’re creating AI strategies in real time as they rush to drive revenue and staff productivity. Ideally, you want a foundation in place before using AI in operations. This should include an emphasis on cost management, resource allocation, and keeping tabs on ROI. This is also the focus of FinOps, which can prevent errors and improve processes to further AI adoption. ... To begin, companies should create a budget and forecast for the AI projects they want to take on. This planning is a pillar of FinOps and should accurately assess the total cost of initiatives, emphasizing resource allocation (including staffing) and eliminating billing overruns. Cost optimization can also help identify opportunities and reduce expenses. AI services in the cloud can bring scalability and cost efficiency, but they are much more sensitive to overruns and inefficient usage. Even if organizations are not implementing AI into end-user workloads, there is still an opportunity to craft internal systems utilizing AI to help identify operational efficiencies and implement cost controls on existing infrastructure.
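
As a rough illustration of the budgeting and forecasting step, the sketch below projects month-end spend per AI project from the daily run rate and flags likely overruns. All figures and project names are invented.

```python
# Budget-vs-actual sketch: project month-end AI spend from the run rate.
# A naive forecast for illustration; real FinOps tooling is more nuanced.

from datetime import date
import calendar

budget = {"rag-search": 12_000.00, "support-copilot": 8_000.00}   # monthly budgets
actual_mtd = {"rag-search": 7_900.00, "support-copilot": 2_100.00}

today = date(2025, 3, 18)
days_elapsed = today.day
days_in_month = calendar.monthrange(today.year, today.month)[1]

for project, spent in actual_mtd.items():
    projected = spent / days_elapsed * days_in_month   # run-rate extrapolation
    if projected > budget[project]:
        pct = projected / budget[project] * 100
        print(f"{project}: on track to hit {pct:.0f}% of budget (${projected:,.0f})")
```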


3 Signs Your Startup Needs a CTO — But Not As a Full-Time Hire

CTO as a service provides businesses with access to experienced technical leadership without the commitment of a full-time hire. This model allows startups to leverage specialized expertise on an as-needed basis. ... An on-demand expert can bridge this gap by offering leadership that goes beyond programming. This model provides access to strategic guidance on technology choices, project architecture and team dynamics. During a growth phase, management mistakes are rarely forgiven. ... Hiring a full-time CTO can strain tight budgets, diverting funds from critical areas like product development and market expansion. However, with the CTO as a service model, companies can access top-tier expertise tailored to their financial capabilities. This flexibility allows startups to engage a tech strategist on a project basis, paying only for the high-quality leadership they need, when they need it. ... Engaging outsourced expertise offers a viable solution, providing a fresh perspective on existing challenges at a cost that remains accessible, even amid resource constraints. This strategic move allows businesses to tap into a wealth of external knowledge, leveraging insights gained from diverse industry experiences. Such an external viewpoint can be invaluable, especially when navigating complex technical hurdles, ensuring that projects not only survive but thrive.


How to Turn Developer Team Friction Into a Positive Force

Developer team friction, though often viewed negatively, can become a positive force under certain conditions, McGinnis says. "Friction can enhance problem-solving abilities by highlighting weaknesses in current processes or solutions," he explains. "It prompts the team to address these issues, thereby improving their overall problem-solving skills." Team friction often occurs when a developer passionately advocates a new approach or solution. ... Friction can easily spiral out of control when retrospectives and feedback focus on individuals instead of addressing issues and problems jointly as a team. "Staying solution-oriented and helping each other achieve collective success for the sake of the team, should always be the No. 1 priority," Miears says. "Make it a safe space." As a leader it's important to empower every team member to speak up, Beck advises. Each team member has a different and unique perspective. "For instance, you could have one brilliant engineer who rarely speaks up, but when they do it’s important that people listen," he says. "At other times, you may have an outspoken member on your team who will speak on every issue and argue for their point, regardless of the situation."


Enterprise Architecture in the Digital Age: Navigating Challenges and Unleashing Transformative Potential

EA is about crafting a comprehensive, composable, and agile architecture-aligned blueprint that synchronizes an organization’s business processes, workforce, and technology with its strategic vision. Rooted in frameworks like TOGAF, it transcends IT, embedding itself into the very heart of a business. ... In this digital age, EA’s role is more critical than ever. It’s not just about maintaining systems; it’s about equipping organizations—whether agile startups or sprawling, successful enterprises—for the disruptions driven by rapid technological evolution and innovation. ... As we navigate inevitable future complexities, Enterprise Architecture stands as a critical differentiator between organizations that merely survive digital disruption and those that harness it for competitive advantage. The most successful implementations of EA share common characteristics: they integrate technical depth with business acumen, maintain adaptable governance frameworks, and continuously measure impact through concrete metrics. These aren’t abstract benefits—they represent tangible business outcomes that directly impact market position and financial performance. Looking forward, EA will increasingly focus on orchestrating complex ecosystems rather than simply mapping them. 


Generative AI Drives Emphasis on Unstructured Data Security

As organizations pivot their focus, the demand for vendors specializing in security solutions, such as data classification, encryption and access control, tailored to unstructured data is expected to increase. This increased demand reflects the necessity for robust and adaptable security measures that can effectively protect the vast and varied types of unstructured data organizations now manage. In tandem with this shift, the rising significance of unstructured data in driving business value and innovation compels organizations to develop expertise in unstructured data security. ... Organizations should prioritize investment in security controls specifically designed for unstructured data. This includes tools with advanced capabilities such as rapid data classification, entitlement management and unclassified data redaction. Solutions that offer prompt engineering and output filtering can also further enhance data security measures. ... Building a knowledgeable team is crucial for managing unstructured data security. Organizations should invest in staffing, training and development to cultivate expertise in this area. This involves hiring data security professionals with specialized skills and providing ongoing education to ensure they are equipped to handle the unique challenges associated with unstructured data. 
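
The classification-and-redaction control point can be illustrated with a toy example: pattern detectors that mask PII before a document is indexed or passed to a model. Real classifiers cover far more data types and typically combine ML with patterns; this sketch only shows where the control sits.

```python
# Toy classification-and-redaction pass for unstructured text.
# Two regex detectors stand in for a real classifier.

import re

DETECTORS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Return text with detected PII masked, plus the labels that fired."""
    hits = []
    for label, pattern in DETECTORS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[{label} REDACTED]", text)
    return text, hits

doc = "Contact jane.doe@example.com, SSN 123-45-6789, about the contract."
clean, labels = redact(doc)
print(labels)   # ['EMAIL', 'SSN']
print(clean)
```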


Quantum Pulses Could Help Preserve Qubit Stability, Researchers Report

The researchers used a model of two independent qubits, each interacting with its own environment through a process called pure dephasing. This form of decoherence arises from random fluctuations in the qubit’s surroundings, which gradually disrupt its quantum state. The study analyzed how different configurations of PDD pulses — applying them to one qubit versus both — affected the system’s evolution. By employing mathematical models that calculate the quantum speed limit based on changes in quantum coherence, the team measured the impact of periodic pulses on the system’s stability. When pulses were applied to both qubits, they observed a near-complete suppression of dephasing, while applying pulses to just one qubit provided partial protection. Importantly, the researchers investigated the effects of different pulse frequencies and durations to determine the optimal conditions for coherence preservation. ... While the study presents promising results, the effectiveness of PDD depends on the ability to deliver precise, high-frequency pulses. Practical quantum computing systems must contend with hardware limitations, such as pulse imperfections and operational noise, which could reduce the technique’s efficiency.
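
A toy, single-qubit simulation conveys the mechanism (this is deliberately simpler than the paper's two-qubit model): under random quasi-static frequency noise, free evolution dephases quickly, while ideal pi-pulses periodically invert the accumulated phase and refocus it.

```python
# Toy numerical sketch of periodic dynamical decoupling against pure dephasing.
# Each trajectory has a random, constant detuning; averaging exp(i*phase) over
# trajectories gives the coherence. Pi-pulses flip the sign of accumulation.

import numpy as np

rng = np.random.default_rng(0)
n_traj, n_steps, dt = 4000, 400, 0.01
pulse_every = 20                          # apply a pi-pulse every 20 steps

delta = rng.normal(0.0, 2.0, n_traj)      # quasi-static detuning per trajectory
phase_free = np.zeros(n_traj)             # no pulses: phase drifts freely
phase_pdd = np.zeros(n_traj)              # with pulses: sign toggles
sign = np.ones(n_traj)

for step in range(1, n_steps + 1):
    phase_free += delta * dt
    phase_pdd += sign * delta * dt
    if step % pulse_every == 0:
        sign = -sign                      # ideal instantaneous pi-pulse

coh_free = abs(np.mean(np.exp(1j * phase_free)))
coh_pdd = abs(np.mean(np.exp(1j * phase_pdd)))
print(f"coherence at t={n_steps * dt}: free={coh_free:.3f}, PDD={coh_pdd:.3f}")
```

With quasi-static noise the refocusing is essentially perfect at pulse-period boundaries; with faster-fluctuating noise the protection becomes partial, which is why pulse frequency matters in the study.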


Disaster Recovery Plan for DevOps

While developing a disaster recovery plan for your DevOps stack, it’s worth considering the challenges DevOps teams face here. DevOps ecosystems have complex architectures, with interconnected pipelines and environments (e.g., GitHub and Jira integrations). Thus, a single failure, whether due to a corrupted artifact or a ransomware attack, can cascade through the entire system. Moreover, the rapid pace of DevOps creates constant change, which can complicate data consistency and integrity checks during recovery. Another issue is data retention policies: SaaS tools often impose limited retention periods, usually ranging from 30 to 365 days. ... Your backup solution should allow you to:
- Automate your backups, scheduling them at the most appropriate interval so that no data is lost in the event of failure (a minimal sketch follows this list),
- Provide long-term or even unlimited retention, which will help you restore data from any point in time,
- Apply the 3-2-1 backup rule and ensure replication between all storage locations, so that if one backup location fails you can restore from another,
- Include ransomware protection: AES encryption with your own encryption key, immutable backups, and restore and DR capabilities.
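
A minimal sketch of the scheduling and 3-2-1 ideas, with placeholder storage metadata and a stubbed backup job; in practice the scheduler would be cron or your backup tool's own:

```python
# Backup scheduling sketch with a 3-2-1 sanity check: at least 3 copies,
# on 2 different storage media, with 1 copy offsite. Metadata is invented.

import sched, time

copies = [
    {"location": "primary-dc", "medium": "disk",  "offsite": False},
    {"location": "s3-bucket",  "medium": "cloud", "offsite": True},
    {"location": "tape-vault", "medium": "tape",  "offsite": True},
]

def satisfies_3_2_1(copies) -> bool:
    return (len(copies) >= 3
            and len({c["medium"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies))

def backup_job():
    assert satisfies_3_2_1(copies), "3-2-1 rule violated; fix replication first"
    print("running backup to all configured locations...")   # stubbed

scheduler = sched.scheduler(time.time, time.sleep)
scheduler.enter(0, 1, backup_job)
scheduler.run()
```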


The state of ransomware: Fragmented but still potent despite takedowns

“Law enforcement takedowns have disrupted major groups like LockBit, but newly formed groups quickly emerge, akin to a good old-fashioned game of whack-a-mole,” said Jake Moore, global cybersecurity advisor at ESET. “Double and triple extortion, including data leaks and DDoS threats, are now extremely common, and ransomware-as-a-service models make attacks even easier to launch, even by inexperienced criminals.” Moore added: “Law enforcement agencies have struggled over the years to take control of this growing situation as it is costly and resource heavy to even attempt to take down a major criminal network.” ... Meanwhile, enterprises are taking proactive measures to defend against ransomware attacks. These include implementing zero trust architectures, enhancing endpoint detection and response (EDR) solutions, and conducting regular exercises to improve incident response readiness. Anna Chung, principal researcher at Palo Alto Networks’ Unit 42, told CSO that advanced tools such as next-gen firewalls, immutable backups, and cloud redundancies, combined with regular patching, can help defend against cyberattacks. Greater use of gen AI technologies by attackers is likely to bring further challenges, Chung warned.

Daily Tech Digest - March 06, 2025


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe


RIP (finally) to the blockchain hype

Fowler is not alone in his skepticism about blockchain. It hasn’t yet delivered practical benefits at scale, says Salome Mikadze, co-founder at software development firm Movadex. Still, the technology is showing promise in some niche areas, such as secure data sharing or certain supply chain scenarios, she says. “Most of us agree that while it’s an exciting idea, its real-world applications are still limited,” Mikadze adds. “In short, blockchain is on the shelf for now — something we check in on periodically, but not a priority until it proves its worth in the real world.” The crazy hype around digital art NFTs turned blockchain into a bit of a joke, adds Trevor Fry, an IT consultant and fractional CTO. Many organizations haven’t found other uses for blockchain, he says. “Blockchain was marketed as this must-have innovation, but in practice, it doesn’t solve a problem that many companies or people have,” he says. “Unlike AI and LLMs, which have real-world applications across industries and have such a low barrier to entry that everyone can easily try it, blockchain’s use cases are very niche, though not useless.” Fry sees eventual benefits in supply chain tracking and data integrity, situations where a secure and decentralized record can matter. “But right now, it’s not solving a big enough pain point for most organizations to justify the complexity and cost and hiring people who know how to develop and work with it,” he adds. 


The 5 stages of incident response grief

Starting with denial and moving through anger, bargaining, depression, and acceptance, security experts can take a few lessons from the grieving process ... when you first see the evidence of an incident in progress, you might first consider alternate explanations. Is it a false alarm? Did an employee open the wrong application by mistake? Maybe an automated process is misfiring, or a misconfiguration is causing an alert to trigger. You want to consider your options before assuming the worst. ... Once you confirm that it isn’t a false alarm and there is, in fact, an attacker present in the system, your first thought is probably, “this is going to consume the next few days, weeks, or months of my life.” You may become angry at a specific team for not following security guidelines or shortcutting a process. ... Sadly, getting an intruder out of your system is rarely a quick and easy process. But understanding the layout of your digital landscape and working with stakeholders throughout the organization can help ensure you’re making the right decisions at the right time. ... With the recovery process well underway, it’s time to take what you’ve learned and apply it. Now is the time to start bringing in all those suppressed thoughts from the former stages. That begins with understanding what went wrong. What was the cyber kill chain? What vulnerabilities did they exploit to gain access to certain systems? How did they evade detection solutions? Are certain solutions not working as well as they should? 


How to Manage Software Supply Chain Risks

Developers can’t manage risks on their own, nor can CISOs. “Effectively protecting, defending and responding to supply chain events should be a combination among many departments [including] security, IT, legal, development, product, etc.,” says Ventura. “Not one department should fully own the entire supply chain program as it touches many business units within an organization. Spearheading the program typically falls under the CISO or the security team as cybersecurity risks should be considered business risks.” One of the most common mistakes is having a false sense of security. “Thinking with the mindset of, ‘If I haven't had a supply chain issue before, why fix it now?’ leads to complacency and a lack of taking cybersecurity seriously throughout the business,” says Ventura. “Another common mistake is organizations relying too heavily on vendor assessments, where an organization can say they are secure, but haven't put in robust controls. Trusting an assessment completely without verification can lead to major issues down the road.” By failing to focus on supply chain risks, organizations put themselves at high risk of data breaches, financial loss, regulatory and compliance fines, and business and reputational damage.


FinOps for Software as a Service (SaaS)

The challenges of managing public cloud spending are mirrored in the proliferation of SaaS across organizations: decentralized, individual-level procurement and corporate-credit-card-funded purchase orders leave limited organization-level visibility into cost and usage. Additionally, SaaS is a consideration in typical build-vs-buy-vs-rent discussions. Engineers often have a choice between building their own solutions and purchasing one from a SaaS provider, so there is less of a clear distinction between workloads managed in the public cloud and workloads managed by SaaS vendors (or shared between them). The spend is all part of the same value creation process, and engineering teams want to know the total cost of running their solutions. Naturally, the other FinOps goals and outcomes follow. By iteratively applying Framework Capabilities to achieve the outcomes described by the four FinOps Domains (Understand Cost & Usage, Quantify Business Value, Optimize Cost & Usage, and Manage the FinOps Practice), the same financial accountability and transparency can be established for SaaS spending, keeping SaaS costs aligned with business goals and the associated technology strategy.
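
The first visibility step often amounts to folding scattered, card-funded subscriptions into one cost-per-team view. A minimal sketch with invented invoice records; real data would come from invoices, SSO logs, or a spend-management feed:

```python
# SaaS cost visibility sketch: aggregate scattered subscriptions by team.

from collections import defaultdict

invoices = [
    {"vendor": "Figma",   "team": "design",   "monthly": 540.00},
    {"vendor": "Datadog", "team": "platform", "monthly": 3_200.00},
    {"vendor": "Notion",  "team": "design",   "monthly": 180.00},
    {"vendor": "Notion",  "team": "platform", "monthly": 260.00},   # duplicate vendor
]

by_team = defaultdict(float)
for inv in invoices:
    by_team[inv["team"]] += inv["monthly"]

for team, total in sorted(by_team.items(), key=lambda kv: -kv[1]):
    print(f"{team:10s} ${total:>9,.2f}/month")
```

Even this crude roll-up surfaces the overlapping-subscription problem (here, two teams paying separately for the same vendor).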


The role of data centres in national security

The UK government’s recent decision to designate certain data centres as Critical National Infrastructure (CNI) represents a significant shift in recognising their role in safeguarding the nation’s essential services. Data centres are the backbone of industries like healthcare, finance and telecommunications, placing them at increased risk of cyberattacks. While this move enhances protection for specific facilities, it also raises important questions for the wider industry. ... A critical first step for data centres is to conduct a thorough security audit. This process helps to create a complete inventory of all endpoints across both OT and IT environments, including legacy devices that may have been overlooked. Understanding the scope of connected systems and their potential vulnerabilities provides a clear foundation for implementing effective security measures. Once an inventory is established, technologies like Endpoint Detection and Response (EDR) can be deployed to monitor critical endpoints, including servers and workstations, for signs of malicious activity. EDR solutions enable rapid containment of threats, preventing them from spreading across the network. Extended Detection and Response (XDR) builds on this by unifying threat detection across endpoints, networks and servers, offering a holistic view of vulnerabilities and enabling more comprehensive protection.


Will the future of software development run on vibes?

When it comes to defining what exactly constitutes vibe coding, Willison makes an important distinction: "If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant." Vibe coding, by contrast, involves accepting code without fully understanding how it works. While "vibe coding" originated with Karpathy as a playful term, it may encapsulate a real shift in how some developers approach programming tasks—prioritizing speed and experimentation over deep technical understanding. And to some people, that may be terrifying. Willison emphasizes that developers need to take accountability for their code: "I firmly believe that as a developer you have to take accountability for the code you produce—if you're going to put your name to it you need to be confident that you understand how and why it works—ideally to the point that you can explain it to somebody else." He also warns about a common path to technical debt: "For experiments and low-stake projects where you want to explore what's possible and build fun prototypes? Go wild! But stay aware of the very real risk that a good enough prototype often faces pressure to get pushed to production."


How the Emerging Market for AI Training Data is Eroding Big Tech’s ‘Fair Use’ Copyright Defense

“It would be impossible to train today’s leading AI models without using copyrighted materials,” the company wrote in testimony submitted to the House of Lords. “Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.” Missed in OpenAI’s pleading was the obvious point: Of course AI models need to be trained with high-quality data. Developers simply need to fairly remunerate the owners of those datasets for their use. One could equally argue that “without access to food in supermarkets, millions of people would starve.” Yes. Indeed. But we do need to pay the grocer. ... Anthropic, developer of the Claude AI model, answered a copyright infringement lawsuit one year ago by arguing that the market for training data simply didn’t exist. It was entirely theoretical—a figment of the imagination. In federal court, Anthropic submitted an expert opinion from economist Steven R. Peterson. “Economic analysis,” wrote Peterson, “shows that the hypothetical competitive market for licenses covering data to train cutting-edge LLMs would be impracticable.” Obtaining permission from property owners to use their property: So bothersome and expensive.


3 Ways FinOps Strategies Can Boost Cyber Defenses

By providing visibility into cloud costs, FinOps uncovers underutilized or redundant resources and subscriptions, or over-provisioned budgets that can be redirected to strengthen cybersecurity. Through continuous real-time monitoring, organizations can proactively identify trends, anomalies, or emerging inefficiencies, ensuring they align their resources with strategic goals. For example, regular audits may uncover unnecessary overlapping subscriptions or unused security features, while ongoing monitoring ensures these inefficiencies do not recur. ... A FinOps approach also involves continuous monitoring, which not only identifies potential security gaps before they escalate but also matches security measures with organizational goals. Furthermore, FinOps helps with financial risk management by assessing the costs of potential breaches and allocating resources effectively. Through ongoing risk assessments and strategic budget adjustments, organizations can make better use of their security investments, which will help to maintain a robust defense against threats while still achieving their business aims. ... Moreover, governance frameworks are built into FinOps principles, which leads to consistent application of security policies and procedures. This includes setting up governance frameworks that define roles, responsibilities, and accountability for security and financial management.
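
Continuous monitoring of this kind can start very simply, for instance a z-score check that flags days whose spend deviates sharply from the trailing baseline, the sort of anomaly that can signal waste or, occasionally, compromise such as cryptomining. A sketch with invented figures:

```python
# Spend anomaly sketch: flag a day that deviates sharply from the baseline.

import statistics

daily_spend = [1040, 990, 1015, 1002, 978, 1021, 995, 1008, 1890]  # last day spikes

baseline = daily_spend[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

today = daily_spend[-1]
z = (today - mean) / stdev
if abs(z) > 3:
    print(f"spend anomaly: ${today} vs baseline ${mean:.0f} (z={z:.1f}) - investigate")
```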


Black Inc has asked authors to sign AI agreements. But why should writers help AI learn how to do their job?

Writers were reportedly asked to grant Black Inc key rights within their copyright to help develop machine learning and AI systems. This includes using the writers’ work in the training, testing, validation and subsequent deployment of AI systems. The contract is offered on an opt-in basis, said a Black Inc spokesperson, and the company would negotiate with “reputable” AI companies. But authors, literary agents and the Australian Society of Authors have criticised the move. “I feel like we’re being asked to sign our own death warrant,” said novelist Laura Jean McKay. ... In theory, the licensing solution should hold true for authors, publishers and AI companies. After all, a licensing system would offer a stream of revenue. But in reality there might just be a trickle of income for authors, and the basis for providing it under existing laws might be quite weak. Authors and publishers are depending on copyright law to protect them. Unfortunately, copyright law works in relation to copying, not on the development of capabilities in probability-driven language outputs. ... To put it another way, once the AI has learned how to write, it has acquired that capability. It is true that AI can be manipulated to produce output that reflects copyright-protected content.


Outsmarting Cyber Threats with Attack Graphs

An attack graph is a visual representation of potential attack paths within a system or network. It maps how an attacker could move through different security weaknesses - misconfigurations, vulnerabilities, and credential exposures, etc. - to reach critical assets. Attack graphs can incorporate data from various sources, continuously update as environments change, and model real-world attack scenarios. Instead of focusing solely on individual vulnerabilities, attack graphs provide the bigger picture - how different security gaps, like misconfigurations, credential issues, and network exposures, could be used together to pose serious risk. Unlike traditional security models that prioritize vulnerabilities based on severity scores alone, attack graphs loop in exploitability and business impact. The reason? Just because a vulnerability has a high CVSS score doesn't mean it's an actual threat to a given environment. Attack graphs add critical context, showing whether a vulnerability can actually be used in combination with other weaknesses to reach critical assets. Attack graphs are also able to provide continuous visibility. This, in contrast to one-time assessments like red teaming or penetration tests, which can quickly become outdated.