Showing posts with label metrics. Show all posts
Showing posts with label metrics. Show all posts

Daily Tech Digest - June 16, 2025


Quote for the day:

"A boss has the title, a leader has the people." -- Simon Sinek


How CIOs are getting data right for AI

Organizations that have taken steps to better organize their data are more likely to possess data maturity, a key attribute of companies that succeed with AI. Research firm IDC defines data maturity as the use of advanced data quality, cataloging and metadata, and data governance processes. The research firm’s Office of the CDO Survey finds firms with data maturity are far more likely than other organizations to have generative AI solutions in production. ... “We have to be mindful of what we put into public data sets,” says Yunger. With that caution in mind, Servier has built a private version of ChatGPT on Microsoft Azure to ensure that teams benefit from access to AI tools while protecting proprietary information and maintaining confidentiality. The gen AI implementation is used to speed the creation of internal documents and emails, Yunger says. In addition, personal data that might crop up in pharmaceutical trials must be treated with the utmost caution to comply with the European Union’s AI Act,  ... To achieve what he calls “sustainable AI,” AES’s Reyes counsels the need to strike a delicate balance: implementing data governance, but in a way that does not disrupt work patterns. He advises making sure everyone at your company understands that data must be treated as a valuable asset: With the high stakes of AI in play, there is a strong reason it must be accurately cataloged and managed.


Alan Turing Institute reveals digital identity and DPI risks in Cyber Threats Observatory Workshop

The trend indicates that threat actors could be targeting identity mechanisms such as authentication, session management, and role-based access systems. The policy implication for governments translates to a need for more detailed cyber incident reporting across all critical sectors, the institute recommends. An issue is the “weakest link” problem. A well-resourced sector like finance might invest in strong security, but their dependence on, say, a national ID system means they are still vulnerable if that ID system is weak. The institute believes this calls for viewing DPI security as a public good. Improvements in one sector’s security, such as “hardened” digital ID protocols, could benefit other sectors’ security. Integrating security and development teams is recommended as is promoting a culture of shared cyber responsibility. Digital ID, government, healthcare, and finance must advance together on the cybersecurity maturity curve, the report says, as a weakness in one can undermine the public’s trust in all. The report also classifies CVEs by attack vectors: Network, Local, Adjacent Network, and Physical. Remote Network threats were dominant, particularly affecting finance and digital identity platforms. But Local and Physical attack surfaces, especially in health and government, are increasingly relevant due to on-premise systems and biometric interfaces, according to the Cyber Threat Observatory.


The Advantages Of Machine Learning For Large Restaurant Chains

Machine learning can not only assist in the present activities but contribute to steering long-term planning and development. Decision-makers can use these trends to notice opportunities to explore new markets, develop new products, or redistribute resources when they discover the patterns across the different locations, customer groups, and product categories. These insights dig deeper into the superficial data and reveal trends that might not have been apparent by just manual analysis. The capability to make data-driven decisions becomes even more significant with the growth of restaurant chains. Machine learning tools provide scalable insights that can be applied in parallel with the rest of the business objectives when combined with other technologies like a drive thru system or cloud-based analytics platforms. The opening of a new venue or the optimizing of an advertisement campaign, machine learning enables the management levels to have the information needed to make a decision with assured confidence and competence. ... Machine learning is transforming how major restaurant chains run their business, providing an unbeatable mix of accuracy, speed, and flexibility over their older equivalents. 


How Staff+ Engineers Can Develop Strategic Thinking

For risk and innovation, you need to understand what your organization values the most. Everybody has a culture memo and a set of tenets they follow, but these are part of unsaid rules, something that every new hire will learn by the first week of their onboarding, which is not written out loud and clear. In my experience, there are different kinds of organizations. Some care about execution, like results above everything, top line, bottom line. Others care about data-driven decision-making, customer sentiment, and keeping adapting. There are others who care about storytelling and relationships. What does this really mean? If you fail to influence, if you fail to tell a story about what ideas you have, what you're really trying to do, to build trust and relationships, you may not succeed in that environment, because it's not enough for you to be smart and knowing it all. You also need to know how to convey your ideas and influence people. When you talk about innovation, there are companies that really pride themselves on experimentation, staying ahead of the curve. You can look at this by how many of them have an R&D department, and how much funding they put into that. Then, what's their role in the open-source community, and how much they contribute towards it.


Legal and Policy Responses to Spyware: A Primer

There have been a number of international efforts to combat at least some aspects of the harms of commercial spyware. These include the US-led Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware and the Pall Mall Process, an ongoing multistakeholder undertaking focussed on this issue. So far, principles, norms, and calls for businesses to comply with the United Nations Guiding Principles on Business and Human Rights (UNGPs) have emerged, and Costa Rica has called for a full moratorium, but no well-orchestrated international action has been fully brought to fruition. However, private companies and individuals, regulators, and national or regional governments have taken action, employing a wide range of legal and regulatory tools. Guidelines and proposals have also been articulated by governmental and non-governmental organizations, but we will focus here on measures that are existent and, at least in theory, enforceable. While some attempts at combating spyware, like WhatsApp’s, have been effective, others have not. Analyzing the strengths and weaknesses of each approach is beyond the scope of this article, and, considering the international nature of spyware, what fails in one jurisdiction may be successful in another.


Red Teaming AI: The Build Vs Buy Debate

In order to red team your AI model, you need to have a deep understanding of the system you are protecting. Today’s models are complex multimodal, multilingual systems. One model might take in text, images, code, and speech with any single input having the potential to break something. Attackers know this and can easily take advantage. For example, a QR code might contain an obfuscated prompt injection or a roleplay conversation might lead to ethical bypasses. This isn’t just about keywords, but about understanding how intent hides beneath layers of tokens, characters, and context. The attack surface isn’t just large, it’s effectively infinite. ... Building versus buying is an age-old debate. Fortunately, the AI security space is maturing rapidly, and organizations have a lot of choices to implement from. After you have some time to evaluate your own criteria against Microsoft, OWASP and NIST frameworks, you should have a good idea of what your biggest risks are and key success criteria. After considering risk mitigation strategies, and assuming you want to keep AI turned on, there are some open-source deployment options like Promptfoo and Llama Guard, which provide useful scaffolding for evaluating model safety. Paid platforms like Lakera, Knostic, Robust Intelligence, Noma, and Aim are pushing the edge on real-time, content-aware security for AI, each offering slightly different tradeoffs in how they offer protection. 


The Impact of Quantum Decryption

There are two key quantum mechanical phenomena, superposition and entanglement, that enable qubits to operate fundamentally differently than classical bits. Superposition allows a qubit to exist in a probabilistic combination of both 0 and 1 states simultaneously, significantly increasing the amount of information a small number of qubits can hold.  ... Quantum decryption of data stolen using current standards could have pervasive impacts. Government secrets, more long-term data, and intellectual property remain at significant risk even if decrypted years after a breach. Decrypted government communications, documents, or military strategies could compromise national security. An organization’s competitive advantage could be undermined by trade secrets being exposed. Meanwhile, data such as credit card information will diminish over time due to expiration dates and the issuance of new cards. ... For organizations, the ability of quantum computers to decrypt previously stolen data could result in substantial financial losses due to data breaches, corporate espionage, and potential legal liabilities. The exposure of sensitive corporate information, such as trade secrets and strategic plans, could provide competitors with an unfair advantage, leading to significant financial harm. 


Don't let a crisis of confidence derail your data strategy

In an age of AI, the options that range from on-premise facilities to colocation, or public, private and hybrid clouds, are business-critical decisions. These decisions are so important because such choices impact the compliance, cost efficiency, scalability, security, and agility that can make or break a business. In the face of such high stakes, it is hardly surprising that confidence is the battleground on which deals for digital infrastructure are fought. ... Commercially, Total Cost of Ownership (TCO) has become another key factor. Public cloud was heavily promoted on the basis of lower upfront costs. However, businesses have seen the "pay-as-you-go" model lead to escalating operational expenses. In contrast, businesses have seen the cost of colocation and private cloud become more predictable and attractive for long-term investment. Some reports suggest that at scale, colocation can offer significant savings over public cloud, while private cloud can also reduce costs by eliminating hardware procurement and management. Another shift in confidence has been that public cloud no longer guarantees the easiest path to growth. Public cloud has traditionally excelled in rapid, on-demand scalability. This agility was a key driver for adoption, as businesses sought to expand quickly.


The Anti-Metrics Era of Developer Productivity

The need to measure everything truly spiked during COVID when we started working remotely, and there wasn’t a good way to understand how work was done. Part of this also stemmed from management’s insecurities about understanding what’s going on in software engineering. However, when surveyed about the usefulness of developer productivity metrics, most leaders admit that the metrics they track are not representative of developer productivity and tend to conflate productivity with experience. And now that most of the code is written by AI, measuring productivity the same way makes even less sense. If AI improves programming effort by 30%, does that mean we get 30% more productivity?” ... Whether you call it DevEx or platform engineering, the lack of friction equals happy developers, which equals productive developers. In the same survey, 63% of developers said developer experience is important for their job satisfaction. ... Instead of building shiny dashboards, engineering leads should focus on developer experience and automated workflows across the entire software development life cycle: development, code reviews, builds, tests and deployments. This means focusing on solving real developer problems instead of just pointing at the problems.


Why banks’ tech-first approach leaves governance gaps

Integration begins with governance. When cybersecurity is properly embedded in enterprise-wide governance and risk management, security leaders are naturally included in key forums, including strategy discussions, product development, and M&A decision making. Once at the table, the cybersecurity team must engage productively. They must identify risks, communicate them in business terms AND collaborate with the business to develop solutions that enable business goals while operating within defined risk appetites. The goal is to make the business successful, in a safe and secure manner. Cyber teams that focus solely on highlighting problems risk being sidelined. Leaders must ensure their teams are structured and resourced to support business goals, with appropriate roles and encouragement of creative risk mitigation approaches. ... Start by ensuring there is a regulatory management function that actively tracks and analyzes emerging requirements. These updates should be integrated into the enterprise risk management (ERM) framework and governance processes—not handled in isolation. They should be treated no differently than any other new business initiatives. ... Ultimately, aligning cyber governance with regulatory change requires cross-functional collaboration, early engagement, and integration into strategic risk processes, not just technical or compliance checklists.

Daily Tech Digest - June 15, 2025


Quote for the day:

“Whenever you find yourself on the side of the majority, it is time to pause and reflect.” -- Mark Twain



Gazing into the future of eye contact

Eye contact is a human need. But it also offers big business benefits. Brain scans show that eye contact activates parts of the brain linked to reading others’ feelings and intentions, including the fusiform gyrus, medial prefrontal cortex, and amygdala. These brain regions help people figure out what others are thinking or feeling, which we all need for trusting business and work relationships. ... If you look into the camera to simulate eye contact, you can’t see the other person’s face or reactions. This means both people always appear to be looking away, even if they are trying to pay attention. It is not just awkward — it changes how people feel and behave. ... The iContact Camera Pro is a 4K webcam that uses a retractable arm that places the camera right in your line of sight so that you can look at the person and the camera at the same time. It lets you adjust video and audio settings in real time. It’s compact and folds away when not in use. It’s also easy to set up with a USB-C connection and works with Zoom, Microsoft Teams, Google Meet, and other major platforms. ... Finally, there’s Casablanca AI, software that fixes your gaze in real time during video calls, so it looks like you’re making eye contact even when you’re not. It works by using AI and GAN technology to adjust both your eyes and head angle, keeping your facial expressions and gestures natural, according to the company.


New York passes a bill to prevent AI-fueled disasters

“The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” said Senator Gounardes. “The people that know [AI] the best say that these risks are incredibly likely […] That’s alarming.” The RAISE Act is now headed for New York Governor Kathy Hochul’s desk, where she could either sign the bill into law, send it back for amendments, or veto it altogether. If signed into law, New York’s AI safety bill would require the world’s largest AI labs to publish thorough safety and security reports on their frontier AI models. The bill also requires AI labs to report safety incidents, such as concerning AI model behavior or bad actors stealing an AI model, should they happen. If tech companies fail to live up to these standards, the RAISE Act empowers New York’s attorney general to bring civil penalties of up to $30 million. The RAISE Act aims to narrowly regulate the world’s largest companies — whether they’re based in California (like OpenAI and Google) or China (like DeepSeek and Alibaba). The bill’s transparency requirements apply to companies whose AI models were trained using more than $100 million in computing resources (seemingly, more than any AI model available today), and are being made available to New York residents.


The ZTNA Blind Spot: Why Unmanaged Devices Threaten Your Hybrid Workforce

The risks are well-documented and growing. But many of the traditional approaches to securing these endpoints fall short—adding complexity without truly mitigating the threat. It’s time to rethink how we extend Zero Trust to every user, regardless of who owns the device they use. ... The challenge of unmanaged endpoints is no longer theoretical. In the modern enterprise, consultants, contractors, and partners are integral to getting work done—and they often need immediate access to internal systems and sensitive data. BYOD scenarios are equally common. Executives check dashboards from personal tablets, marketers access cloud apps from home desktops, and employees work on personal laptops while traveling. In each case, IT has little to no visibility or control over the device’s security posture. ... To truly solve the BYOD and contractor problem, enterprises need a comprehensive ZTNA solution that applies to all users and all devices under a single policy framework. The foundation of this approach is simple: trust no one, verify everything, and enforce policies consistently. ... The shift to hybrid work is permanent. That means BYOD and third-party access are not exceptions—they’re standard operating procedures. It’s time for enterprises to stop treating unmanaged devices as an edge case and start securing them as part of a unified Zero Trust strategy.


3 reasons I'll never trust an SSD for long-term data storage

SSDs rely on NAND flash memory, which inevitably wears out after a finite number of write cycles. Every time you write data to an SSD and erase it, you use up one write cycle. Most manufacturers specify the write endurance for their SSDs, which is usually in terabytes written (TBW). ... When I first started using SSDs, I was under the impression that I could just leave them on the shelf for a few years and access all my data whenever I wanted. But unfortunately, that's not how NAND flash memory works. The data stored in each cell leaks over time; the electric charge used to represent a bit can degrade, and if you don't power on the drive periodically to refresh the NAND cells, those bits can become unreadable. This is called charge leakage, and it gets worse with SSDs using lower-end NAND flash memory. Most consumer SSDs these days use TLC and QLC NAND flash memory, which aren't as great as SLC and MLC SSDs at data retention. ... A sudden power loss during critical write operations can corrupt data blocks and make recovery impossible. That's because SSDs often utilize complex caching mechanisms and intricate wear-leveling algorithms to optimize performance. During an abrupt shutdown, these processes might fail to complete correctly, leaving your data corrupted.


Beyond the Paycheck: Where IT Operations Careers Outshine Software Development

On the whole, working in IT tends to be more dynamic than working as a software developer. As a developer, you're likely to spend the bulk of your time writing code using a specific set of programming languages and frameworks. Your day-to-day, month-to-month, and year-to-year work will center on churning out never-ending streams of application updates. The tasks that fall to IT engineers, in contrast, tend to be more varied. You might troubleshoot a server failure one day and set up a RAID array the next. You might spend part of your day interfacing with end users, then go into strategic planning meetings with executives. ... IT engineers tend to be less abstracted from end users, with whom they often interact on a daily basis. In contrast, software engineers are more likely to spend their time writing code while rarely, if ever, watching someone use the software they produce. As a result, it can be easier in a certain respect for someone working in IT as compared to software development to feel a sense of satisfaction.  ... While software engineers can move into adjacent types of roles, like site reliability engineering, IT operations engineers arguably have a more diverse set of easily pursuable options if they want to move up and out of IT operations work.


Europe is caught in a cloud dilemma

The European Union is worried about its reliance on the leading US-based cloud providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These large-scale players hold an unrivaled influence over the cloud sector and manage vital infrastructure essential for driving economies and fostering innovation. European policymakers have raised concerns that their heavy dependence exposes the continent to vulnerabilities, constraints, and geopolitical uncertainties. ... Europe currently lacks cloud service providers that can challenge those global Goliaths. Despite efforts like Gaia-X that aim to change this, it’s not clear if Europe can catch up anytime soon. It will be a prohibitively expensive undertaking to build large-scale cloud infrastructure in Europe that is both cost-efficient and competitive. In a nutshell, Europe’s hope to adopt top-notch cloud technology without the countries that currently dominate the industry is impractical, considering current market conditions. ... Often companies view cloud integration as merely a checklist or set of choices to finalize their cloud migration. This frequently results in tangled networks and isolated silos. Instead, businesses should overhaul their existing cloud environment with a comprehensive strategy that considers both immediate needs and future goals as well as the broader geopolitical landscape.


Applying Observability to Leadership to Understand and Explain your Way of Working

Leadership observability means observing yourself as you lead. Alex Schladebeck shared at OOP conference how narrating thoughts, using mind maps, asking questions, and identifying patterns helped her as a leader to explain decisions, check bias, support others, and understand her actions and challenges. Employees and other leaders around you want to understand what leads to your decisions, Schladebeck said. ... Heuristics give us our "gut feeling". And that’s useful, but it’s better if we’re able to take a step back and get explicit about how we got to that gut feeling, Schladebeck mentioned. If we categorise and label things and explain what experiences lead us to our gut feeling, then we have the option of checking our bias and assumptions, and can help others to develop the thinking structures to make their own decisions, she explained ... Schladebeck recommends that leaders narrate their thoughts to reflect on, and describe their own work to the ones they are leading. They can do this by asking themselves questions like, "Why do I think that?", "What assumptions am I basing this on?", "What context factors am I taking into account?" Look for patterns, categories, and specific activities, she advised, and then you can try to explain these things to others around you. To visualize her thinking as a leader, Schladebeck uses mind maps.


Data Mesh: The Solution to Financial Services' Data Management Nightmare

Data mesh is not a technology or architecture, but an organizational and operational paradigm designed to scale data in complex enterprises. It promotes domain-oriented data ownership, where teams manage their data as a product, using a self-service infrastructure and following federated governance principles. In a data mesh, any team or department within an organization becomes accountable for the quality, discoverability, and accessibility of the data products they own. The concept emerged around five years ago as a response to the bottlenecks and limitations created by centralized data engineering teams acting as data gatekeepers. ... In a data mesh model, data ownership and stewardship are assigned to the business domains that generate and use the data. This means that teams such as credit risk, compliance, underwriting, or investment analysis can take responsibility for designing and maintaining the data products that meet their specific needs. ... Data mesh encourages clear definitions of data products and ownership, which helps reduce the bottlenecks often caused by fragmented data ownership or overloaded central teams. When combined with modern data technologies — such as cloud-native platforms, data virtualization layers, and orchestration tools — data mesh can help organizations connect data across legacy mainframes, on-premises databases, and cloud systems.


Accelerating Developer Velocity With Effective Platform Teams

Many platform engineering initiatives fail, not because of poor technology choices, but because they miss the most critical component: genuine collaboration. The most powerful internal developer platforms aren’t just technology stacks; they’re relationship accelerators that fundamentally transform the way teams work together. Effective platform teams have a deep understanding of what a day in the life of a developer, security engineer or operations specialist looks like. They know the pressures these teams face, their performance metrics and the challenges that frustrate them most. ... The core mission of platform teams is to enable faster software delivery by eliminating complexity and cognitive load. Put simply: Make the right way the easiest way. Developer experience extends beyond function; it’s about creating delight and demonstrating that the platform team cares about the human experience, not just technical capabilities. The best platforms craft natural, intuitive interfaces that anticipate questions and incorporate error messages that guide, rather than confuse. Platform engineering excellence comes from making complex things appear simple. It’s not about building the most sophisticated system; it’s about reducing complexity so developers can focus on creating business value.


AI agents will be ambient, but not autonomous - what that means for us

Currently, the AI assistance that users receive is deterministic; that is, humans are expected to enter a command in order to receive an intended outcome. With ambient agents, there is a shift in how humans fundamentally interact with AI to get the desired outcomes they need; the AI assistants rely instead on environmental cues. "Ambient agents we define as agents that are triggered by events, run in the background, but they are not completely autonomous," said Chase. He explains that ambient agents benefit employees by allowing them to expand their magnitude and scale themselves in ways they could not previously do. ... When talking about these types of ambient agents with advanced capabilities, it's easy to become concerned about trusting AI with your data and with executing actions of high importance. To tackle that concern, it is worth reiterating Chase's definition of ambient agents -- they're "not completely autonomous." ... "It's not deterministic," added Jokel. "It doesn't always give you the same outcome, and we can build scaffolding, but ultimately you still ant a human being sitting at the keyboard checking to make sure that this decision is the right thing to do before it gets executed, and I think we'll be in that state for a relatively long period of time."





Daily Tech Digest - June 01, 2025


Quote for the day:

"You are never too old to set another goal or to dream a new dream." -- C.S. Lewis


A wake-up call for real cloud ROI

To make cloud spending work for you, the first step is to stop, assess, and plan. Do not assume the cloud will save money automatically. Establish a meticulous strategy that matches workloads to the right environments, considering both current and future needs. Take the time to analyze which applications genuinely benefit from the public cloud versus alternative options. This is essential for achieving real savings and optimal performance. ... Enterprises should rigorously review their existing usage, streamline environments, and identify optimization opportunities. Invest in cloud management platforms that can automate the discovery of inefficiencies, recommend continuous improvements, and forecast future spending patterns with greater accuracy. Optimization isn’t a one-time exercise—it must be an ongoing process, with automation and accountability as central themes. Enterprises are facing mounting pressure to justify their escalating cloud spend and recapture true business value from their investments. Without decisive action, waste will continue to erode any promised benefits. ... In the end, cloud’s potential for delivering economic and business value is real, but only for organizations willing to put in the planning, discipline, and governance that cloud demands. 


Why IT-OT convergence is a gamechanger for cybersecurity

The combination of IT and OT is a powerful one. It promises real-time visibility into industrial systems, predictive maintenance that limits downtime and data-driven decision making that gives everything from supply chain efficiency to energy usage a boost. When IT systems communicate directly with OT devices, businesses gain a unified view of operations – leading to faster problem solving, fewer breakdowns, smarter automation and better resource planning. This convergence also supports cost reduction through more accurate forecasting, optimised maintenance and the elimination of redundant technologies. And with seamless collaboration, IT and OT teams can now innovate together, breaking down silos that once slowed progress. Cybersecurity maturity is another major win. OT systems, often built without security in mind, can benefit from established IT protections like centralised monitoring, zero-trust architectures and strong access controls. Concurrently, this integration lays the foundation for Industry 4.0 – where smart factories, autonomous systems and AI-driven insights thrive on seamless IT-OT collaboration. ... The convergence of IT and OT isn’t just a tech upgrade – it’s a transformation of how we operate, secure and grow in our interconnected world. But this new frontier demands a new playbook that combines industrial knowhow with cybersecurity discipline.


How To Measure AI Efficiency and Productivity Gains

Measuring AI efficiency is a little like a "chicken or the egg" discussion, says Tim Gaus, smart manufacturing business leader at Deloitte Consulting. "A prerequisite for AI adoption is access to quality data, but data is also needed to show the adoption’s success," he advises in an online interview. ... The challenge in measuring AI efficiency depends on the type of AI and how it's ultimately used, Gaus says. Manufacturers, for example, have long used AI for predictive maintenance and quality control. "This can be easier to measure, since you can simply look at changes in breakdown or product defect frequencies," he notes. "However, for more complex AI use cases -- including using GenAI to train workers or serve as a form of knowledge retention -- it can be harder to nail down impact metrics and how they can be obtained." ... Measuring any emerging technology's impact on efficiency and productivity often takes time, but impacts are always among the top priorities for business leaders when evaluating any new technology, says Dan Spurling, senior vice president of product management at multi-cloud data platform provider Teradata. "Businesses should continue to use proven frameworks for measurement rather than create net-new frameworks," he advises in an online interview. 


The discipline we never trained for: Why spiritual quotient is the missing link in leadership

Spiritual Quotient (SQ) is the intelligence that governs how we lead from within. Unlike IQ or EQ, SQ is not about skill—it is about state. It reflects a leader’s ability to operate from deep alignment with their values, to stay centred amid volatility and to make decisions rooted in clarity rather than compulsion. It shows up in moments when the metrics don’t tell the full story, when stakeholders pull in conflicting directions. When the team is watching not just what you decide, but who you are while deciding it. It’s not about belief systems or spirituality in a religious sense; it’s about coherence between who you are, what you value, and how you lead. At its core, SQ is composed of several interwoven capacities: deep self-awareness, alignment with purpose, the ability to remain still and present amid volatility, moral discernment when the right path isn’t obvious, and the maturity to lead beyond ego. ... The workplace in 2025 is not just hybrid—it is holographic. Layers of culture, technology, generational values and business expectations now converge in real time. AI challenges what humans should do. Global disruptions challenge why businesses exist. Employees are no longer looking for charismatic heroes. They’re looking for leaders who are real, reflective and rooted.


Microsoft Confirms Password Deletion—Now Just 8 Weeks Away

The company’s solution is to first move autofill and then any form of password management to Edge. “Your saved passwords (but not your generated password history) and addresses are securely synced to your Microsoft account, and you can continue to access them and enjoy seamless autofill functionality with Microsoft Edge.” Microsoft has added an Authenticator splash screen with a “Turn on Edge” button as its ongoing campaign to switch users to its own browser continues. It’s not just with passwords, of course, there are the endless warnings and nags within Windows and even pointers within security advisories to switch to Edge for safety and security. ... Microsoft wants users to delete passwords once that’s done, so no legacy vulnerability remains, albeit Google has not gone quite that far as yet. You do need to remove SMS 2FA though, and use an app or key-based code at a minimum. ... Notwithstanding these Authenticator changes, Microsoft users should use this as a prompt to delete passwords and replace them with passkeys, per the Windows-makers’ advice. This is especially true given increasing reports of two-factor authentication (2FA) bypasses that are increasingly rendering basics forms of 2FA redundant.


Sustainable cyber risk management emerges as industrial imperative as manufacturers face mounting threats

The ability of a business to adjust, absorb, and continue operating under pressure is becoming a performance metric in and of itself. It is measured not only in uptime or safety statistics. It’s not a technical checkbox; it’s a strategic commitment that is becoming the new baseline for industrial trust and continuity. At the heart of this change lies security by design. Organizations are working to integrate security into OT environments, working their way up from system architecture to vendor procurement and lifecycle management, rather than adding protections along the way and after deployment. ... The path is made more difficult by the acute lack of OT cyber skills, which could be overcome by employing specialists and establishing long-term pipelines through internal reskilling, knowledge transfer procedures, and partnerships with universities. Building sustainable industrial cyber risk management can be made more organized using the ISA/IEC 62443 industrial cybersecurity standards. Cyber defense is now a continuous, sustainable discipline rather than an after-the-fact response thanks to these widely recognized models, which also allow industries to link risk mitigation to real industrial processes, guarantee system interoperability, and measure progress against common benchmarks.


Design Sprint vs Design Thinking: When to Use Each Framework for Maximum Impact

The Design Sprint is a structured five-day process created by Jake Knapp during his time at Google Ventures. It condenses months of work into a single workweek, allowing teams to rapidly solve challenges, create prototypes, and test ideas with real users to get clear data and insights before committing to a full-scale development effort. Unlike the more flexible Design Thinking approach, a Design Sprint follows a precise schedule with specific activities allocated to each day ...
The Design Sprint operates on the principle of "together alone" – team members work collaboratively during discussions and decision-making, but do individual work during ideation phases to ensure diverse thinking and prevent groupthink. ... Design Thinking is well-suited for broadly exploring problem spaces, particularly when the challenge is complex, ill-defined, or requires extensive user research. It excels at uncovering unmet needs and generating innovative solutions for "wicked problems" that don't have obvious answers. The Design Sprint works best when there's a specific, well-defined challenge that needs rapid resolution. It's particularly effective when a team needs to validate a concept quickly, align stakeholders around a direction, or break through decision paralysis.


Broadcom’s VMware Financial Model Is ‘Ethically Flawed’: European Report

Some of the biggest issues VMware cloud partners and customers in Europe include the company increasing prices after Broadcom axed VMware’s former perpetual licenses and pay-as-you-go monthly pricing models. Another big issue was VMware cutting its product portfolio from thousands of offerings into just a few large bundles that are only available via subscription with a multi-year minimum commitment. “The current VMware licensing model appears to rely on practices that breach EU competition regulations which, in addition to imposing harm on its customers and the European cloud ecosystem, creates a material risk for the company,” said the ECCO in its report. “Their shareholders should investigate and challenge the legality of such model.” Additionally, the ECCO said Broadcom recently made changes to its partnership program that forced partners to choose between either being a cloud service provider or a reseller. “It is common in Europe for CSP to play both [service provider and reseller] roles, thus these new requirements are a further harmful restriction on European cloud service providers’ ability to compete and serve European customers,” the ECCO report said.


Protecting Supply Chains from AI-Driven Risks in Manufacturing

Cybercriminals are notorious for exploiting AI and have set their sights on supply chains. Supply chain attacks are surging, with current analyses indicating a 70% likelihood of cybersecurity incidents stemming from supplier vulnerabilities. Additionally, Gartner projects that by the end of 2025, nearly half of all global organizations will have faced software supply chain attacks. Attackers manipulate data inputs to mislead algorithms, disrupt operations or steal proprietary information. Hackers targeting AI-enabled inventory systems can compromise demand forecasting, causing significant production disruptions and financial losses. ... Continuous validation of AI-generated data and forecasts ensures that AI systems remain reliable and accurate. The “black-box” nature of most AI products, where internal processes remain hidden, demands innovative auditing approaches to guarantee reliable outputs. Organizations should implement continuous data validation, scenario-based testing and expert human review to mitigate the risks of bias and inaccuracies. While black-box methods like functional testing offer some evaluation, they are inherently limited compared to audits of transparent systems, highlighting the importance of open AI development.


What's the State of AI Costs in 2025?

This year's report revealed that 44% of respondents plan to invest in improving AI explainability. Their goals are to increase accountability and transparency in AI systems as well as to clarify how decisions are made so that AI models are more understandable to users. Juxtaposed with uncertainty around ROI, this statistic signals further disparity between organizations' usage of AI and accurate understanding of it. ... Of the companies that use third-party platforms, over 90% reported high awareness of AI-driven revenue. That awareness empowers them to confidently compare revenue and cost, leading to very reliable ROI calculations. Conversely, companies that don't have a formal cost-tracking system have much less confidence that they can correctly determine the ROI of their AI initiatives. ... Even the best-planned AI projects can become unexpectedly expensive if organizations lack effective cost governance. This report highlights the need for companies to not merely track AI spend but optimize it via real-time visibility, cost attribution, and useful insights. Cloud-based AI tools account for almost two-thirds of AI budgets, so cloud cost optimization is essential if companies want to stop overspending. Cost is more than a metric; it's the most strategic measure of whether AI growth is sustainable. As companies implement better cost management practices and tools, they will be able to scale AI in a fiscally responsible way, confidently measure ROI, and prevent financial waste.

Daily Tech Digest - May 23, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you them feel." -- Mary Kay Ash


MCP, ACP, and Agent2Agent set standards for scalable AI results

“Without standardized protocols, companies will not be able to reap the maximum value from digital labor, or will be forced to build interoperability capabilities themselves, increasing technical debt,” he says. Protocols are also essential for AI security and scalability, because they will enable AI agents to validate each other, exchange data, and coordinate complex workflows, Lerhaupt adds. “The industry can build more robust and trustworthy multi-agent systems that integrate with existing infrastructure, encouraging innovation and collaboration instead of isolated, fragmented point solutions,” he says. ... ACP is “a universal protocol that transforms the fragmented landscape of today’s AI agents into inter-connected teammates,” writes Sandi Besen, ecosystem lead and AI research engineer at IBM Research, in Towards Data Science. “This unlocks new levels of interoperability, reuse, and scale.” ACP uses standard HTTP patterns for communication, making it easy to integrate into production, compared to JSON-RPC, which relies on more complex methods, Besen says. ... Agent2Agent, supported by more than 50 Google technology partners, will allow IT leaders to string a series of AI agents together, making it easier to get the specialized functionality their organizations need, Ensono’s Piazza says. Both ACP and Agent2Agent, with their focus on connecting AI agents, are complementary protocols to the model-centric MCP, their creators say.


It’s Time to Get Comfortable with Uncertainty in AI Model Training

“We noticed that some uncertainty models tend to be overconfident, even when the actual error in prediction is high,” said Bilbrey Pope. “This is common for most deep neural networks. But a model trained with SNAP gives a metric that mitigates this overconfidence. Ideally, you’d want to look at both prediction uncertainty and training data uncertainty to assess your overall model performance.” ... “AI should be able to accurately detect its knowledge boundaries,” said Choudhury. “We want our AI models to come with a confidence guarantee. We want to be able to make statements such as ‘This prediction provides 85% confidence that catalyst A is better than catalyst B, based on your requirements.’” In their published study, the researchers chose to benchmark their uncertainty method with one of the most advanced foundation models for atomistic materials chemistry, called MACE. The researchers calculated how well the model is trained to calculate the energy of specific families of materials. These calculations are important to understanding how well the AI model can approximate the more time- and energy-intensive methods that run on supercomputers. The results show what kinds of simulations can be calculated with confidence that the answers are accurate. 


Don’t let AI integration become your weakest link

Ironically, integration meant to boost efficiency can stifle innovation. Once a complex web of AI- interconnected systems exists, adding tools or modifying processes becomes a major architectural undertaking, not plug-and-play. It requires understanding interactions with central AI logic, potentially needing complex model re-training, integration point redevelopment, and extensive regression testing to avoid destabilisation. ... When AI integrates and automates decisions and workflows across systems based on learned patterns, it inherently optimises for the existing or dominant processes observed in the training data. While efficiency is the goal, there’s a tangible risk of inadvertently enforcing uniformity and suppressing valuable diversity in approaches. Different teams might have unique, effective methods deviating from the norm. An AI trained on the majority might flag these as errors, subtly discouraging creative problem-solving or context-specific adaptations. ... Feeding data from multiple sensitive systems (CRM, HR, finance, and communications) into central AI dramatically increases the scope and sensitivity of data processed and potentially exposed. Each integration point is another vector for data leakage or unauthorised access. Sensitive customer, employee, and financial data may flow across more boundaries and be aggregated in new ways, increasing the surface area for breaches or misuse.


Beyond API Uptime: Modern Metrics That Matter

A minuscule delay (measurable in API response times) in processing API requests can be as painful to a customer as a major outage. User behavior and expectations have evolved, and performance standards need to keep up. Traditional API monitoring tools are stuck in a binary paradigm of up versus down, despite the fact that modern, cloud native applications live in complex, distributed ecosystems. ... Measuring performance from multiple locations provides a more balanced and realistic view of user experience and can help uncover metrics you need to monitor, like location-specific latency: What’s fast in San Francisco might be slow in New York and terrible in London. ... The real value of IPM comes from how its core strengths, such as proactive synthetic testing, global monitoring agents, rich analytics with percentile-based metrics and experience-level objectives, interact and complement each other, Vasiliou told me. “IPM can proactively monitor single API URIs [uniform resource identifiers] or full API multistep transactions, even when users are not on your site or app. Many other monitors can also do this. It is only when you combine this with measuring performance from multiple locations, granular analytics and experience-level objectives that the value of the whole is greater than the sum of its parts,” Vasiliou said.


Agentic AI shaping strategies and plans across sectors as AI agents swarm

“Without a robust identity model, agents can’t truly act autonomously or securely,” says the post. “The MCP-I (I for Identity) specification addresses this gap – introducing a practical, interoperable approach to agentic identity.” Vouched also offers its turnkey SaaS Vouched MCP Identity Server, which provides easy-to-integrate APIs and SDKs for enterprises and developers to embed strong identity verification into agent systems. While the Agent Reputation Directory and MCP-I specification are open and free to the public, the MCP Identity Server is available as a commercial offering. “Thinking through strong identity in advance is critical to building an agentic future that works,” says Peter Horadan, CEO of Vouched. “In some ways we’ve seen this movie before. For example, when our industry designed email, they never anticipated that there would be bad email senders. As a result, we’re still dealing with spam problems 50 years later.” ... An early slide outlining definitions tells us that AI agents are ushering in a new definition of the word “tools,” which he calls “one of the big changes that’s happening this year around agentic AI, giving the ability to LLMs to actually do and act with permission on behalf of the user, interact with permission on behalf of the user, interact with third-party APIs,” and so on. Tools aside, what are the challenges for agentic AI? “The biggest one is security,” he says. 


Optimistic Security: A New Era for Cyber Defense

Optimistic cybersecurity involves effective NHI management that reduces risk, improves regulatory compliance, enhances operational efficiency and provides better control over access management. This management strategy goes beyond point solutions such as secret scanners, offering comprehensive protection throughout the entire lifecycle of these identities. ... Furthermore, a proactive attitude towards cybersecurity can lead to potential cost savings by automating processes such as secrets rotation and NHIs decommissioning. By utilizing optimistic cybersecurity strategies, businesses can transform their defensive mechanisms, preparing for a new era in cyber defense. By integrating Non-Human Identities and Secrets Management into their cloud security control strategies, organizations can fortify their digital infrastructure, significantly reducing security breaches and data leaks. ... Implementing an optimistic cybersecurity approach is no less than a transformation in perspective. It involves harnessing the power of technology and human ingenuity to build a resilient future. With optimism at its core, cybersecurity measures can become a beacon of hope rather than a looming threat. By welcoming this new era of cyber defense with open arms, organizations can build a secure digital environment where NHIs and their secrets operate seamlessly, playing a pivotal role in enhancing overall cybersecurity.


Identity Security Has an Automation Problem—And It's Bigger Than You Think

The data reveals a persistent reliance on human action for tasks that should be automated across the identity security lifecycle.41% of end users still share or update passwords manually, using insecure methods like spreadsheets, emails, or chat tools. They are rarely updated or monitored, increasing the likelihood of credential misuse or compromise. Nearly 89% of organizations rely on users to manually enable MFA in applications, despite MFA being one of the most effective security controls. Without enforcement, protection becomes optional, and attackers know how to exploit that inconsistency. 59% of IT teams handle user provisioning and deprovisioning manually, relying on ticketing systems or informal follow-ups to grant and remove access. These workflows are slow, inconsistent, and easy to overlook—leaving organizations exposed to unauthorized access and compliance failures. ... According to the Ponemon Institute, 52% of enterprises have experienced a security breach caused by manual identity work in disconnected applications. Most of them had four or more. The downstream impact was tangible: 43% reported customer loss, and 36% lost partners. These failures are predictable and preventable, but only if organizations stop relying on humans to carry out what should be automated. Identity is no longer a background system. It's one of the primary control planes in enterprise security. 


Critical infrastructure under attack: Flaws becoming weapon of choice

“Attackers have leaned more heavily on vulnerability exploitation to get in quickly and quietly,” said Dray Agha, senior manager of security operations at managed detection and response vendor Huntress. “Phishing and stolen credentials play a huge role, however, and we’re seeing more and more threat actors target identity first before they probe infrastructure.” James Lei, chief operating officer at application security testing firm Sparrow, added: “We’re seeing a shift in how attackers approach critical infrastructure in that they’re not just going after the usual suspects like phishing or credential stuffing, but increasingly targeting vulnerabilities in exposed systems that were never meant to be public-facing.” ... “Traditional methods for defense are not resilient enough for today’s evolving risk landscape,” said Andy Norton, European cyber risk officer at cybersecurity vendor Armis. “Legacy point products and siloed security solutions cannot adequately defend systems against modern threats, which increasingly incorporate AI. And yet, too few organizations are successfully adapting.” Norton added: “It’s vital that organizations stop reacting to cyber incidents once they’ve occurred and instead shift to a proactive cybersecurity posture that allows them to eliminate vulnerabilities before they can be exploited.”


Fundamentals of Data Access Management

An important component of an organization’s data management strategy is controlling access to the data to prevent data corruption, data loss, or unauthorized modification of the information. The fundamentals of data access management are especially important as the first line of defense for a company’s sensitive and proprietary data. Data access management protects the privacy of the individuals to which the data pertains, while also ensuring the organization complies with data protection laws. It does so by preventing unauthorized people from accessing the data, and by ensuring those who need access can reach it securely and in a timely manner. ... Appropriate data access controls improve the efficiency of business processes by limiting the number of actions an employee can take. This helps simplify user interfaces, reduce database errors, and automate validation, accuracy, and integrity checks. By restricting the number of entities that have access to sensitive data, or permission to alter or delete the data, organizations reduce the likelihood of errors being introduced while enhancing the effectiveness of their real-time data processing activities. ... Becoming a data-driven organization requires overcoming several obstacles, such as data silos, fragmented and decentralized data, lack of visibility into security and access-control measures currently in place, and a lack of organizational memory about how existing data systems were designed and implemented.


Chief Intelligence Officers? How Gen AI is rewiring the CxOs Brain

Generative AI is making the most impact in areas like Marketing, Software Engineering, Customer Service, and Sales. These functions benefit from AI’s ability to process vast amounts of data quickly. On the other hand, Legal and HR departments see less GenAI adoption, as these areas require high levels of accuracy, predictability, and human judgment. ... Business and tech leaders must prioritize business value when choosing AI use cases, focus on AI literacy and responsible AI, nurture cross-functional collaboration, and stress continuous learning to achieve successful outcomes. ... Leaders need to clearly outline and share a vision for responsible AI, establishing straightforward principles and policies that address fairness, bias reduction, ethics, risk management, privacy, sustainability, and compliance with regulations. They should also pinpoint the risks associated with Generative AI, such as privacy concerns, security issues, hallucinations, explainability, and legal compliance challenges, along with practical ways to mitigate these risks. When choosing and prioritizing use cases, it’s essential to consider responsible AI by filtering out those that carry unacceptable risks. Each Generative AI use case should have a designated champion responsible for ensuring that development and usage align with established policies. 

Daily Tech Digest - May 13, 2025


Quote for the day:

"If you genuinely want something, don't wait for it -- teach yourself to be impatient." -- Gurbaksh Chahal



How to Move from Manual to Automated to Autonomous Testing

As great as test automation is, it would be a mistake to put little emphasis on or completely remove manual testing. Automated testing's strength is its ability to catch issues while scanning code. Conversely, a significant weakness is that it is not as reliable as manual testing in noticing unexpected issues that manifest themselves during usability tests. While developing and implementing automated tests, organizations should integrate manual testing into their overall quality assurance program. Even though manual testing may not initially benefit the bottom line, it definitely adds a level of protection against issues that could wreak havoc down the road, with potential damage in the areas of cost, quality, and reputation. ... The end goal is to have an autonomous testing program that has a clear focus on helping the organization achieve its desired business outcomes. There is a consistent theme in successfully developing and implementing automated testing programs: planning and patience. With the right strategy and a deliberate rollout, test automation opens the door to smoother operations and the ability to remain competitive and profitable in the ever-changing world of software development. To guarantee a successful implementation of automation practices, it is necessary to invest in training and creating best practices. 


The Hidden Dangers of Artifactory Tokens: What Every Organization Should Know

If tokens with read access are dangerous, those with write permissions are cybersecurity nightmares made flesh. They enable the most feared attack vector in modern software: supply chain poisoning. The playbook is elegant in its simplicity and devastating in its impact. Attackers identify frequently downloaded packages within your Artifactory instance, insert malicious code into these dependencies, then repackage and upload them as new versions. From there, they simply wait as unsuspecting users throughout your organization automatically upgrade to the compromised versions during routine updates. The cascading damage expands exponentially depending on which components get poisoned. Compromising build environments leads to persistent backdoors in all future software releases. Targeting developer tools gives attackers access to engineer workstations and credentials. ... The first line of defense must be preventing leaks before they happen. Implementing secret detection tools that can catch credentials before they're published to repositories. Establishing monitoring systems can identify exposed tokens on public forums, even from personal developer accounts. And following JFrog's evolving security guidance — such as moving away from deprecated API keys — ensures you're not using authentication methods with known weaknesses.


Is Model Context Protocol the New API?

With APIs, we learned that API design matters. Great APIs, like those from Stripe or Twilio, were designed for the developer. With MCP, design matters too. But who are we authoring for? You’re not authoring for a human; you’re authoring for a model that will pay close attention to every word you write. And it’s not just design, it’s the operationalization of MCP that is also important and another point of parallelism with the world of APIs. As we used to say at Apigee, there are good APIs and bad APIs. If your backend descriptions are domain-centric — as opposed to business or end-user centric — integration, adoption and developers’ overall ability to use your APIs will be impaired. A similar issue can arise with MCP. An AI might not recognize or use an MCP server’s tools if its description isn’t clear, action-oriented or AI friendly. A final thing to note, which in many ways is very new to the AI world, is the fact that every action is “on the meter.” In the LLM world, everything turns into tokens, and tokens are dollars, as NVIDIA CEO Jensen Huang reminded us in his Nvidia GTC keynote this year. So, AI-native apps — and by extension the MCP servers that those apps connect to — need to pay attention to token optimization techniques necessary for cost optimization. There’s also a question of resource optimization outside of the token/GPU space.


CISOs must speak business to earn executive trust

The key to building broader influence is translating security into business impact language. I’ve found success by guiding conversations around what executives and customers truly care about: business outcomes, not technical implementations. When I speak with the CEO or board members, I discuss how our security program protects revenue, ensures business continuity and enables growth. With many past breaches, organizations detected the threat but failed to take timely action, resulting in significant business impact. By emphasizing how our approach prevents these outcomes, I’m speaking their language. ... Successfully shifting a security organization from being perceived as the “department of no” to a strategic enabler requires a fundamental change in mindset, engagement model and communication style. It begins with aligning security goals to the broader business strategy, understanding what drives growth, customer trust and operational efficiency. Security leaders must engage cross-functionally early and often, embedding their teams within product development, IT and go-to-market functions to co-create secure solutions rather than imposing controls after the fact. This proactive, partnership-driven approach reduces friction and builds credibility.


Enterprise IAM could provide needed critical mass for reusable digital identity

Acquisitions, different business goals, and even rogue teams can prevent a single, unified platform from serving the whole organization. And then there are partnerships, employees contracted to customers, customer onboarding and a host of other situations that force identity information to move from an internal system to another one. “The result is we end up building difficult, complicated integrations that are hard to maintain,” Esplin says. Further, people want services that providers can only deliver by receiving trusted information, but people are hesitant to share their information. And then there are the attendant regulatory concerns, particularly where biometrics are involved. Intermediaries clearly have a big role to play. Some of those intermediaries may be AI agents, which can ease data sharing, but does not address the central concern about how to limit information sharing while delivering trust. Esplin argues for verifiable credentials as the answer, with the signature of the issuer providing the trust and the consent-based sharing model of VCs satisfying user’s desire to limit data sharing. Because VCs are standardized, the need for complicated integrations is removed. Biometric templates are stored by the user, enabling strong binding without the data privacy concerns that come with legacy architectures.


Beyond speed: Measuring engineering success by impact, not velocity

From a planning and accountability perspective, velocity gives teams a clean way to measure output vs. effort. It can help them plan for sprints and prioritize long-term productivity targets. It can even help with accountability, allowing teams to rightsize their work and communicate it cross-departmentally. The issues begin when it is used as the sole metric of success for teams, as it fails to reveal the nuances necessary for high-level strategic thinking and positioning by leadership. It sets up a standard that over-emphasizes pure workload rather than productive effort towards organizational objectives. ... When leadership works with their engineering teams to find solutions to business challenges, they create a highly visible value stream between each individual developer and the customer at the end of the line. For engineering-forward organizations, developer experience and satisfaction is a top priority, so factors like transparency and recognition of work go a long way towards developer well-being. Perhaps most vital is for business and tech leaders to create roadmaps of success for engineers that clearly align with the goals of the overall business. LinearB cofounder and COO Lines acknowledges that these business goals can differ wildly between businesses: “For some of the leaders that I work with, real business impact might be as simple as, we got to get to production faster…”


Sakana introduces new AI architecture, ‘Continuous Thought Machines’ to make models reason with less guidance — like human brains

Sakana AI’s Continuous Thought Machine is not designed to chase leaderboard-topping benchmark scores, but its early results indicate that its biologically inspired design does not come at the cost of practical capability. On the widely used ImageNet-1K benchmark, the CTM achieved 72.47% top-1 and 89.89% top-5 accuracy. While this falls short of state-of-the-art transformer models like ViT or ConvNeXt, it remains competitive—especially considering that the CTM architecture is fundamentally different and was not optimized solely for performance. What stands out more are CTM’s behaviors in sequential and adaptive tasks. In maze-solving scenarios, the model produces step-by-step directional outputs from raw images—without using positional embeddings, which are typically essential in transformer models. Visual attention traces reveal that CTMs often attend to image regions in a human-like sequence, such as identifying facial features from eyes to nose to mouth. The model also exhibits strong calibration: its confidence estimates closely align with actual prediction accuracy. Unlike most models that require temperature scaling or post-hoc adjustments, CTMs improve calibration naturally by averaging predictions over time as their internal reasoning unfolds. 


How to build (real) cloud-native applications

Cloud-native applications are designed and built specifically to operate in cloud environments. It’s not about just “lifting and shifting” an existing application that runs on-premises and letting it run in the cloud. Unlike traditional monolithic applications that are often tightly coupled, cloud-native applications are modular in a way that monolithic applications are not. A cloud-native application is not an application stack, but a decoupled application architecture. Perhaps the most atomic level of a cloud-native application is the container. A container could be a Docker container, though really any type of container that matches the Open Container Interface (OCI) specifications works just as well. Often you’ll see the term microservices used to define cloud-native applications. Microservices are small, independent services that communicate over APIs—and they are typically deployed in containers. A microservices architecture allows for independent scaling in an elastic way that supports the way the cloud is supposed to work. While a container can run on all different types of host environments, the most common way that containers and microservices are deployed is inside of an orchestration platform. The most commonly deployed container orchestration platform today is the open source Kubernetes platform, which is supported on every major public cloud.


Responsible AI as a Business Necessity: Three Forces Driving Market Adoption

AI systems introduce operational, reputational, and regulatory risks that must be actively managed and mitigated. Organizations implementing automated risk management tools to monitor and mitigate these risks operate more efficiently and with greater resilience. The April 2024 RAND report, “The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed,” highlights that underinvestment in infrastructure and immature risk management are key contributors to AI project failures. ... Market adoption is the primary driver for AI companies, while organizations implementing AI solutions seek internal adoption to optimize operations. In both scenarios, trust is the critical factor. Companies that embed responsible AI principles into their business strategies differentiate themselves as trustworthy providers, gaining advantages in procurement processes where ethical considerations are increasingly influencing purchasing decisions. ... Stakeholders extend beyond regulatory bodies to include customers, employees, investors, and affected communities. Engaging these diverse perspectives throughout the AI lifecycle, from design and development to deployment and decommissioning, yields valuable insights that improve product-market fit while mitigating potential risks.


Leading high-performance engineering teams: Lessons from mission-critical systems

Conducting blameless post-mortems was imperative to focus on improving the systems without getting into blame avoidance or blame games. Building trust required consistency from me: admitting mistakes, getting feedback, going through exercises suggesting improvements, and responding in a constructive way. At the heart of this was creating the conditions for the team to feel safe taking interpersonal risks, so it was my role to steer conversation towards systemic factors that contributed to failures (“What process or procedures change could prevent this?”) and I was regularly looking for the opportunity to discuss, or later analyze, patterns across incidents so I could work towards higher order improvements. ... For teams just starting out, my advice is to take a staged approach. Pick one or two practices they can begin, evolve their plan for how they will evolve the practice and some metrics for the team to realize early value. Questions to ask yourselfHow comfortable are team members sharing reliability concerns? Does your team look for ways to prevent incidents through your reviews or look for ways to blame others? How often does your team practice responding to failure? ... In my experience, leading top engineering teams requires a set of skills. Building a strong technical culture, focusing on people, guiding teams through difficult times, and establishing durable practices.

Daily Tech Digest - April 28, 2025


Quote for the day:

"If a window of opportunity appears, don't pull down the shade." -- Tom Peters



Researchers Revolutionize Fraud Detection with Machine Learning

Machine learning plays a critical role in fraud detection by identifying patterns and anomalies in real-time. It analyzes large datasets to spot normal behavior and flag significant deviations, such as unusual transactions or account access. However, fraud detection is challenging because fraud cases are much rarer than normal ones, and the data is often messy or unlabeled. ... “The use of machine learning in fraud detection brings many advantages,” said Taghi Khoshgoftaar, Ph.D., senior author and Motorola Professor in the FAU Department of Electrical Engineering and Computer Science. “Machine learning algorithms can label data much faster than human annotation, significantly improving efficiency. Our method represents a major advancement in fraud detection, especially in highly imbalanced datasets. It reduces the workload by minimizing cases that require further inspection, which is crucial in sectors like Medicare and credit card fraud, where fast data processing is vital to prevent financial losses and enhance operational efficiency.” ... The method combines two strategies: an ensemble of three unsupervised learning techniques using the SciKit-learn library and a percentile-gradient approach. The goal is to minimize false positives by focusing on the most confidently identified fraud cases. 


Cybersecurity is Not Working: Time to Try Something Else

Many CISOs, changing jobs every 2 years or so, have not learnt to get things done in large firms; they have not developed the political acumen and the management experience they would need. Many have simply remained technologists and firefighters, trapped in an increasingly obsolete mindset, pushing bottom-up a tools-based, risk-based, tech-driven narrative, disconnected from what the board wants to hear which has now shifted towards resilience and execution. This is why we may have to come to the point where we have to accept that the construction around the role of the CISO, as it was initiated in the late 90s, has served its purpose and needs to evolve. The first step in this evolution, in my opinion, is for the board to own cybersecurity as a business problem, not as a technology problem. It needs to be owned at board level in business terms, in line with the way other topics are owned at board level. This is about thinking the protection of the business in business terms, not in technology terms. Cybersecurity is not a purely technological matter; it has never been and cannot be. ... There may be a need to amalgamate it with other matters such as corporate resilience, business continuity or data privacy to build up a suitable board-level portfolio, but for me this is the way forward in reversing the long-term dynamics, away from the failed historical bottom-up constructions, towards a progressive top-down approach.


How the financial services C-suite are going beyond ‘keeping the lights on’ in 2025

C-suites need to tackle three core areas: Ensure they are getting high value support for their mission-critical systems; Know how to optimise their investments; Transform their organisation without disrupting day-to-day operations. It is certainly possible if they have the time, capabilities and skill sets in-house. Yet even the most well-resourced enterprises can struggle to acquire the knowledge base and market expertise required to negotiate with multiple vendors, unlock investments or run complex change programmes single-handedly. The reality is that managing the balance needed to save costs while accelerating innovation is challenging. ... The survey demonstrates a growing necessity for CIOs and CFOs to speak each other’s language, marking a shift in organisational strategy, moving IT beyond the traditional ‘keeping the lights on’ approach, and driving a pivotal transformation in the relationship between CIOs and CFOs. As they find better ways to collaborate and innovate, businesses in the financial services space will reap the rewards of emerging technology, while falling in line with budgetary needs. Emerging technologies are being introduced thick and fast, and as a result, hard metrics aren’t always available. Instead of feeling frustrated with a lack of data, CFOs should lean in as active participants, understanding how emerging technologies like AI and cybersecurity can drive strategic value, optimise operations and create new revenue streams. 


Threat actors are scanning your environment, even if you’re not

Martin Jartelius, CISO and Product Owner at Outpost24, says that the most common blind spots that the solution uncovers are exposed management interfaces of devices, exposed databases in the cloud, misconfigured S3 storage, or just a range of older infrastructure no longer in use but still connected to the organization’s domain. All of these can provide an entry point into internal networks, and some can be used to impersonate organizations in targeted phishing attacks. But these blind spots are not indicative of poor leadership or IT security performance: “Most who see a comprehensive report of their attack surface for the first time are surprised that it is often substantially larger than they understood. Some react with discomfort and perceive their prior lack of insight as a failure, but that is not the case. ... Attack surface management is still a maturing technology field, but having a solution bringing the information together in a platform gives a more refined and in-depth insight over time. External attack surface management starts with a continuous detection of exposed assets – in Sweepatic’s case, that also includes advanced port scanning to detect all (and not just the most common) ports at risk of exploitation – then moves on to automated security analysis and then risk-based reporting.


How to Run a Generative AI Developer Tooling Experiment

The last metric, rework, is a significant consideration with generative AI, as 67% of developers find that they are spending more time debugging AI-generated code. Devexperts experienced a 200% increase in rework and a 30% increase in maintenance. On the other hand, while the majority of organizations are seeing an increase in complexity and lines of code with code generators, these five engineers saw a surprising 15% decrease in lines of code created. “We can conclude that, for the live experiment, GitHub Copilot didn’t deliver the results one could expect after reading their articles,” summarized German Tebiev, the software engineering process architect who ran the experiment. He did think the results were persuasive enough to believe speed will be enabled if the right processes are put in place: “The fact that the PR throughput shows significant growth tells us that the desired speed increase can be achieved if the tasks’ flow is handled effectively.” ... Just 17% of developers responded that they think Copilot helped them save at least an hour a week, versus a whopping 40% saw no time savings by using the code generator, which is well below the industry average. Developers were also able to share their own anecdotal experience, which is very situation-dependent. Copilot seemed to be a better choice for completing more basic lines of code for new features, less so when there’s complexity of working with an existing codebase.


Dark Data: Surprising Places for Business Insights

Previously, the biggest problem when dealing with dark data was its messy nature. Even though AI has been able to analyze structured data for years, unstructured or semi-structured data proved to be a hard nut to crack. Unfortunately, unstructured data constitutes the majority of dark data. However, recent advances in natural language programming (NLP), natural language understanding (NLU), speech recognition, and ML have enabled AI to deal with unstructured dark data more effectively. Today, AI can easily analyze raw inputs like customer reviews, social media comments to identify trends and sentiment. Advanced sentiment analysis algorithms can come to accurate conclusions when concerning tone, context, emotional nuances, sarcasm, and urgency, providing businesses with deeper audience insights. For instance, Amazon uses this approach to flag fake reviews. In finance and banking, AI-powered data analysis tools are used to process transaction logs and unstructured customer communications to identify fraud risks and enhance service and customer satisfaction. Another industry where dark data mining might have potentially huge social benefits is healthcare. Currently, this industry generates around 30% of all the data in the world. 


Is your AI product actually working? How to develop the right metric system

Not tracking whether your product is working well is like landing a plane without any instructions from air traffic control. There is absolutely no way that you can make informed decisions for your customer without knowing what is going right or wrong. Additionally, if you do not actively define the metrics, your team will identify their own back-up metrics. The risk of having multiple flavors of an ‘accuracy’ or ‘quality’ metric is that everyone will develop their own version, leading to a scenario where you might not all be working toward the same outcome. ... the complexity of operating an ML product with multiple customers translates to defining metrics for the model, too. What do I use to measure whether a model is working well? Measuring the outcome of internal teams to prioritize launches based on our models would not be quick enough; measuring whether the customer adopted solutions recommended by our model could risk us drawing conclusions from a very broad adoption metric ... Most metrics are gathered at-scale by new instrumentation via data engineering. However, in some instances (like question 3 above) especially for ML based products, you have the option of manual or automated evaluations that assess the model outputs. 


Security needs to be planned and discussed early, right at the product ideation stage

On the open-source side, where a lot of supply chain risks emerge, we leverage a state-of-the-art development pipeline. Developers follow strict security guidelines and use frameworks embedded with security tools that detect risks associated with third-party libraries—both during development and at runtime. We also have robust monitoring systems in place to detect vulnerabilities and active exploits. ... Monitoring technological events is one part, but from a core product perspective, we also need to monitor risk-based activities, like transactions that could potentially lead to fraud. For that, we have strong AI/ML developments already deployed, with a dedicated AI and data science team constantly building new algorithms to detect fraudulent actions. From a product standpoint, the system is quite mature. On the technology side, monitoring has some automation powered by AI, and we’ve also integrated tools like GitHub Copilot. Our analysts, developers, and security engineers use these technologies to quickly identify potential issues, reducing manual effort significantly. ... Security needs to be planned and discussed early—right at the product ideation stage with product managers—so that it doesn’t become a blocker at the end. Early involvement makes it much easier and avoids last-minute disruptions.


14 tiny tricks for big cloud savings

Good algorithms can boost the size of your machine when demand peaks. But clouds don’t always make it easy to shrink all the resources on disk. If your disks grow, they can be hard to shrink. By monitoring these machines closely, you can ensure that your cloud instances consume only as much as they need and no more. ... Cloud providers can offer significant discounts for organizations that make a long-term commitment to using hardware. These are sometimes called reserved instances, or usage-based discounts. They can be ideal when you know just how much you’ll need for the next few years. The downside is that the commitment locks in both sides of the deal. You can’t just shut down machines in slack times or when a project is canceled. ... Programmers like to keep data around in case they might ever need it again. That’s a good habit until your app starts scaling and it’s repeated a bazillion times. If you don’t call the user, do you really need to store their telephone number? Tossing personal data aside not only saves storage fees but limits the danger of releasing personally identifiable information. Stop keeping extra log files or backups of data that you’ll never use again. ... Cutting back on some services will save money, but the best way to save cash is to go cold turkey. There’s nothing stopping you from dumping your data into a hard disk on your desk or down the hall in a local data center. 


Two-thirds of jobs will be impacted by AI

“Most jobs will change dramatically in the next three to four years, at least as much as the internet has changed jobs over the last 30,” Calhoon said. “Every job posted on Indeed today, from truck driver to physician to software engineer, will face some level of exposure to genAI-driven change.” ... What will emerge is a “symbiotic” relationship with an increasingly “proactive” technology that will require employees to constantly learn new skills and adapt. “AI can manage repetitive tasks, or even difficult tasks that are specific in nature, while humans can focus on innovative and strategic initiatives that drive revenue growth and improve overall business performance,” Hoffman said in an interview earlier this year. “AI is also much quicker than humans could possibly be, is available 24/7, and can be scaled to handle increasing workloads.” As AI takes over repetitive tasks, workers will shift toward roles that involve overseeing AI, solving unique problems, and applying creativity and strategy. Teams will increasingly collaborate with AI—like marketers personalizing content or developers using AI copilots. Rather than replacing humans, AI will enhance human strengths such as decision-making and emotional intelligence. Adapting to this change will require ongoing learning and a fresh approach to how work is done.