
Daily Tech Digest - July 09, 2025


Quote for the day:

"Whenever you see a successful person you only see the public glories, never the private sacrifices to reach them." -- Vaibhav Shah


Why CIOs see APIs as vital for agentic AI success

API access also goes beyond RAG. It allows agents and their underlying language models not just to retrieve information, but perform database mutations and trigger external actions. This shift allows agents to carry out complex, multi-step workflows that once required multiple human touchpoints. “AI-ready APIs paired with multi-agentic capabilities can unlock a broad range of use cases, which have enterprise workflows at their heart,” says Milind Naphade, SVP of technology and head of AI foundations at Capital One. In addition, APIs are an important bridge out of previously isolated AI systems. ... AI agents can make unprecedented optimizations on the fly using APIs. Gartner reports that PC manufacturer Lenovo uses a suite of autonomous agents to optimize marketing and boost conversions. With the oversight of a planning agent, these agents call APIs to access purchase history, product data, and customer profiles, and trigger downstream applications in the server configuration process. ... But the bigger wins will likely be increased operational efficiency and cost reduction. As Fox describes, this stems from a newfound best-of-breed business agility. “When agentic AI can dynamically reconfigure business processes, using just what’s needed from the best-value providers, you’ll see streamlined operations, reduced complexity, and better overall resource allocation,” she says.
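
As a minimal sketch of the pattern described above (tool names and endpoints are illustrative, not from the article), the same dispatch loop can serve both a read-only retrieval call and a write call that triggers a downstream action, assuming the agent's LLM has emitted a structured tool call:

```python
import json
import urllib.request

# Hypothetical tools-plane endpoints; names and URLs are illustrative only.
TOOLS = {
    "get_purchase_history": {"method": "GET",  "url": "https://api.example.com/customers/{id}/orders"},
    "configure_server":     {"method": "POST", "url": "https://api.example.com/configurations"},
}

def call_tool(name: str, params: dict) -> dict:
    """Dispatch a structured tool call emitted by the agent's LLM to the matching API."""
    spec = TOOLS[name]
    url = spec["url"].format(**params.get("path", {}))
    data = json.dumps(params.get("body", {})).encode() if spec["method"] == "POST" else None
    req = urllib.request.Request(url, data=data, method=spec["method"],
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# A retrieval call (RAG-style read) and an action call (mutation / downstream trigger)
# both flow through the same dispatcher:
# call_tool("get_purchase_history", {"path": {"id": "42"}})
# call_tool("configure_server", {"body": {"cpu": 8, "ram_gb": 32}})
```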


What we can learn about AI from the ‘dead internet theory’

The ‘dead internet theory,’ or the idea that much of the web is now dominated by bots and AI-generated content, is largely speculative. However, the concern behind it is worth taking seriously. The internet is changing, and the content that once made it a valuable source of knowledge is increasingly diluted by duplication, misinformation, and synthetic material. For the development of artificial intelligence, especially large language models (LLMs), this shift presents an existential problem. ... One emerging model for collecting and maintaining this kind of data is Knowledge as a Service (KaaS). Rather than scraping static sources, KaaS creates a living, structured ecosystem of contributions from real users (often experts in their fields) who continuously validate and update content. This approach takes inspiration from open-source communities but remains focused on knowledge creation and maintenance rather than code. KaaS supports AI development with a sustainable, high-quality stream of data that reflects current thinking. It’s designed to scale with human input, rather than in spite of it. ... KaaS helps AI stay relevant by providing fresh, domain-specific input from real users. Unlike static datasets, KaaS adapts as conditions change. It also brings greater transparency, illustrating directly how contributors’ inputs are utilised. This level of attribution represents a step toward more ethical and accountable AI.


The Value of Threat Intelligence in Ensuring DORA Compliance

One of the biggest challenges for security teams today is securing visibility into third-party providers within their ecosystem due to their volume, diversity, and the constant monitoring required. Utilising a Threat Intelligence Platform (TIP) with advanced capabilities can enable a security team to address this gap by monitoring and triaging threats within third-party systems through automation. It can flag potential signs of compromise, vulnerabilities, and risky behaviour, enabling organisations to take pre-emptive action before risks escalate and impact their systems. ... A major aspect of DORA is implementing a robust risk management framework. However, to keep pace with global expansion and new threats and technologies, this framework must be responsive, flexible, and up-to-date. Sourcing, aggregating, and collating threat intelligence data to facilitate this is a time-exhaustive task, and unfeasible for many resource-stretched and siloed security teams. ... From tabletop scenarios to full-scale simulations, these exercises evaluate how well systems, processes, and people can withstand and respond to real-world cyber threats. With an advanced TIP, security teams can leverage customisable workflows to recreate specific operational stress scenarios. These scenarios can be further enhanced by feeding real-world data on attacker behaviours, tactics, and trends, ensuring that simulations reflect actual threats rather than outdated risks.


Why your security team feels stuck

The problem starts with complexity. Security stacks have grown dense, and tools like EDR, SIEM, SOAR, CASB, and DSPM don’t always integrate well. Analysts often need to jump between multiple dashboards just to confirm whether an alert matters. Tuning systems properly takes time and resources, which many teams don’t have. So alerts pile up, and analysts waste energy chasing ghosts. Then there’s process friction. In many organizations, security actions, especially the ones that affect production systems, require multiple levels of approval. On paper, that’s to reduce risk. But these delays can mean missing the window to contain an incident. When attackers move in minutes, security teams shouldn’t be stuck waiting for a sign-off. ... “Security culture is having a bit of a renaissance. Each member of the security team may be in a different place as we undertake this transformation, which can cause internal friction. In the past, security was often tasked with setting and enforcing rules in order to secure the perimeter and ensure folks weren’t doing risky things on their machines. While that’s still part of the job, security and privacy teams today also need to support business growth while protecting customer data and company assets. If business growth is the top priority, then security professionals need new tools and processes to secure those assets.”


Your data privacy is slipping away. Here's why, and what you can do about it

In 2024, the Identity Theft Resource Center reported that companies sent out 1.3 billion notifications to the victims of data breaches. That's more than triple the notices sent out the year before. It's clear that despite growing efforts, personal data breaches are not only continuing, but accelerating. What can you do about this situation? Many people think of the cybersecurity issue as a technical problem. They're right: Technical controls are an important part of protecting personal information, but they are not enough. ... Even the best technology falls short when people make mistakes. Human error played a role in 68% of 2024 data breaches, according to a Verizon report. Organizations can mitigate this risk through employee training, data minimization—meaning collecting only the information necessary for a task, then deleting it when it's no longer needed—and strict access controls. Policies, audits and incident response plans can help organizations prepare for a possible data breach so they can stem the damage, see who is responsible and learn from the experience. It's also important to guard against insider threats and physical intrusion using physical safeguards such as locking down server rooms. ... Despite years of discussion, the U.S. still has no comprehensive federal privacy law. Several proposals have been introduced in Congress, but none have made it across the finish line. 


How To Build Smarter Factories With Edge Computing

According to edge computing experts, these are essentially rugged versions of computers, of any size, purpose-built for their harsh environments. Forget standard form factors; industrial edge devices come in varied configurations specific to the application. This means a device shaped to fit precisely where it’s needed, whether tucked inside a machine or mounted on a factory wall. ... What makes these tough machines intelligent? It’s the software revolution happening on factory floors right now. Historically, industrial computing relied on software specially built to run on bare metal; custom code directly installed on specific machines. While this approach offered reliability and consistent, deterministic performance, it came with significant limitations: slow development cycles, difficult updates and vendor lock-in. ... Communication between smart devices presents unique challenges in industrial environments. Traditional networking approaches often fall short when dealing with thousands of sensors, robots and automated systems. Standard Wi-Fi faces significant constraints in factories where heavy machinery creates electromagnetic interference, and critical operations can’t tolerate wireless dropouts.


Fighting in a cloudy arena

“There are a few primary problems. Number one is that the hyperscalers leverage free credits to get digital startups to build their entire stack on their cloud services,” Cochrane says, adding that as the startups grow, the technical requirements from hyperscalers leave them tied to that provider. “The second thing is also in the relationship they have with enterprises. They say, ‘Hey, we project you will have a $250 million cloud bill, we are going to give you a discount.’ Then, because the enterprise has a contractual vehicle, there’s a mad rush to use as much of the hyperscalers’ compute as possible because you either lose it or use it. At the end of the day, it’s like the roach motel. You can check in, but you can’t check out,” he sums up. ... “We are exploring our options to continue to fight against Microsoft’s anti-competitive licensing in order to promote choice, innovation, and the growth of the digital economy in Europe.” Mark Boost, CEO of UK cloud company Civo, said: “However they position it, we cannot shy away from what this deal appears to be: a global powerful company paying for the silence of a trade body, and avoiding having to make fundamental changes to their software licensing practices on a global basis.” In the months that followed this decision, things got interesting.


How passkeys work: The complete guide to your inevitable passwordless future

Passkeys are often described as a passwordless technology. In order for passwords to work as a part of the authentication process, the website, app, or other service -- collectively referred to as the "relying party" -- must keep a record of that password in its end-user identity management system. This way, when you submit your password at login time, the relying party can check to see if the password you provided matches the one it has on record for you. The process is the same, whether or not the password on record is encrypted. In other words, with passwords, before you can establish a login, you must first share your secret with the relying party. From that point forward, every time you go to login, you must send your secret to the relying party again. In the world of cybersecurity, passwords are considered shared secrets, and no matter who you share your secret with, shared secrets are considered risky. ... Many of the largest and most damaging data breaches in history might not have happened had a malicious actor not discovered a shared password. In contrast, passkeys also involve a secret, but that secret is never shared with a relying party. Passkeys are a form of Zero Knowledge Authentication (ZKA). The relying party has zero knowledge of your secret, and in order to sign in to a relying party, all you have to do is prove to the relying party that you have the secret in your possession.
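
A minimal sketch of the proof-of-possession idea described above, using an Ed25519 key pair from the `cryptography` package. This is a simplification for illustration, not the actual WebAuthn protocol: the relying party stores only the public key and verifies a signed challenge, so the secret never leaves the device.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# On the user's device: the private key (the secret) is generated and never shared.
device_private_key = Ed25519PrivateKey.generate()

# The relying party stores only the public key at registration time.
relying_party_public_key = device_private_key.public_key()

# Login: the relying party issues a random challenge...
challenge = os.urandom(32)

# ...the device proves possession of the secret by signing it...
signature = device_private_key.sign(challenge)

# ...and the relying party verifies the signature with the stored public key.
try:
    relying_party_public_key.verify(signature, challenge)
    print("login accepted: possession of the private key was proven")
except InvalidSignature:
    print("login rejected")
```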


Crafting a compelling and realistic product roadmap

The most challenging aspect of roadmap creation is often prioritization. Given finite resources, not everything can be built at once. Effective prioritization requires a clear framework. Common methods include scoring features based on business value versus effort, using frameworks like RICE, or focusing on initiatives that directly address key strategic objectives. Be prepared to say “no” to good ideas that don’t align with current priorities. Transparency in this process is vital. Communicate why certain items are prioritized over others to stakeholders, fostering understanding and buy-in, even when their preferred feature isn’t immediately on the roadmap. ... A product roadmap is a living document, not a static contract. The B2B software landscape is constantly evolving, with new technologies emerging, customer needs shifting, and competitive pressures mounting. A realistic roadmap acknowledges this dynamism. While it provides a clear direction, it should also be adaptable. Plan for regular reviews and updates – quarterly or even monthly – to adjust based on new insights, validated learnings, and changes in the market or business environment. Embrace iterative development and be prepared to pivot or adjust priorities as new information comes to light. 
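
As a concrete illustration of the RICE framework mentioned above (reach × impact × confidence ÷ effort), here is a small scoring sketch; the feature names and numbers are made up for the example:

```python
# RICE score = (Reach * Impact * Confidence) / Effort
features = [
    {"name": "SSO integration",  "reach": 4000, "impact": 2.0, "confidence": 0.8, "effort": 5},
    {"name": "Dark mode",        "reach": 9000, "impact": 0.5, "confidence": 1.0, "effort": 2},
    {"name": "Audit log export", "reach": 1200, "impact": 3.0, "confidence": 0.5, "effort": 8},
]

for f in features:
    f["rice"] = (f["reach"] * f["impact"] * f["confidence"]) / f["effort"]

# Highest score first: a transparent, explainable ordering to share with stakeholders.
for f in sorted(features, key=lambda f: f["rice"], reverse=True):
    print(f'{f["name"]:<18} RICE = {f["rice"]:.0f}')
```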


Are software professionals ready for the AI tsunami?

Modern AI assistants can translate plain-English prompts into runnable project skeletons or even multi-file apps aligned with existing style guides (e.g., Replit). This capability accelerates experimentation and learning, especially when teams are exploring unfamiliar technology stacks. A notable example is MagicSchool.com, a real-world educational platform created using AI-assisted coding workflows, showcasing how AI can powerfully convert conceptual prompts into usable products. These tools enable rapid MVP development that can be tested directly with customers. Once validated, the MVP can then be scaled into a full-fledged product. Rapid code generation can lead to fragile or opaque implementations if teams skip proper reviews, testing, and documentation. Without guardrails, it risks technical debt and poor maintainability. To stay reliable, agile teams must pair AI-generated code with sprint reviews, CI pipelines, automated testing, and strategies to handle evolving features and business needs. Recognising the importance of this shift, tech giants like Amazon (CodeWhisperer) and Google (AlphaCode) are making significant investments in AI development tools, signaling just how central this approach is becoming to the future of software engineering.

Daily Tech Digest - May 01, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden



Bridging the IT and security team divide for effective incident response

One reason IT and security teams end up siloed is the healthy competitiveness that often exists between them. IT wants to innovate, while security wants to lock things down. These teams are made up of brilliant minds. However, faced with the pressure of a crisis, they might hesitate to admit they feel out of control, simmering issues may come to a head, or they may become so fixated on solving the issue that they fail to update others. To build an effective incident response strategy, identifying a shared vision is essential. Here, leadership should host joint workshops where teams learn more about each other and share ideas about embedding security into system architecture. These sessions should also simulate real-world crises, so that each team is familiar with how their roles intersect during a high-pressure situation and feels comfortable when an actual crisis arises. ... By simulating realistic scenarios – whether it’s ransomware incidents or malware attacks – those in leadership positions can directly test and measure the incident response plan so that it becomes an ingrained process. Throw in curveballs when needed, and use these exercises to identify gaps in processes, tools, or communication. There’s a world of issues to uncover: disconnected tools and systems; a lack of automation that could speed up response times; and excessive documentation requirements.


First Principles in Foundation Model Development

The mapping of words and concepts into high-dimensional vectors captures semantic relationships in a continuous space. Words with similar meanings or that frequently appear in similar contexts are positioned closer to each other in this vector space. This allows the model to understand analogies and subtle nuances in language. The emergence of semantic meaning from co-occurrence patterns highlights the statistical nature of this learning process. Hierarchical knowledge structures, such as the understanding that “dog” is a type of “animal,” which is a type of “living being,” develop organically as the model identifies recurring statistical relationships across vast amounts of text. ... The self-attention mechanism represents a significant architectural innovation. Unlike recurrent neural networks that process sequences sequentially, self-attention allows the model to consider all parts of the input sequence simultaneously when processing each word. The “dynamic weighting of contextual relevance” means that for any given word in the input, the model can attend more strongly to other words that are particularly relevant to its meaning in that specific context. This ability to capture long-range dependencies is critical for understanding complex language structures. The parallel processing capability significantly speeds up training and inference. 
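
A minimal NumPy sketch of the scaled dot-product self-attention described above: every token attends to every other token in parallel, with the weighting determined dynamically by query–key similarity. The matrices and sizes are toy values for illustration.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over token embeddings X (seq_len x d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise relevance of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: dynamic weighting of contextual relevance
    return weights @ V                              # each output mixes the whole sequence at once

# Toy example: 4 tokens with 8-dimensional embeddings and random projection matrices.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```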


The best preparation for a password-less future is to start living there now

One of the big ideas behind passkeys is to keep us users from behaving as our own worst enemies. For nearly two decades, malicious actors -- mainly phishers and smishers -- have been tricking us into giving them our passwords. You'd think we would have learned how to detect and avoid these scams by now. But we haven't, and the damage is ongoing. ... But let's be clear: Passkeys are not passwords. If we're getting rid of passwords, shouldn't we also get rid of the phrase "password manager?" Note that there are two primary types of credential managers. The first is the built-in credential manager. These are the ones from Apple, Google, Microsoft, and some browser makers built into our platforms and browsers, including Windows, Edge, MacOS, Android, and Chrome. With passkeys, if you don't bring your own credential manager, you'll likely end up using one of these. ... The FIDO Alliance defines a "roaming authenticator" as a separate device to which your passkeys can be securely saved and recalled. Examples are hardware security keys (e.g., Yubico) and recent Android phones and tablets, which can act in the capacity of a hardware security key. Since your credentials to your credential manager are literally the keys to your entire kingdom, they deserve some extra special security.


Mind the Gap: Assessing Data Quality Readiness

Data Quality Readiness is defined as the ratio of the number of fully described Data Quality Measure Elements that are being calculated and/or collected to the number of Data Quality Measure Elements in the desired set of Data Quality Measures. By fully described I mean both the “number of data values” part and the “that are outliers” part. The first prerequisite activity is determining which Quality Measures you want to implement. The ISO standard defines 15 different Data Quality Characteristics. I covered those last time. The Data Quality Characteristics are made up of 63 Quality Measures. The Quality Measures are categorized as Highly Recommendable (19), Recommendable (36), and For Reference (8). This provides a starting point for prioritization. Begin with a few measures that are most applicable to your organization and that will have the greatest potential to improve the quality of your data. The reusability of the Quality Measures can factor into the decision, but it shouldn’t be the primary driver. The objective is not merely to collect information for its own sake, but to use that information to generate value for the enterprise. The result will be a set of Data Quality Measure Elements to collect and calculate. You do the ones that are best for you, but I would recommend looking at two in particular.
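
A minimal sketch of the readiness ratio as defined above, with hypothetical element names standing in for the ISO-defined Data Quality Measure Elements:

```python
# Data Quality Readiness =
#   fully described elements already collected or calculated
#   ---------------------------------------------------------
#   elements required by the desired set of Quality Measures
required_elements = {
    "number_of_data_values",     # the "number of data values" part
    "number_of_outlier_values",  # the "that are outliers" part
    "number_of_null_values",
    "number_of_records_assessed",
}
collected_elements = {"number_of_data_values", "number_of_records_assessed"}

readiness = len(collected_elements & required_elements) / len(required_elements)
print(f"Data Quality Readiness: {readiness:.0%}")  # 50%
```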


Why non-human identity security is the next big challenge in cybersecurity

What makes this particularly challenging is that each of these identities requires access to sensitive resources and carries potential security risks. Unlike human users, who follow predictable patterns and can be managed through traditional IAM solutions, non-human identities operate 24/7, often with elevated privileges, making them attractive targets for attackers. ... We’re witnessing a paradigm shift in how we need to think about identity security. Traditional security models were built around human users – focusing on aspects like authentication, authorisation and access management from a human-centric perspective. But this approach is inadequate for the machine-dominated future we’re entering. Organisations need to adopt a comprehensive governance framework specifically designed for non-human identities. This means implementing automated discovery and classification of all machine identities and their secrets, establishing centralised visibility and control and enforcing consistent security policies across all platforms and environments. ... First, organisations need to gain visibility into their non-human identity landscape. This means conducting a thorough inventory of all machine identities and their secrets, their access patterns and their risk profiles.


Preparing for the next wave of machine identity growth

First, let’s talk about the problem of ownership. Even organizations that have conducted a thorough inventory of the machine identities in their environments often lack a clear understanding of who is responsible for managing those identities. In fact, 75% of the organizations we surveyed indicated that they don’t have assigned ownership for individual machine identities. That’s a real problem—especially since poor (or insufficient) governance practices significantly increase the likelihood of compromised access, data loss, and other negative outcomes. Another critical blind spot is around understanding what data each machine identity can or should be able to access—and just as importantly, what it cannot and should not access. Without clarity, it becomes nearly impossible to enforce proper security controls, limit unnecessary exposure, or maintain compliance. Each machine identity is a potential access point to sensitive data and critical systems. Failing to define and control their access scope opens the door to serious risk. Addressing the issue starts with putting a comprehensive machine identity security solution in place—ideally one that lets organizations govern machine identities just as they do human identities. Automation plays a critical role: with so many identities to secure, a solution that can discover, classify, assign ownership, certify, and manage the full lifecycle of machine identities significantly streamlines the process.


To Compete, Banking Tech Needs to Be Extensible. A Flexible Platform is Key

The banking ecosystem includes three broad stages along the trajectory toward extensibility, according to Ryan Siebecker, a forward deployed engineer at Narmi, a banking software firm. These include closed, non-extensible systems — typically legacy cores with proprietary software that doesn’t easily connect to third-party apps; systems that allow limited, custom integrations; and open, extensible systems that allow API-based connectivity to third-party apps. ... The route to extensibility can be enabled through an internally built, custom middleware system, or institutions can work with outside vendors whose systems operate in parallel with core systems, including Narmi. Michigan State University Federal Credit Union, which began its journey toward extensibility in 2009, pursued an independent route by building in-house middleware infrastructure to allow API connectivity to third-party apps. Building in-house made sense given the early rollout of extensible capabilities, but when developing a toolset internally, institutions need to consider appropriate staffing levels — a commitment not all community banks and credit unions can make. For MSUFCU, the benefit was greater customization, according to the credit union’s chief technology officer Benjamin Maxim. "With the timing that we started, we had to do it all ourselves," he says, noting that it took about 40 team members to build a middleware system to support extensibility.


5 Strategies for Securing and Scaling Streaming Data in the AI Era

Streaming data should never be wide open within the enterprise. Least-privilege access controls, enforced through role-based (RBAC) or attribute-based (ABAC) access control models, limit each user or application to only what’s essential. Fine-grained access control lists (ACLs) add another layer of protection, restricting read/write access to only the necessary topics or channels. Combine these controls with multifactor authentication, and even a compromised credential is unlikely to give attackers meaningful reach. ... Virtual private cloud (VPC) peering and private network setups are essential for enterprises that want to keep streaming data secure in transit. These configurations ensure data never touches the public internet, thus eliminating exposure to distributed denial of service (DDoS), man-in-the-middle attacks and external reconnaissance. Beyond security, private networking improves performance. It reduces jitter and latency, which is critical for applications that rely on subsecond delivery or AI model responsiveness. While VPC peering takes thoughtful setup, the benefits in reliability and protection are well worth the investment. ... Just as importantly, security needs to be embedded into culture. Enterprises that regularly train their employees on privacy and data protection tend to identify issues earlier and recover faster.
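
A minimal sketch of the least-privilege idea above: a role-based check combined with a per-principal topic ACL, so a credential only reaches the streams it needs. Role names, principals, and topics are illustrative.

```python
# Role -> allowed operations (RBAC); principal -> topics it may touch (ACL).
ROLE_PERMISSIONS = {
    "analytics-reader": {"read"},
    "ingest-service":   {"read", "write"},
}
TOPIC_ACL = {
    "svc-fraud-model": {"transactions"},            # may only read the topic it needs
    "svc-ingest":      {"transactions", "clicks"},
}

def authorize(principal: str, role: str, operation: str, topic: str) -> bool:
    """Allow an operation only if the role grants it AND the ACL lists the topic."""
    return operation in ROLE_PERMISSIONS.get(role, set()) and topic in TOPIC_ACL.get(principal, set())

print(authorize("svc-fraud-model", "analytics-reader", "read",  "transactions"))  # True
print(authorize("svc-fraud-model", "analytics-reader", "write", "transactions"))  # False: role lacks write
print(authorize("svc-fraud-model", "analytics-reader", "read",  "clicks"))        # False: ACL lacks topic
```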


Supply Chain Cybersecurity – CISO Risk Management Guide

Modern supply chains often span continents and involve hundreds or even thousands of third-party vendors, each with their security postures and vulnerabilities. Attackers have recognized that breaching a less secure supplier can be the easiest way to compromise a well-defended target. Recent high-profile incidents have shown that supply chain attacks can lead to data breaches, operational disruptions, and significant financial losses. The interconnectedness of digital systems means that a single compromised vendor can have a cascading effect, impacting multiple organizations downstream. For CISOs, this means that traditional perimeter-based security is no longer sufficient. Instead, a holistic approach must be taken that considers every entity with access to critical systems or data as a potential risk vector. ... Building a secure supply chain is not a one-time project—it’s an ongoing journey that demands leadership, collaboration, and adaptability. CISOs must position themselves as business enablers, guiding the organization to view cybersecurity not as a barrier but as a competitive advantage. This starts with embedding cybersecurity considerations into every stage of the supplier lifecycle, from onboarding to offboarding. Leadership engagement is crucial: CISOs should regularly brief the executive team and board on supply chain risks, translating technical findings into business impacts such as potential downtime, reputational damage, or regulatory penalties.


Developers Must Slay the Complexity and Security Issues of AI Coding Tools

Beyond adding further complexity to the codebase, AI models also lack the contextual nuance that is often necessary for creating high-quality, secure code, primarily when used by developers who lack security knowledge. As a result, vulnerabilities and other flaws are being introduced at a pace never before seen. The current software environment has grown out of control security-wise, showing no signs of slowing down. But there is hope for slaying these twin dragons of complexity and insecurity. Organizations must step into the dragon’s lair armed with strong developer risk management, backed by education and upskilling that gives developers the tools they need to bring software under control. ... AI tools increase the speed of code delivery, enhancing efficiency in raw production, but those early productivity gains are being overwhelmed by code maintainability issues later in the SDLC. The answer is to address those issues at the beginning, before they put applications and data at risk. ... Organizations involved in software creation need to change their culture, adopting a security-first mindset in which secure software is seen not just as a technical issue but as a business priority. Persistent attacks and high-profile data breaches have become too common for boardrooms and CEOs to ignore. 

Daily Tech Digest - April 27, 2025


Quote for the day:

“Most new jobs won’t come from our biggest employers. They will come from our smallest. We’ve got to do everything we can to make entrepreneurial dreams a reality.” -- Ross Perot



7 key strategies for MLops success

Like many things in life, in order to successfully integrate and manage AI and ML into business operations, organisations first need to have a clear understanding of the foundations. The first fundamental of MLops today is understanding the differences between generative AI models and traditional ML models. Cost is another major differentiator. The calculations of generative AI models are more complex resulting in higher latency, demand for more computer power, and higher operational expenses. Traditional models, on the other hand, often utilise pre-trained architectures or lightweight training processes, making them more affordable for many organisations. ... Creating scalable and efficient MLops architectures requires careful attention to components like embeddings, prompts, and vector stores. Fine-tuning models for specific languages, geographies, or use cases ensures tailored performance. An MLops architecture that supports fine-tuning is more complicated and organisations should prioritise A/B testing across various building blocks to optimise outcomes and refine their solutions. Aligning model outcomes with business objectives is essential. Metrics like customer satisfaction and click-through rates can measure real-world impact, helping organisations understand whether their models are delivering meaningful results. 


If we want a passwordless future, let's get our passkey story straight

When passkeys work, which is not always the case, they can offer a nearly automagical experience compared to the typical user ID and password workflow. Some passkey proponents like to say that passkeys will be the death of passwords. More realistically, however, at least for the next decade, they'll mean the death of some passwords -- perhaps many passwords. We'll see. Even so, the idea of killing passwords is a very worthy objective. ... With passkeys, the device that the end user is using – for example, their desktop computer or smartphone -- is the one that's responsible for generating the public/private key pair as a part of an initial passkey registration process. After doing so, it shares the public key – the one that isn't a secret – with the website or app that the user wants to login to. The private key -- the secret -- is never shared with that relying party. This is where the tech article above has it backward. It's not "the site" that "spits out two pieces of code" saving one on the server and the other on your device. ... Passkeys have a long way to go before they realize their potential. Some of the current implementations are so alarmingly bad that it could delay their adoption. But adoption of passkeys is exactly what's needed to finally curtail a decades-long crime spree that has plagued the internet. 
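
A minimal sketch of the registration flow as corrected above (a simplification, not the WebAuthn API itself): the device generates the key pair, and only the public key is sent to the relying party.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

# Registration happens on the user's device, not on the site.
private_key = Ed25519PrivateKey.generate()   # the secret: stays on the device / in the credential manager

# Only the public key is shared with the relying party (the website or app).
public_key_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

registration_payload = {"username": "alice", "public_key": public_key_bytes.hex()}
# send(registration_payload) -> the relying party stores the public key; it never sees the private key
print(registration_payload)
```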



AI: More Buzzword Than Breakthrough

While Artificial Intelligence focuses on creating systems that simulate human intelligence, Intelligent Automation leverages these AI capabilities to automate end-to-end business processes. In essence, AI is the brain that provides cognitive functions, while Intelligent Automation is the body that executes tasks using AI’s intelligence. This distinction is critical; although Artificial Intelligence is a component of Intelligent Automation, not all AI applications result in automation, and not all automation requires advanced Artificial Intelligence. ... Intelligent Automation automates and optimizes business processes by combining AI with automation tools. This integration results in increased efficiency and reduced operating costs. For instance, Intelligent Automation can streamline supply chain operations by automating inventory management, order fulfillment, and logistics, resulting in faster turnaround times and fewer errors. ... In recent years, the term “AI” has been widely used as a marketing buzzword, often applied to technologies that do not have true AI capabilities. This phenomenon, sometimes referred to as “AI washing,” involves branding traditional automation or data processing systems as AI in order to capitalize on the term’s popularity. Such practices can mislead consumers and businesses, leading to inflated expectations and potential disillusionment with the technology.


Introduction to API Management

API gateways are pivotal in managing both traffic and security for APIs. They act as the frontline interface between APIs and the users, handling incoming requests and directing them to the appropriate services. API gateways enforce policies such as rate limiting and authentication, ensuring secure and controlled access to API functions. Furthermore, they can transform and route requests, collect analytics data and provide caching capabilities. ... With API governance, businesses get the most out of their investment. The purpose of API governance is to make sure that APIs are standardized so that they are complete, compliant and consistent. Effective API governance enables organizations to identify and mitigate API-related risks, including performance concerns, compliance issues and security vulnerabilities. API governance is complex and involves security, technology, compliance, utilization, monitoring, performance and education. Organizations can make their APIs secure, efficient, compliant and valuable to users by following best practices in these areas. ... Security is paramount in API management. Advanced security features include authentication mechanisms like OAuth, API keys and JWT (JSON Web Tokens) to control access. Encryption, both in transit and at rest, ensures data integrity and confidentiality.
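
A minimal, standard-library-only sketch of two of the gateway responsibilities described above, API-key authentication and rate limiting, using a simple per-client sliding window. The key values and limits are illustrative assumptions, not a production design.

```python
import time
import hmac

API_KEYS = {"client-a": "s3cr3t-key-a"}   # illustrative; real gateways store hashed or vaulted keys
RATE_LIMIT = 5                            # requests allowed per window
WINDOW_SECONDS = 60.0
_requests: dict[str, list[float]] = {}

def authenticate(client_id: str, presented_key: str) -> bool:
    expected = API_KEYS.get(client_id, "")
    return hmac.compare_digest(expected, presented_key)   # constant-time comparison

def within_rate_limit(client_id: str) -> bool:
    now = time.monotonic()
    window = [t for t in _requests.get(client_id, []) if now - t < WINDOW_SECONDS]
    allowed = len(window) < RATE_LIMIT
    if allowed:
        window.append(now)
    _requests[client_id] = window
    return allowed

def gateway(client_id: str, presented_key: str, path: str) -> str:
    if not authenticate(client_id, presented_key):
        return "401 Unauthorized"
    if not within_rate_limit(client_id):
        return "429 Too Many Requests"
    return f"200 OK: routed {path} to backend service"   # routing, transformation, caching would happen here

print(gateway("client-a", "s3cr3t-key-a", "/v1/accounts"))
```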


Sustainability starts within: Flipkart & Furlenco on building a climate-conscious culture

Based on the insights from Flipkart and Furlenco, here are six actionable steps for leaders seeking to embed climate goals into their company culture: Lead with intent: Make climate goals a strategic priority, not just a CSR initiative. Signal top-level commitment and allocate leadership roles accordingly. Operationalise sustainability: Move beyond policies into process design — from green supply chains to net-zero buildings and water reuse systems. Make It measurable: Integrate climate-related KPIs into team goals, performance reviews, and business dashboards. Empower employees: Create space for staff to lead climate initiatives, volunteer, learn, and innovate. Build purpose into daily roles. Foster dialogue and storytelling: Share wins, losses, and journeys. Use Earth Day campaigns, internal newsletters, and learning modules to bring sustainability to life. Measure Culture, Not Just Carbon: Assess how employees feel about their role in climate action — through surveys, pulse checks, and feedback loops. ... Beyond the company walls, this cultural approach to climate leadership has ripple effects. Customers are increasingly drawn to brands with strong environmental values, investors are rewarding companies with robust ESG cultures, and regulators are moving from voluntary frameworks to mandatory disclosures.


Proof-of-concept bypass shows weakness in Linux security tools

An Israeli vendor was able to evade several leading Linux runtime security tools using a new proof-of-concept (PoC) rootkit that it claims reveals the limitations of many products in this space. The work of cloud and Kubernetes security company Armo, the PoC is called ‘Curing’, a portmanteau word that combines the idea of a ‘cure’ with the io_uring Linux kernel interface that the company used in its bypass PoC. Using Curing, Armo found it was possible to evade three Linux security tools to varying degrees: Falco (created by Sysdig but now a Cloud Native Computing Foundation graduated project), Tetragon from Isovalent (now part of Cisco), and Microsoft Defender. ... Armo said it was motivated to create the rootkit to draw attention to two issues. The first was that, despite the io_uring technique being well documented for at least two years, vendors in the Linux security space had yet to react to the danger. The second purpose was to draw attention to deeper architectural challenges in the design of the Linux security tools that large numbers of customers rely on to protect themselves: “We wanted to highlight the lack of proper attention in designing monitoring solutions that are forward-compatible. Specifically, these solutions should be compatible with new features in the Linux kernel and address new techniques,” said Schendel.


Insider threats could increase amid a chaotic cybersecurity environment

Most organisations have security plans and policies in place to decrease the potential for insider threats. No policy will guarantee immunity to data breaches and IT asset theft but CISOs can make sure their policies are being executed through routine oversight and audits. Best practices include access control and least privilege, which ensures employees, contractors and all internal users only have access to the data and systems necessary for their specific roles. Regular employee training and awareness programmes are also critical. Training sessions are an effective means to educate employees on security best practices such as how to recognise phishing attempts, social engineering attacks and the risks associated with sharing sensitive information. Employees should be trained in how to report suspicious activities – and there should be a defined process for managing these reports. Beyond the security controls noted above, those that govern the IT asset chain of custody are crucial to mitigating the fallout of a breach should assets be stolen by employees, former employees or third parties. The IT asset chain of custody refers to the process that tracks and documents the physical possession, handling and movement of IT assets throughout their lifecycle. A sound programme ensures that there is a clear, auditable trail of who has access to and controls the asset at any given time. 


Distributed Cloud Computing: Enhancing Privacy with AI-Driven Solutions

AI has the potential to play a game-changing role in distributed cloud computing and PETs. By enabling intelligent decision-making and automation, AI algorithms can help us optimize data processing workflows, detect anomalies, and predict potential security threats. AI has been instrumental in helping us identify patterns and trends in complex data sets. We're excited to see how it will continue to evolve in the context of distributed cloud computing. For instance, homomorphic encryption allows computations to be performed on encrypted data without decrypting it first. This means that AI models can process and analyze encrypted data without accessing the underlying sensitive information. Similarly, AI can be used to implement differential privacy, a technique that adds noise to the data to protect individual records while still allowing for aggregate analysis. In anomaly detection, AI can identify unusual patterns or outliers in data without requiring direct access to individual records, ensuring that sensitive information remains protected. While AI offers powerful capabilities within distributed cloud environments, the core value proposition of integrating PETs remains in the direct advantages they provide for data collaboration, security, and compliance. Let's delve deeper into these key benefits, challenges and limitations of PETs in distributed cloud computing.
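
A minimal sketch of the differential-privacy idea mentioned above: Laplace noise calibrated to the query's sensitivity and a privacy budget epsilon is added to an aggregate before release, so the published number reveals little about any individual record. The values are illustrative.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Add Laplace noise scaled to sensitivity/epsilon before releasing an aggregate."""
    return true_value + float(np.random.laplace(loc=0.0, scale=sensitivity / epsilon))

# A counting query has sensitivity 1: one individual changes the count by at most 1.
true_count = 1284            # illustrative aggregate over sensitive records
epsilon = 0.5                # smaller epsilon -> more noise -> stronger privacy
print(round(laplace_mechanism(true_count, sensitivity=1.0, epsilon=epsilon)))
```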


Mobile Applications: A Cesspool of Security Issues

"What people don't realize is you ship your entire mobile app and all your code to this public store where any attacker can download it and reverse it," Hoog says. "That's vastly different than how you develop a Web app or an API, which sit behind a WAF and a firewall and servers." Mobile platforms are difficult for security researchers to analyze, Hoog says. One problem is that developers rely too much on the scanning conducted by Apple and Google on their app stores. When a developer loads an application, either company will conduct specific scans to detect policy violations and to make malicious code more difficult to upload to the repositories. However, developers often believe the scanning is looking for security issues, but it should not be considered a security control, Hoog says. "Everybody thinks Apple and Google have tested the apps — they have not," he says. "They're testing apps for compliance with their rules. They're looking for malicious malware and just egregious things. They are not testing your application or the apps that you use in the way that people think." ... In addition, security issues on mobile devices tend to have a much shorter lifetime, because of the closed ecosystems and the relative rarity of jailbreaking. When NowSecure finds a problem, there is no guarantee that it will last beyond the next iOS or Android update, he says.


The future of testing in compliance-heavy industries

In today’s fast-evolving technology landscape, being an engineering leader in compliance-heavy industries can be a struggle. Managing risks and ensuring data integrity are paramount, but the dangers are constant when working with large data sources and systems. Traditional integration testing within the context of stringent regulatory requirements is more challenging to manage at scale. This leads to gaps, such as insufficient test coverage across interconnected systems, a lack of visibility into data flows, inadequate logging, and missed edge case conditions, particularly in third-party interactions. Due to these weaknesses, security vulnerabilities can pop up and incident response can be delayed, ultimately exposing organizations to violations and operational risk. ... API contract testing is a modern approach used to validate the expectations between different systems, making sure that any changes in APIs don’t break expectations or contracts. Changes might include removing or renaming a field and altering data types or response structures. These seemingly small updates can cause downstream systems to crash or behave incorrectly if they are not properly communicated or validated ahead of time. ... The shifting left practice has a lesser-known cousin: shifting right. Shifting right focuses on post-deployment validation using concepts such as observability and real-time monitoring techniques.
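
A minimal, framework-free sketch of the contract-testing idea above: the consumer pins the fields and types it depends on, and the check fails if the provider renames a field or changes a type. The endpoint shape and field names are illustrative.

```python
# The consumer's side of the contract: fields it relies on and their expected types.
ACCOUNT_CONTRACT = {
    "account_id": str,
    "balance": float,
    "currency": str,
}

def check_contract(response: dict, contract: dict) -> list[str]:
    """Return a list of contract violations (missing fields or changed types)."""
    violations = []
    for field, expected_type in contract.items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            violations.append(
                f"type changed: {field} is {type(response[field]).__name__}, expected {expected_type.__name__}"
            )
    return violations

# Provider renamed "balance" and made currency numeric: the contract test catches both.
provider_response = {"account_id": "A-1001", "current_balance": 250.0, "currency": 840}
print(check_contract(provider_response, ACCOUNT_CONTRACT))
```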

Daily Tech Digest - April 25, 2025


Quote for the day:

"Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it." -- Johann Wolfgang von Goethe


Revolutionizing Application Security: The Plea for Unified Platforms

“Shift left” is a practice that focuses on addressing security risks earlier in the development cycle, before deployment. While effective in theory, this approach has proven problematic in practice as developers and security teams have conflicting priorities. ... Cloud native applications are dynamic; constantly deployed, updated and scaled, so robust real-time protection measures are absolutely necessary. Every time an application is updated or deployed, new code, configurations or dependencies appear, all of which can introduce new vulnerabilities. The problem is that it is difficult to implement real-time cloud security with a traditional, compartmentalized approach. Organizations need real-time security measures that provide continuous monitoring across the entire infrastructure, detect threats as they emerge and automatically respond to them. As Tager explained, implementing real-time prevention is necessary “to stay ahead of the pace of attackers.” ... Cloud native applications tend to rely heavily on open source libraries and third-party components. In 2021, Log4j’s Log4Shell vulnerability demonstrated how a single compromised component could affect millions of devices worldwide, exposing countless enterprises to risk. Effective application security now extends far beyond the traditional scope of code scanning and must reflect the modern engineering environment. 


AI-Powered Polymorphic Phishing Is Changing the Threat Landscape

Polymorphic phishing is an advanced form of phishing campaign that randomizes the components of emails, such as their content, subject lines, and senders’ display names, to create several almost identical emails that only differ by a minor detail. In combination with AI, polymorphic phishing emails have become highly sophisticated, creating more personalized and evasive messages that result in higher attack success rates. ... Traditional detection systems group phishing emails together to enhance their detection efficacy based on commonalities in phishing emails, such as payloads or senders’ domain names. The use of AI by cybercriminals has allowed them to conduct polymorphic phishing campaigns with subtle but deceptive variations that can evade security measures like blocklists, static signatures, secure email gateways (SEGs), and native security tools. For example, cybercriminals modify the subject line by adding extra characters and symbols, or they can alter the length and pattern of the text. ... The standard way of grouping individual attacks into campaigns to improve detection efficacy will become irrelevant by 2027. Organizations need to find alternative measures to detect polymorphic phishing campaigns that don’t rely on blocklists and that can identify the most advanced attacks.
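
A small sketch of why static signatures struggle with polymorphism: appending a single character to a subject line changes its exact hash entirely, while a fuzzy similarity measure still flags the two variants as near-duplicates of one campaign. The subject lines are invented for illustration.

```python
import hashlib
from difflib import SequenceMatcher

original = "Your invoice #48213 is overdue - action required"
variant  = "Your invoice #48213 is overdue - action required."   # one extra character

# Exact signatures (e.g., hash-based blocklists) see two unrelated messages...
print(hashlib.sha256(original.encode()).hexdigest()[:16])
print(hashlib.sha256(variant.encode()).hexdigest()[:16])

# ...while a similarity measure still groups them as variants of the same campaign.
print(f"similarity: {SequenceMatcher(None, original, variant).ratio():.2f}")  # ~0.99
```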


Does AI Deserve Worker Rights?

Chalmers et al declare that there are three things that AI-adopting institutions can do to prepare for the coming consciousness of AI: “They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern.” What would “an appropriate level of moral concern” actually look like? According to Kyle Fish, Anthropic’s AI welfare researcher, it could take the form of allowing an AI model to stop a conversation with a human if the conversation turned abusive. “If a user is persistently requesting harmful content despite the model’s refusals and attempts at redirection, could we allow the model simply to end that interaction?” Fish told the New York Times in an interview. What exactly would model welfare entail? The Times cites a comment made in a podcast last week by podcaster Dwarkesh Patel, who compared model welfare to animal welfare, stating it was important to make sure we don’t reach “the digital equivalent of factory farming” with AI. Considering Nvidia CEO Jensen Huang’s desire to create giant “AI factories” filled with millions of his company’s GPUs cranking through GenAI and agentic AI workflows, perhaps the factory analogy is apropos.


Cybercriminals switch up their top initial access vectors of choice

“Organizations must leverage a risk-based approach and prioritize vulnerability scanning and patching for internet-facing systems,” wrote Saeed Abbasi, threat research manager at cloud security firm Qualys, in a blog post. “The data clearly shows that attackers follow the path of least resistance, targeting vulnerable edge devices that provide direct access to internal networks.” Greg Linares, principal threat intelligence analyst at managed detection and response vendor Huntress, said, “We’re seeing a distinct shift in how modern attackers breach enterprise environments, and one of the most consistent trends right now is the exploitation of edge devices.” Edge devices, ranging from firewalls and VPN appliances to load balancers and IoT gateways, serve as the gateway between internal networks and the broader internet. “Because they operate at this critical boundary, they often hold elevated privileges and have broad visibility into internal systems,” Linares noted, adding that edge devices are often poorly maintained and not integrated into standard patching cycles. Linares explained: “Many edge devices come with default credentials, exposed management ports, secret superuser accounts, or weakly configured services that still rely on legacy protocols — these are all conditions that invite intrusion.”


5 tips for transforming company data into new revenue streams

Data monetization can be risky, particularly for organizations that aren’t accustomed to handling financial transactions. There’s an increased threat of security breaches as other parties become aware that you’re in possession of valuable information, ISG’s Rudy says. Another risk is unintentionally using data you don’t have a right to use or discovering that the data you want to monetize is of poor quality or doesn’t integrate across data sets. Ultimately, the biggest risk is that no one wants to buy what you’re selling. Strong security is essential, Agility Writer’s Yong says. “If you’re not careful, you could end up facing big fines for mishandling data or not getting the right consent from users,” he cautions. If a data breach occurs, it can deeply damage an enterprise’s reputation. “Keeping your data safe and being transparent with users about how you use their info can go a long way in avoiding these costly mistakes.” ... “Data-as-a-service, where companies compile and package valuable datasets, is the base model for monetizing data,” he notes. However, insights-as-a-service, where customers provide prescriptive/predictive modeling capabilities, can demand a higher valuation. Another consideration is offering an insights platform-as-a-service, where subscribers can securely integrate their data into the provider’s insights platform.


Are AI Startups Faking It Till They Make It?

"A lot of VC funds are just kind of saying, 'Hey, this can only go up.' And that's usually a recipe for failure - when that starts to happen, you're becoming detached from reality," Nnamdi Okike, co-founder and managing partner at 645 Ventures, told Tradingview. Companies are branding themselves as AI-driven, even when their core technologies lack substantive AI components. A 2019 study by MMC Ventures found 40% of surveyed "AI startups" in Europe showed no evidence of AI integration in their products or services. And this was before OpenAI further raised the stakes with the launch of ChatGPT in 2022. It's a slippery slope. Even industry behemoths have had to clarify the extent of their AI involvement. Last year, tech giant and the fourth-most richest company in the world Amazon pushed back on allegations that its AI-powered "Just Walk Out" technology installed at its physical grocery stores for a cashierless checkout was largely being driven by around 1,000 workers in India who manually checked almost three quarters of the transactions. Amazon termed these reports "erroneous" and "untrue," adding that the staff in India were not reviewing live footage from the stores but simply reviewing the system. The incentive to brand as AI-native has only intensified. 


From deployment to optimisation: Why cloud management needs a smarter approach

As companies grow, so does their cloud footprint. Managing multiple cloud environments—across AWS, Azure, and GCP—often results in fragmented policies, security gaps, and operational inefficiencies. A Multi-Cloud Maturity Research Report by Vanson Bourne states that nearly 70% of organisations struggle with multi-cloud complexity, despite 95% agreeing that multi-cloud architectures are critical for success. Companies are shifting away from monolithic architecture to microservices, but managing distributed services at scale remains challenging. ... Regulatory requirements like SOC 2, HIPAA, and GDPR demand continuous monitoring and updates. The challenge is not just staying compliant but ensuring that security configurations remain airtight. IBM’s Cost of a Data Breach Report reveals that the average cost of a data breach in India reached ₹195 million in 2024, with cloud misconfiguration accounting for 12% of breaches. The risk is twofold: businesses either overprovision resources—wasting money—or leave environments under-secured, exposing them to breaches. Cyber threats are also evolving, with attackers increasingly targeting cloud environments. Phishing and credential theft accounted for 18% of incidents each, according to the IBM report. 


Inside a Cyberattack: How Hackers Steal Data

Once a hacker breaches the perimeter, the standard practice is to establish a beachhead and then move laterally to find the organisation’s crown jewels: their most valuable data. Within a financial or banking organisation it is likely there is a database on their server that contains sensitive customer information. A database is essentially a complicated spreadsheet, wherein a hacker can simply run a SELECT query and copy everything. In this instance data security is essential; however, many organisations confuse data security with cybersecurity. Organisations often rely on encryption to protect sensitive data, but encryption alone isn’t enough if the decryption keys are poorly managed. If an attacker gains access to the decryption key, they can instantly decrypt the data, rendering the encryption useless. ... To truly safeguard data, businesses must combine strong encryption with secure key management, access controls, and techniques like tokenisation or format-preserving encryption to minimise the impact of a breach. A database protected by Privacy Enhancing Technologies (PETs), such as tokenisation, becomes unreadable to hackers if the decryption key is stored offsite. Without breaching the organisation’s data protection vendor to access the key, an attacker cannot decrypt the data – making the process significantly more complicated. This can be a major deterrent to hackers.
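
A minimal sketch of the tokenisation idea described above: sensitive values are swapped for random tokens, and the mapping lives in a separate vault, so a copied database table yields nothing readable without also breaching the vault. The in-memory dictionary here stands in for an external, separately secured token vault.

```python
import secrets

_token_vault: dict[str, str] = {}   # stands in for an external, separately secured token vault

def tokenize(value: str) -> str:
    """Replace a sensitive value with a random token; only the vault can reverse it."""
    token = "tok_" + secrets.token_hex(8)
    _token_vault[token] = value
    return token

def detokenize(token: str) -> str:
    return _token_vault[token]      # requires access to the vault, not just the database

# The application database stores only tokens:
customer_row = {"name": tokenize("Jane Doe"), "card": tokenize("4111 1111 1111 1111")}
print(customer_row)                 # useless to an attacker who copies the table
print(detokenize(customer_row["card"]))
```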


Why Testing is a Long-Term Investment for Software Engineers

At its core, a test is a contract. It tells the system—and anyone reading the code—what should happen when given specific inputs. This contract helps ensure that as the software evolves, its expected behavior remains intact. A system without tests is like a building without smoke detectors. Sure, it might stand fine for now, but the moment something catches fire, there’s no safety mechanism to contain the damage. ... Over time, all code becomes legacy. Business requirements shift, architectures evolve, and what once worked becomes outdated. That’s why refactoring is not a luxury—it’s a necessity. But refactoring without tests? That’s walking blindfolded through a minefield. With a reliable test suite, engineers can reshape and improve their code with confidence. Tests confirm that behavior hasn’t changed—even as the internal structure is optimized. This is why tests are essential not just for correctness, but for sustainable growth. ... There’s a common myth: tests slow you down. But seasoned engineers know the opposite is true. Tests speed up development by reducing time spent debugging, catching regressions early, and removing the need for manual verification after every change. They also allow teams to work independently, since tests define and validate interfaces between components.
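
A small illustration of the "test as contract" point above, written in pytest style around a hypothetical business rule: the test states what must happen for specific inputs, so a later refactor that changes that behaviour fails immediately.

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_contract():
    # The contract: for these inputs, this is the behaviour the rest of the system relies on.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99
    # Invalid input is rejected rather than silently mishandled.
    with pytest.raises(ValueError):
        apply_discount(50.0, 150)
```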


Why the road from passwords to passkeys is long, bumpy, and worth it - probably

While the current plan rests on a solid technical foundation, many important details are barriers to short-term adoption. For example, setting up a passkey for a particular website should be a rather seamless process; however, fully deactivating that passkey still relies on a manual multistep process that has yet to be automated. Further complicating matters, some current user-facing implementations of passkeys are so different from one another that they're likely to confuse end-users looking for a common, recognizable, and easily repeated user experience. ... Passkey proponents talk about how passkeys will be the death of the password. However, the truth is that the password died long ago -- just in a different way. We've all used passwords without considering what is happening behind the scenes. A password is a special kind of secret -- a shared or symmetric secret. For most online services and applications, setting a password requires us to first share that password with the relying party, the website or app operator. While history has proven how shared secrets can work well in very secure and often temporary contexts, if the HaveIBeenPwned.com website teaches us anything, it's that site and app authentication isn't one of those contexts. Passwords are too easily compromised.

Daily Tech Digest - January 17, 2025

The Architect’s Guide to Understanding Agentic AI

All business processes can be broken down into two planes: a control plane and a tools plane. See the graphic below. The tools plane is a collection of APIs, stored procedures and external web calls to business partners. However, for organizations that have started their AI journey, it could also include calls to traditional machine learning models (wave No. 1) and LLMs (wave No. 2) operating in “one-shot” mode. ... The promise of agentic AI is to use LLMs with full knowledge of an organization’s tools plane and allow them to build and execute the logic needed for the control plane. This can be done by providing a “few-shot” prompt to an LLM that has been fine-tuned on an organization’s tools plane. Below is an example of a “few-shot” prompt that answers the same hypothetical question presented earlier. This is also known as letting the LLM think slowly. ... If agentic AI still seems to be made up of too much magic, then consider the simple example below. Every developer who has to write code daily probably asks an LLM a question similar to the one below. ... Agentic AI is the next logical evolution of AI. It is based on capabilities with a solid footing in AI’s first and second waves. The promise is the use of AI to solve more complex problems by allowing them to plan, execute tasks and revise—in other words, allowing them to think slowly. This also promises to produce more accurate responses.
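The article's own few-shot prompt is not reproduced here, but the following hypothetical Python sketch captures the control-plane/tools-plane split: a stubbed call_llm stands in for a fine-tuned model that emits a plan, and a loop executes that plan against stubbed tools-plane functions. All names and the JSON plan format are invented for illustration.
```python
# Hypothetical control-plane loop over a tools plane. call_llm stands in for a
# fine-tuned model returning a JSON plan; tool names, arguments, and the plan
# format are all invented for illustration.
import json

def get_account_balance(account_id: str) -> dict:
    return {"account_id": account_id, "balance": 1250.00}      # stubbed tools-plane API

def send_notification(account_id: str, message: str) -> dict:
    return {"status": "sent", "account_id": account_id, "message": message}

TOOLS = {
    "get_account_balance": get_account_balance,
    "send_notification": send_notification,
}

def call_llm(prompt: str) -> str:
    """Stand-in for a few-shot prompted LLM that plans against the tools plane."""
    return json.dumps([
        {"tool": "get_account_balance", "args": {"account_id": "A-42"}},
        {"tool": "send_notification",
         "args": {"account_id": "A-42", "message": "Balance check complete"}},
    ])

def run_agent(question: str) -> list:
    plan = json.loads(call_llm(question))     # the model "thinks slowly" and emits a plan
    return [TOOLS[step["tool"]](**step["args"]) for step in plan]   # control plane executes it

print(run_agent("Check account A-42 and notify the customer."))
```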


AI datacenters putting zero emissions promises out of reach

Datacenters' use of water and land is another bone of contention, which, in combination with their reliance on tax breaks and the limited number of local jobs they deliver, will see them face growing opposition from local residents and environmental groups. Uptime highlights that many governments have set targets for GHG emissions to reach net zero by a set date, but warns that because the AI boom looks set to test power availability, it will almost certainly put these pledges out of reach. ... Many governments seem convinced of the economic benefits promised by AI at the expense of other concerns, the report notes. The UK is a prime example, this week publishing the AI Opportunities Action Plan and vowing to relax planning rules to prioritize datacenter builds. ... Increasing rack power presents several challenges, the report warns, including the sheer space taken up by power distribution infrastructure such as switchboards, UPS systems, distribution boards, and batteries. Without changes to the power architecture, many datacenters risk becoming an electrical plant built around a relatively small IT room. Solving this will call for changes such as medium-voltage (over 1 kV) distribution to the IT space and novel power distribution topologies. However, this overhaul will take time to unfold, with 2025 potentially a pivotal year for investment to make this possible.


State of passkeys 2025: passkeys move to mainstream

One of the critical factors driving passkeys into the mainstream is the full passkey-readiness of devices, operating systems and browsers. Apple (iOS, macOS, Safari), Google (Android, Chrome) and Microsoft (Windows, Edge) have fully integrated passkey support across their platforms: over 95 percent of all iOS and Android devices are passkey-ready, and over 90 percent have passkey functionality enabled. With Windows soon supporting synced passkeys, all major operating systems ensure users can securely and effortlessly access their credentials across devices. ... With full device support, a polished UX, growing user familiarity, and a proven track record among early adopter implementations, there’s no reason for businesses to delay adopting passkeys. The business advantages of passkeys are compelling. Companies that previously relied on SMS-based authentication can save considerably on SMS costs. Beyond that, enterprises adopting passkeys benefit from reduced support overhead (since fewer password resets are needed), lower risk of breaches (thanks to phishing-resistance), and optimized user flows that improve conversion rates. Collectively, these perks make a convincing business case for passkeys.


Balancing usability and security in the fight against identity-based attacks

AI and ML are a double-edged sword in cybersecurity. On one hand, cybercriminals are using these technologies to make their attacks faster and smarter. They can create highly convincing phishing emails, generate deepfake content, and even find ways to bypass traditional security measures. For example, generative AI can craft emails or videos that look almost real, tricking people into falling for scams. On the flip side, AI and ML are also helping defenders. These technologies allow security systems to quickly analyze vast amounts of data, spotting unusual behavior that might indicate compromised credentials. ... Targeted security training can be useful, but generally you want to reduce dependency on humans as much as possible. This is why it is critical to have controls that meet users where they are. If you can deliver point-in-time guidance, or outright prevent a user from entering their password into a phishing site, it significantly reduces the dependency on the human to make the right decision unassisted every time. When you consider how hard it can be for even security professionals to spot the more sophisticated phishing sites, it’s essential that we help people out as much as possible with technical controls.
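As a small, hypothetical illustration of the defensive side, the sketch below trains scikit-learn's IsolationForest on a handful of invented login features and flags an out-of-pattern login; real credential-abuse detection relies on far richer behavioural signals.
```python
# Hypothetical anomaly-detection sketch using scikit-learn's IsolationForest.
# The login features (hour, failed attempts, distance in km) are invented;
# production systems use much richer behavioural telemetry.
from sklearn.ensemble import IsolationForest

normal_logins = [[9, 0, 2], [10, 1, 5], [17, 0, 1], [8, 0, 3], [18, 1, 4]]
suspicious_login = [[3, 9, 4200]]          # 3 a.m., many failures, far from usual location

model = IsolationForest(contamination=0.1, random_state=0).fit(normal_logins)
print(model.predict(suspicious_login))     # -1 marks the login as anomalous, 1 as normal
```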


Understanding Leaderless Replication for Distributed Data

Leaderless replication is another fundamental replication approach for distributed systems. It alleviates some problems of multi-leader replication while introducing its own. The write conflicts of multi-leader replication are tackled in leaderless replication with quorum-based writes and systematic conflict resolution. Cascading failures, synchronization overhead, and operational complexity can be handled via its decentralized architecture. Removing leaders can simplify cluster management, failure handling, and recovery mechanisms, and any replica can handle reads and writes. ... Direct writes and coordination-based replication are the most common approaches in leaderless replication: in the first, clients write directly to replica nodes, while in the second, writes are mediated by a coordinator. It is worth mentioning that, unlike the leader-follower model, coordinators in leaderless replication do not enforce a particular ordering of writes. ... Failure handling is one of the most challenging aspects of both approaches. While direct writes provide better theoretical availability, they can be problematic during failure scenarios. Coordinator-based systems can provide clearer failure semantics, but at the cost of potential coordinator bottlenecks.
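A minimal sketch of the quorum mechanic (W + R > N) follows, assuming a simplified last-write-wins scheme keyed on a version number; production Dynamo-style stores use vector clocks or other conflict-resolution machinery instead.
```python
# Minimal sketch of quorum writes/reads with N=3, W=2, R=2 (so W + R > N).
# Versioning is simplified to last-write-wins; real systems use richer schemes.
N, W, R = 3, 2, 2

replicas = [{} for _ in range(N)]          # each dict stands in for one replica node

def write(key, value, version):
    acks = 0
    for node in replicas:                  # contact replicas until a write quorum acks
        if version > node.get(key, (None, -1))[1]:
            node[key] = (value, version)
        acks += 1
        if acks >= W:
            return True                    # success after W acknowledgements
    return False

def read(key):
    # Read from any R replicas; because W + R > N, they must overlap the write set.
    responses = [node[key] for node in replicas[N - R:] if key in node]
    return max(responses, key=lambda item: item[1]) if responses else None

write("balance", 100, version=1)
write("balance", 120, version=2)
print(read("balance"))                     # -> (120, 2), the newest acknowledged value
```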


Blockchain in Banking: Use Cases and Examples

Bitcoin has entered a space usually reserved for gold and sovereign bonds: national reserves. While the U.S. Federal Reserve maintains that it cannot hold Bitcoin under current regulations, other financial systems are paying close attention to its potential role as a store of value. On the global stage, Bitcoin is being viewed not just as a speculative asset but as a hedge against inflation and currency volatility. Governments are now debating whether digital assets can sit alongside gold bars in their vaults. Behind all this activity lies blockchain - providing transparency, security, and a framework for something as ambitious as a digital reserve currency. ... Financial assets like real estate, investment funds, or fine art are traditionally expensive, hard to divide, and slow to transfer. Blockchain changes this by converting these assets into digital tokens, enabling fractional ownership and simplifying transactions. UBS launched its first tokenized fund on the Ethereum blockchain, allowing investors to trade fund shares as digital assets. This approach reduces administrative costs, accelerates settlements, and improves accessibility for investors. Additionally, one of Central and Eastern Europe’s largest banks has tokenized fine art on Aleph Zero blockchain. This enables fractional ownership of valuable art pieces while maintaining verifiable proof of ownership and authenticity.


Decentralized AI in Edge Computing: Expanding Possibilities

Federated learning enables decentralized training of AI models directly across multiple edge devices. This approach eliminates the need to transfer raw data to a central server, preserving privacy and reducing bandwidth consumption. Models are trained locally, with only aggregated updates shared to improve the global system. ... Localized data processing empowers edge devices to conduct real-time analytics, facilitating faster decision-making and minimizing reliance on central infrastructure. This capability is fundamental for applications such as autonomous vehicles and industrial automation, where even milliseconds can be vital. ... Blockchain technology is pivotal in decentralized AI for edge computing by providing a secure, immutable ledger for data sharing and task execution across edge nodes. It ensures transparency and trust in resource allocation, model updates, and data verification processes. ... By processing data directly at the edge, decentralized AI removes the delays in sending data to and from centralized servers. This capability ensures faster response times, enabling near-instantaneous decision-making in critical real-time applications. ... Decentralized AI also strengthens privacy by allowing sensitive information to be processed locally on the device rather than sent to external servers.
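The federated-learning idea described above can be sketched in a few lines: each device fits a tiny model on its private data, and only the parameters are averaged centrally. The one-dimensional linear model and the datasets are invented purely for illustration.
```python
# Minimal sketch of federated averaging: each edge device fits a local model
# and only the model parameters (not the raw data) are aggregated centrally.
def local_update(weights, local_data, lr=0.01, epochs=5):
    """One device's training pass on its private data (simple 1-D linear fit)."""
    w, b = weights
    for _ in range(epochs):
        for x, y in local_data:
            err = (w * x + b) - y
            w -= lr * err * x              # gradient step on the local sample
            b -= lr * err
    return (w, b)

def federated_average(updates):
    """Server aggregates parameters only; raw data never leaves the devices."""
    n = len(updates)
    return (sum(u[0] for u in updates) / n, sum(u[1] for u in updates) / n)

global_weights = (0.0, 0.0)
device_datasets = [                        # each dataset stays private to its device
    [(1.0, 2.1), (2.0, 4.0)],
    [(3.0, 6.2), (4.0, 7.9)],
]
for _ in range(10):                        # a few federated rounds
    updates = [local_update(global_weights, data) for data in device_datasets]
    global_weights = federated_average(updates)

print("global model (w, b):", global_weights)
```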


The Myth of Machine Learning Reproducibility and Randomness

The nature of ML systems contributes to the challenge of reproducibility. ML components implement statistical models that provide predictions about some input, such as whether an image is a tank or a car. But it is difficult to provide guarantees about these predictions. As a result, guarantees about the resulting probabilistic distributions are often given only in limits, that is, as distributions across a growing sample. These outputs can also be described by calibration scores and statistical coverage, such as, “We expect the true value of the parameter to be in the range [0.81, 0.85] 95 percent of the time.” ... There are two basic techniques we can use to manage reproducibility. First, we control the seeds for every randomizer used. In practice there may be many. Second, we need a way to tell the system to serialize the training process executed across concurrent and distributed resources. Both approaches require the platform provider to include this sort of support. ... Despite the importance of these exact reproducibility modes, they should not be enabled during production. Engineering and testing should use these configurations for setup, debugging and reference tests, but not during final development or operational testing.
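A minimal sketch of the first technique, pinning every randomizer's seed, might look like the following; it covers the Python standard library and NumPy, while frameworks such as PyTorch or TensorFlow expose their own seed and determinism switches.
```python
# Minimal sketch of pinning every randomizer's seed (standard library and NumPy;
# frameworks such as PyTorch or TensorFlow have their own seed/determinism switches).
import os
import random
import numpy as np

def set_all_seeds(seed: int = 42) -> None:
    os.environ["PYTHONHASHSEED"] = str(seed)   # only affects newly launched interpreters
    random.seed(seed)                          # Python's built-in PRNG
    np.random.seed(seed)                       # NumPy's global PRNG

set_all_seeds(42)
run_a = [random.random(), float(np.random.rand())]
set_all_seeds(42)
run_b = [random.random(), float(np.random.rand())]
assert run_a == run_b                          # identical draws across repeated "runs"
print("reproducible draws:", run_a)
```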


The High-Stakes Disconnect For ICS/OT Security

ICS technologies, crucial to modern infrastructure, are increasingly targeted in sophisticated cyber-attacks. These attacks, often aimed at causing irreversible physical damage to critical engineering assets, highlight the risks of interconnected and digitized systems. Recent incidents like TRISIS, CRASHOVERRIDE, Pipedream, and Fuxnet demonstrate the evolution of cyber threats from mere nuisances to potentially catastrophic events, orchestrated by state-sponsored groups and cybercriminals. These actors target not just financial gains but also disruptive outcomes and acts of warfare, blending cyber and physical attacks. Additionally, human-operated ransomware and targeted ICS/OT ransomware have become a growing concern in recent times. ... Traditional IT security measures, when applied to ICS/OT environments, can provide a false sense of security and disrupt engineering operations and safety. Thus, it is important to consider and prioritize the SANS Five ICS Cybersecurity Critical Controls. This freely available whitepaper sets forth the five most relevant critical controls for an ICS/OT cybersecurity strategy that can flex to an organization's risk model and provides guidance for implementing them.


Execs are prioritizing skills over degrees — and hiring freelancers to fill gaps

Companies are adopting more advanced approaches to assessing potential and current employee skills, blending AI tools with hands-on evaluations, according to Monahan. AI-powered platforms are being used to match candidates with roles based on their skills, certifications, and experience. “Our platform has done this for years, and our new UMA (Upwork’s Mindful AI) enhances this process,” she said. Gartner, however, warned that “rapid skills evolutions can threaten quality of hire, as recruiters struggle to ensure their assessment processes are keeping pace with changing skills. Meanwhile, skills shortages place more weight on new hires being the right hires, as finding replacement talent becomes increasingly challenging. Robust appraisal of candidate skills is therefore imperative, but too many assessments can lead to candidate fatigue.” ... The shift toward skills-based hiring is further driven by a readiness gap in today’s workforce. Upwork’s research found that only 25% of employees feel prepared to work effectively alongside AI, and even fewer (19%) can proactively leverage AI to solve problems. “As companies navigate these challenges, they’re focusing on hiring based on practical, demonstrated capabilities, ensuring their workforce is agile and equipped to meet the demands of a rapidly evolving business landscape,” Monahan said.



Quote for the day:

“If you set your goals ridiculously high and it’s a failure, you will fail above everyone else’s success.” -- James Cameron