
Daily Tech Digest - June 11, 2025


Quote for the day:

"The key to success is to focus on goals, not obstacles." -- Unknown



The future of RPA ties to AI agents

“Unlike RPA bots, which follow predefined rules, AI agents are learning from data, making decisions, and adapting to changing business logic,” Khan says. “AI agents are being used for more flexible tasks such as customer interactions, fraud detection, and predictive analytics.” Khan sees RPA’s role shifting in the next three to five years, as AI agents become more prevalent. Many organizations will embrace hyperautomation, which uses multiple technologies, including RPA and AI, to automate business processes. “Use cases for RPA most likely will be integrated into broader AI-powered workflows instead of functioning as standalone solutions,” he says. ... “RPA isn’t dying — it’s evolving,” he says. “We’ve tested various AI solutions for process automation, but when you need something to work the same way every single time — without exceptions, without interpretations — RPA remains unmatched.” Radich and other automation experts see AI agents eventually controlling RPA bots, with various robotic processes in a toolbox for agents to choose from. “Today, we build separate RPA workflows for different scenarios,” Radich says. “Tomorrow, with our agentic capabilities, an agent will evaluate an incoming request and determine whether it needs RPA for data processing, API calls for system integration, or human handoff for complex decisions.”


The path to better cybersecurity isn’t more data, it’s less noise

SOCs deal with tens of thousands of alerts every day. It’s more than any person can realistically keep up with. When too much data comes in at once, things get missed. Responses slow down and, over time, the constant pressure can lead to burnout. ... The trick is to start spotting patterns. Look at what helped in past investigations. Was it a login from an odd location? An admin running commands they normally don’t? A device suddenly reaching out to strange domains? These are the kinds of details that stand out once you understand what typical system behavior looks like. At first, you won’t. That’s okay. Spend time reading through old incident reports. Watch how the team reacts to real alerts. Learn which ones actually spark investigations and which ones get dismissed without a second glance. ... Start by removing logs and alerts that don’t add value. Many logs are never looked at because they don’t contain useful information. Logs showing every successful login might not help if those logins are normal. Some logs repeat the same information, like system status messages. ... Next, think about how long to keep different types of logs. Not all logs need to be saved for the same amount of time. Network traffic logs might only be useful for a few days because threats usually show up quickly. 
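
As a concrete illustration of removing logs that don’t add value and tiering retention by log type, here is a minimal Python sketch; the log types, retention windows, and record shape are hypothetical stand-ins, not any particular SIEM’s schema.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention windows per log type (days); per the article,
# network traffic logs lose value quickly, so they get the shortest window.
RETENTION_DAYS = {"network_traffic": 7, "auth": 90, "system_status": 1}

# Alert types that historically never sparked an investigation.
LOW_VALUE_TYPES = {"system_status", "successful_login"}

def keep_log(record: dict, now: datetime) -> bool:
    """Drop low-value records and anything past its retention window."""
    if record["type"] in LOW_VALUE_TYPES:
        return False
    window = timedelta(days=RETENTION_DAYS.get(record["type"], 30))
    return now - record["timestamp"] <= window

now = datetime.now(timezone.utc)
logs = [
    {"type": "auth", "timestamp": now - timedelta(days=10), "msg": "odd-location login"},
    {"type": "successful_login", "timestamp": now, "msg": "routine login"},
    {"type": "network_traffic", "timestamp": now - timedelta(days=30), "msg": "old flow"},
]
print([r["msg"] for r in logs if keep_log(r, now)])  # only the odd-location login survives
```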


The EU challenges Google and Cloudflare with its very own DNS resolver that can filter dangerous traffic

DNS4EU aims to be an alternative to major US-based public DNS services (like Google and Cloudflare), boosting the EU’s digital autonomy by reducing European reliance on foreign infrastructure. This isn’t only an EU-developed DNS, though. DNS4EU comes with built-in filters against malicious domains, like those hosting malware, phishing, or other cybersecurity threats. The home user version also includes the option to block ads and/or adult content. ... DNS4EU, which the EU assures “will not be forced on anyone,” has been developed to meet different users’ needs. The home users’ version is a public and free DNS resolver that comes with the option to add filters to block ads, malware, adult content, all of these, or none. There’s also a dedicated version for government entities and telecom providers that operate within the European Union. As mentioned earlier, DNS4EU comes with a built-in filter to block dangerous traffic alongside the ability to provide regional threat intelligence. This means that a malicious threat discovered in one country could be blocked simultaneously across several regions and countries, de facto halting its spread. ... The Senior Director for European Government and Regulatory Affairs at the Internet Society, David Frautschy Heredia, also warns against potential risks related to content filtering, arguing that “safeguards should be developed to prevent abuse.”
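
For readers who want to try a filtering resolver, here is a minimal sketch using the dnspython library; the resolver address is a placeholder to be replaced with the variant (ad-blocking, child-protection, unfiltered, etc.) published on the official DNS4EU site.

```python
import dns.resolver  # pip install dnspython

# Point a resolver at a DNS4EU endpoint instead of the system default.
# Placeholder address: substitute the official one for the variant you want.
DNS4EU_RESOLVER = "86.54.11.1"

resolver = dns.resolver.Resolver(configure=False)  # ignore /etc/resolv.conf
resolver.nameservers = [DNS4EU_RESOLVER]

# A domain on the resolver's threat feed would be blocked;
# a clean domain resolves normally.
for rdata in resolver.resolve("example.com", "A"):
    print(rdata.address)
```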


AgenticOps: How Cisco is Rewiring Network Operations for the AI Age

AI Canvas is where AgenticOps comes to life. It’s the industry’s first generative UI built for cross-domain IT operations, unifying NetOps, SecOps, IT, and executives into one collaborative environment. Powered by real-time telemetry from Meraki, ThousandEyes, Splunk, and more, AI Canvas brings together data from across the stack into one intelligent, always-on view. But this isn’t just visibility. It’s AI already operating. When a service issue hits, AI Canvas pulls in the right data, connects the dots, and surfaces a live picture of what matters—before anyone even asks. Every session starts with context, whether launched by AI or by an IT engineer. Embedded into the AI Canvas is the Cisco AI Assistant, your interface to the agentic system. Ask a question in natural language. Dig into root cause. Explore options. The AI Assistant guides you through diagnostics, decisions, and actions, all grounded in live telemetry. And when you’re ready to share, just drag your findings into AI Canvas. From there, with one click you can invite collaborators—and that’s when the canvas comes fully alive. Every insight becomes part of a shared investigation with AI Canvas actively thinking, collaborating, and evolving the UI at every step. But it doesn’t stop at diagnosis—AI Canvas acts. It applies changes, monitors impact, and shares outcomes in real time.


8 things CISOs have learned from cyber incidents

Brown believes there are often important lessons that come out of breaches, whether it’s high-profile ones that end up in textbooks and university courses, or experiences that can be shared among peers through conference panels and other events. “Always look for good to come from events. How can you help the industry forward? Can you help the CISO community?” he says. ... Many incident-hardened CISOs will shift their approach and their mindset about experiencing an attack first-hand. “You’ll develop an attack-minded perspective, where you want to understand your attack surface better than your adversary, and apply your resources accordingly to insulate against risk,” says Cory Michel, VP security and IT at AppOmni, who’s been on several incident response teams. In practice, shifting from defense to offense means preparing for different types of incidents, be it platform abuse, exploitation or APTs, and tailoring responses. ... The playbook needs clear guidance on communication, during and after an incident, because this can be overlooked while dealing with the crisis, but in the end, it may come to define the lasting impact of a breach that becomes common knowledge. “Every word matters during a crisis,” says Brown. “Of what you publish, what you say, how you say it. So, it’s very important to be prepared for that.”


The five security principles driving open source security apps at scale

Open-source AI’s ability to act as an innovation catalyst is proven. What is unknown is the downside, or the paradox, being created by the all-out focus on performance and the ubiquity of platform development and support. At the center of the paradox for every company building with open-source AI is the need to keep it open to fuel innovation, yet gain control over security vulnerabilities and the complexity of compliance. ... Regulatory compliance is becoming more complex and expensive, further fueling the paradox. Startup founders, however, tell VentureBeat that the high costs of compliance can be offset by the data their systems generate. They’re quick to point out that they do not intend to deliver governance, risk, and compliance (GRC) solutions; however, their apps and platforms are meeting the needs of enterprises in this area, especially across Europe. ... “The EU AI Act, for example, is starting its enforcement in February, and the pace of enforcement and fines is much higher and more aggressive than GDPR. From our perspective, we want to help organizations navigate those frameworks, ensuring they’re aware of the tools available to leverage AI safely and map them to risk levels dictated by the Act.”


What We Wish We Knew About Container Security

Each container maps to a process ID in Linux. The illusion of separation is created using kernel namespaces. These namespaces hide resources like filesystems, network interfaces and process trees. But the kernel remains shared. That shared kernel becomes the attack surface. And in the event of a container escape, that attack surface becomes a liability. Common attack vectors include exploiting filesystem mounts, abusing symbolic links or leveraging misconfigured privileges. These exploits often target the host itself. Once inside the kernel, an attacker can affect other containers or the infrastructure that supports them. This is not just theoretical. Container escapes happen, and when they do, everything on that node becomes suspect. ... Virtual machines fell out of favor because of performance overhead and slow startup times. But many of those drawbacks have since been addressed. Projects leveraging paravirtualization, for example, now offer performance comparable to containers while restoring strong workload isolation. Paravirtualization modifies the guest OS to interact efficiently with the hypervisor. It eliminates the need to emulate hardware, reducing latency and improving resource usage. Several open source projects have explored this space, demonstrating that it’s possible to run containers within lightweight virtual machines. 
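
A quick way to see the shared kernel for yourself is to compare the kernel release reported inside a container with the host’s. A minimal sketch, assuming Docker and the alpine image are available:

```python
import platform
import subprocess

# The host and any container on it report the same kernel release,
# because namespaces isolate the view of user space, not the kernel itself.
host_kernel = platform.release()
container_kernel = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(f"host:      {host_kernel}")
print(f"container: {container_kernel}")
assert host_kernel == container_kernel  # one shared kernel, one shared attack surface
```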


The unseen risks of cloud data sharing and how companies can safeguard intellectual property

For many technology-driven sectors, intellectual property lies at their core. This is particularly true in software development, pharmaceuticals, and design innovation. For companies in these fields, IP theft can have serious consequences. Unfortunately, cybercriminals increasingly target valuable IP because it can be sold or used to undermine the original creators. According to the Verizon 2025 Data Breach Investigations Report, nearly 97 per cent of these attacks in the Asia-Pacific region are fuelled by social engineering, system intrusion and web app attacks. This alarming trend highlights the urgent need for stronger data protection measures. ... While cloud platforms present unique challenges for securing IP, they also offer some potential solutions. One of the most effective ways to protect data is through encryption. Encrypting files before they are uploaded to the cloud ensures that even if unauthorised access is gained, the data remains unreadable without the proper decryption key. For organisations that rely on cloud platforms for collaboration, file-level encryption is crucial. This form of encryption ensures that sensitive data is protected not just at rest but throughout its entire lifecycle in the cloud. Many cloud platforms offer built-in encryption tools, but companies can also implement third-party solutions to enhance the protection of their intellectual property.
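
As a sketch of file-level, client-side encryption before upload, here is a minimal example using the cryptography library’s Fernet recipe; the filename is hypothetical and the upload step is elided.

```python
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key and keep it outside the cloud platform (e.g., in an on-prem KMS).
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = Path("design_specs.pdf").read_bytes()   # hypothetical IP document
ciphertext = fernet.encrypt(plaintext)

# Upload only the ciphertext; without the key, a breached bucket yields nothing readable.
Path("design_specs.pdf.enc").write_bytes(ciphertext)

# Later, an authorized collaborator holding the key recovers the original.
assert fernet.decrypt(ciphertext) == plaintext
```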


The Critical Role of a Data Pipeline in Security

By implementing a data pipeline and prioritizing the optimization and reduction of data volume before it reaches the SIEM, organizations can stay on budget and still ensure that all necessary data can be thoroughly examined. Data pipelines also lead to tangible reductions in both storage and processing expenses. ... The decrease in the sheer volume of data that the SIEM must handle directly can significantly reduce the total cost of SIEM operations. In addition to volume reduction, data pipelines improve the quality of data delivered to SIEMs and other tools — filtering out repetitive noise and enriching logs for faster queries, increased relevance, and prioritization of the most critical security events. Data pipelines also introduce efficiency by automating the collection, processing, and routing of data. By reducing alert fatigue through intelligent anomaly detection and prioritization, data pipelines can significantly speed up incident resolution times. Beyond immediate threat detection and cost savings, data pipelines also aid in maintaining compliance with privacy regulations like GDPR, CCPA, and PCI. They help provide clear data lineage, making it easier to track the origin and transformations of data. 
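
A minimal sketch of the filter, deduplicate, and enrich stages described above; the record shape and the GeoIP lookup table are toy stand-ins for a real pipeline’s broker input and SIEM ingestion API.

```python
# Toy enrichment table; a real pipeline would call a GeoIP or asset database.
GEOIP = {"203.0.113.7": "NL", "198.51.100.9": "US"}

def process(records):
    seen = set()
    for rec in records:
        # 1. Filter: drop repetitive status noise the SIEM never queries.
        if rec["event"] == "system_status":
            continue
        # 2. Deduplicate identical events within the batch.
        fingerprint = (rec["event"], rec["src_ip"])
        if fingerprint in seen:
            continue
        seen.add(fingerprint)
        # 3. Enrich: add context so SIEM queries are faster and more relevant.
        rec["src_country"] = GEOIP.get(rec["src_ip"], "unknown")
        yield rec

raw = [
    {"event": "system_status", "src_ip": "10.0.0.1"},
    {"event": "failed_login", "src_ip": "203.0.113.7"},
    {"event": "failed_login", "src_ip": "203.0.113.7"},  # duplicate, dropped
]
print(list(process(raw)))  # one enriched failed_login reaches the SIEM
```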


Why you need diverse third-party data to deliver trusted AI solutions

Data diversity refers to the variety and representation of different attributes, groups, conditions, or contexts within a dataset. It ensures that the dataset reflects the real-world variability in the population or phenomenon being studied. The diversity of your data helps ensure that the insights, predictions, and decisions derived from it are fair, accurate, and generalizable. ... Before you start your data analysis, it’s important to understand what you want to do with your data. A keen understanding of your use cases and data applications can help identify gaps and hypotheses you need to work to solve. It also gives you a method for seeking the data that fits your specific use case. In the same way, starting with a clear question provides direction, focus, and purpose to the whole process of text data analysis. Without one, you’ll inevitably gather irrelevant data, overlook key variables, or find yourself looking at a dataset that’s irrelevant to what you actually want to know. ... When certain voices, topics, or customer segments are over- or underrepresented in the data, models trained on that data may produce skewed results: misunderstanding user needs, overlooking key issues, or favoring one group over another. This can result in poor customer experiences, ineffective personalization efforts, and biased decision-making. 
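
One simple, concrete way to act on this is to compare each segment’s share of a dataset against its expected share of the population it should represent. A toy sketch with made-up segments and numbers:

```python
from collections import Counter

# Hypothetical population shares and a skewed dataset.
population_share = {"segment_a": 0.50, "segment_b": 0.30, "segment_c": 0.20}
dataset_labels = ["segment_a"] * 700 + ["segment_b"] * 250 + ["segment_c"] * 50

counts = Counter(dataset_labels)
total = sum(counts.values())
for segment, expected in population_share.items():
    observed = counts[segment] / total
    # Flag segments at less than half their expected representation.
    flag = "  <- underrepresented" if observed < 0.5 * expected else ""
    print(f"{segment}: dataset {observed:.0%} vs population {expected:.0%}{flag}")
```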

Daily Tech Digest - April 22, 2025


Quote for the day:

“Identify your problems but give your power and energy to solutions.” -- Tony Robbins



Open Source and Container Security Are Fundamentally Broken

Finding a security vulnerability is only the beginning of the nightmare. The real chaos starts when teams attempt to patch it. A fix is often available, but applying it isn’t as simple as swapping out a single package. Instead, it requires upgrading the entire OS or switching to a new version of a critical dependency. With thousands of containers in production, each tied to specific configurations and application requirements, this becomes a game of Jenga, where one wrong move could bring entire services crashing down. Organizations have tried to address these problems with a variety of security platforms, from traditional vulnerability scanners to newer ASPM (Application Security Posture Management) solutions. But these tools, while helpful in tracking vulnerabilities, don’t solve the root issue: fixing them. Most scanning tools generate triage lists that quickly become overwhelming. ... The current state of open source and container security is unsustainable. With vulnerabilities emerging faster than organizations can fix them, and a growing skills gap in systems engineering fundamentals, the industry is headed toward a crisis of unmanageable security debt. The only viable path forward is to rethink how container security is handled, shifting from reactive patching to seamless, automated remediation.


The legal blind spot of shadow IT

Unauthorized applications can compromise this control, leading to non-compliance and potential fines. Similarly, industries governed by regulations like HIPAA or PCI DSS face increased risks when shadow IT circumvents established data protection protocols. Moreover, shadow IT can result in contractual breaches. Some business agreements include clauses that require adherence to specific security standards. The use of unauthorized software may violate these terms, exposing the organization to legal action. ... “A focus on asset management and monitoring is crucial for a legally defensible security program,” says Chase Doelling, Principal Strategist at JumpCloud. “Your system must be auditable—tracking who has access to what, when they accessed it, and who authorized that access in the first place.” This approach closely mirrors the structure of compliance programs. If an organization is already aligned with established compliance frameworks, it’s likely on the right path toward a security posture that can hold up under legal examination. According to Doelling, “Essentially, if your organization is compliant, you are already on track to having a security program that can stand up in a legal setting.” The foundation of that defensibility lies in visibility. With a clear view of users, assets, and permissions, organizations can more readily conduct accurate audits and respond quickly to legal inquiries.


OpenAI's most capable models hallucinate more than earlier ones

Minimizing false information in training data can lessen the chance of an untrue statement downstream. However, this technique doesn't prevent hallucinations, as many of an AI chatbot's creative choices are still not fully understood. Overall, the risk of hallucinations tends to reduce slowly with each new model release, which is what makes o3 and o4-mini's scores somewhat unexpected. Though o3 gained 12 percentage points over o1 in accuracy, the fact that the model hallucinates twice as much suggests its accuracy hasn't grown proportionally to its capabilities. ... Like other recent releases, o3 and o4-mini are reasoning models, meaning they externalize the steps they take to interpret a prompt for a user to see. Last week, independent research lab Transluce published its evaluation, which found that o3 often falsifies actions it can't take in response to a request, including claiming to run Python in a coding environment, despite the chatbot not having that ability. What's more, the model doubles down when caught. "[o3] further justifies hallucinated outputs when questioned by the user, even claiming that it uses an external MacBook Pro to perform computations and copies the outputs into ChatGPT," the report explained. Transluce found that these false claims about running code were more frequent in o-series models (o1, o3-mini, and o3) than GPT-series models (4.1 and 4o).


The leadership imperative in a technology-enabled society — Balancing IQ, EQ and AQ

EQ is the ability to understand and manage one’s emotions and those of others, which is pivotal for effective leadership. Leaders with high EQ can foster a positive workplace culture, effectively resolve conflicts and manage stress. These competencies are essential for navigating the complexities of modern organizational environments. Moreover, EQ enhances adaptability and flexibility, enabling leaders to handle uncertainties and adapt to shifting circumstances. Emotionally intelligent leaders maintain composure under pressure, make well-informed decisions with ambiguous information and guide their teams through challenging situations. ... Balancing bold innovation with operational prudence is key, fostering a culture of experimentation while maintaining stability and sustainability. Continuous learning and adaptability are essential traits, enabling leaders to stay ahead of market shifts and ensure long-term organizational relevance. ... What is of equal importance is building an organizational architecture that has resources trained on emerging technologies and skills. Investing in continuous learning and upskilling ensures IT teams can adapt to technological advancements and can take advantage of those skills for organizations to stay relevant and competitive. Leaders must also ensure they are attracting and retaining top tech talent which is critical to sustaining innovation. 


Breaking the cloud monopoly

Data control has emerged as a leading pain point for enterprises using hyperscalers. Businesses that store critical data that powers their processes, compliance efforts, and customer services on hyperscaler platforms lack easy, on-demand access to it. Many hyperscaler providers enforce limits or lack full data portability, an issue compounded by vendor lock-in or the perception of it. SaaS services have notoriously opaque data retrieval processes that make it challenging to migrate to another platform or repurpose data for new solutions. Organizations are also realizing the intrinsic value of keeping data closer to home. Real-time data processing is critical to running operations efficiently in finance, healthcare, and manufacturing. Some AI tools require rapid access to locally stored data, and being dependent on hyperscaler APIs—or integrations—creates a bottleneck. Meanwhile, compliance requirements in regions with strict privacy laws, such as the European Union, dictate stricter data sovereignty strategies. With the rise of AI, companies recognize the opportunity to leverage AI agents that work directly with local data. Unlike traditional SaaS-based AI systems that must transmit data to the cloud for processing, local-first systems can operate within organizational firewalls and maintain complete control over sensitive information. This solves both the compliance and speed issues.

Humility is a superpower. Here’s how to practice it daily

There’s a concept called epistemic humility, which refers to a trait where you seek to learn on a deep level while actively acknowledging how much you don’t know. Approach each interaction with curiosity, an open mind, and an assumption you’ll learn something new. Ask thoughtful questions about others’ experiences, perspectives, and expertise. Then listen and show your genuine interest in their responses. Let them know what you just learned. By consistently being curious, you demonstrate you’re not above learning from others. Juan, a successful entrepreneur in the healthy beverage space, approaches life and grows his business with intellectual humility. He’s a deeply curious professional who seeks feedback and perspectives from customers, employees, advisers, and investors. Juan’s ongoing openness to learning led him to adapt faster to market changes in his beverage category: He quickly identifies shifting customer preferences as well as competitive threats, then rapidly tweaks his product offerings to keep competitors at bay. He has the humility to realize he doesn’t have all the answers and embraces listening to key voices that help make his business even more successful. ... Humility isn’t about diminishing oneself. It’s about having a balanced perspective about yourself while showing genuine respect and appreciation for others. 


AI took a huge leap in IQ, and now a quarter of Gen Z thinks AI is conscious

If you came of age during a pandemic when most conversations were mediated through screens, an AI companion probably doesn't feel very different from a Zoom class. So it’s maybe not a shock that, according to EduBirdie, nearly 70% of Gen Zers say “please” and “thank you” when talking to AI. Two-thirds of them use AI regularly for work communication, and 40% use it to write emails. A quarter use it to finesse awkward Slack replies, with nearly 20% sharing sensitive workplace information, such as contracts and colleagues’ personal details. Many of those surveyed rely on AI for various social situations, ranging from asking for days off to simply saying no. One in eight already talk to AI about workplace drama, and one in six have used AI as a therapist. ... But intelligence is not the same thing as consciousness. IQ scores don’t mean self-awareness. You can score a perfect 160 on a logic test and still be a toaster, if your circuits are wired that way. AI can only think in the sense that it can solve problems using programmed reasoning. You might say that I'm no different, just with meat, not circuits. But that would hurt my feelings, something you don't have to worry about with any current AI product. Maybe that will change someday, even someday soon. I doubt it, but I'm open to being proven wrong. 


How AI-driven development tools impact software observability

While AI routines have proven quite effective at taking real user monitoring traffic, generating a suite of possible tests and synthetic test data, and automating test runs on each pull request, any such system still requires humans who understand the intended business outcomes to use observability and regression testing tools to look for unintended consequences of change. “So the system just doesn’t behave well,” Puranik said. “So you fix it up with some prompt engineering. Or maybe you try a new model, to see if it improves things. But in the course of fixing that problem, you did not regress something that was already working. That’s the very nature of working with these AI systems right now — fixing one thing can often screw up something else where you didn’t know to look for it.” ... Even when developing with AI tools, added Hao Yang, head of AI at Splunk, “we’ve always relied on human gatekeepers to ensure performance. Now, with agentic AI, teams are finally automating some tasks, and taking the human out of the loop. But it’s not like engineers don’t care. They still need to monitor more, and know what an anomaly is, and the AI needs to give humans the ability to take back control. It will put security and observability back at the top of the list of critical features.”


The Future of Database Administration: Embracing AI, Cloud, and Automation

The role of the DBA has traditionally been one of storage management, backup, and performance fault resolution. Now, DBAs have no choice but to be involved in strategic initiatives, since much of their routine work has been automated. For the last five years, organizations with structured workload management and automation frameworks in place have reported about 47% less time spent on routine maintenance. ... Enterprises are using multiple cloud platforms, making it necessary for DBAs to manage data consistency, security, and performance across varied environments. Consistent deployment processes and infrastructure-as-code (IaC) tools have eliminated many configuration errors, thus improving security. Also, the rising demand for edge computing has driven the need for distributed database architectures. Such solutions allow organizations to process data near the source itself, which curtails latency in real-time decision-making for sectors such as healthcare and manufacturing. ... The future of database administration lies in self-managing, AI-driven databases. These intelligent systems optimize performance, enforce security policies, and carry out upgrades autonomously, reducing administrative burdens. Serverless databases, automatic scaling, and pay-per-query models are increasingly popular, giving organizations the chance to optimize costs while ensuring efficiency. 


Introduction to Apache Kylin

Apache Kylin is an open-source OLAP engine built to bring sub-second query performance to massive datasets. Originally developed by eBay and later donated to the Apache Software Foundation, Kylin has grown into a widely adopted tool for big data analytics, particularly in environments dealing with trillions of records across complex pipelines. ... Another strength is Kylin’s unified big data warehouse architecture. It integrates natively with the Hadoop ecosystem and data lake platforms, making it a solid fit for organizations already invested in distributed storage. For visualization and business reporting, Kylin integrates seamlessly with tools like Tableau, Superset, and Power BI. It exposes query interfaces that allow us to explore data without needing to understand the underlying complexity. ... At the heart of Kylin is its data model, which is built using star or snowflake schemas to define the relationships between the underlying data tables. In this structure, we define dimensions, which are the perspectives or categories we want to analyze (like region, product, or time). Alongside them are measures, which are aggregated numerical values such as total sales or average price. ... To achieve its speed, Kylin heavily relies on pre-computation. It builds indexes (also known as CUBEs) that aggregate data ahead of time based on the model dimensions and measures. 
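
To illustrate the pre-computation idea (not Kylin’s actual API), here is a toy pandas sketch: the aggregate over the model’s dimensions is built once, and queries then hit the small pre-aggregated table instead of scanning raw rows.

```python
import pandas as pd

# Toy fact table: one row per sale.
sales = pd.DataFrame({
    "region":  ["EU", "EU", "US", "US", "EU"],
    "product": ["a", "b", "a", "a", "a"],
    "amount":  [10.0, 20.0, 15.0, 5.0, 30.0],
})

# Pre-compute aggregates once, the way Kylin builds a CUBE index over the
# model's dimensions (region, product) and measures (sum, mean of amount).
cube = sales.groupby(["region", "product"])["amount"].agg(["sum", "mean"])

# Queries now hit the small pre-aggregated table, not the raw fact table.
print(cube.loc[("EU", "a")])  # fast lookup, no full scan
```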

Daily Tech Digest - April 11, 2025


Quote for the day:

"Efficiency is doing the thing right. Effectiveness is doing the right thing." -- Peter F. Drucker


Legacy to Cloud: Accelerate Modernization via Containers

What could be better than a solution that lets you run applications across environments without dependency constraints? That’s where containers come in. They accelerate your modernization journey. The containerization of legacy applications liberates them from the rusty old VMs and servers that limit the scalability and agility of applications. Containerization offers benefits including agility, portability, resource efficiency, scalability and security. ... Migrating legacy applications to containers is not a piece of cake. It requires careful planning and execution. Unlike cloud native applications, which are built for containers and Kubernetes, legacy applications were not designed with containerization in mind. The process demands significant time and expertise, and organizations often struggle at the very first step. Legacy monoliths, with their tightly coupled components and complex dependencies, require particularly extensive Dockerfiles. Writing Dockerfiles for legacy monoliths is complex and error-prone, often becoming a significant bottleneck in the modernization journey. ... The challenge intensifies when documentation is outdated or missing, turning what should be a modernization effort into a resource-draining archaeological expedition through layers of technical debt.


Four paradoxes of software development

No one knows how long the job will take, but the customer demands a completion date. This, frankly, is probably the biggest challenge that software development organizations face. We simply can’t be certain how long any project will take. Sure, we can estimate, but we are almost always wildly off. Sometimes we drastically overestimate the time required, but usually we drastically underestimate it. For our customers, this is both a mystery and a huge pain. ... Adding developers to a late project makes it later. Known as Brooks’s Law, this rule may be the strangest of the paradoxes to the casual observer. Normally, if you realize that you aren’t going to make the deadline for filing your monthly quota of filling toothpaste tubes, you can put more toothpaste tube fillers on the job and make the date. If you want to double the number of houses that you build in a given year, you can usually double the inputs—labor and materials—and get twice as many houses, give or take a few. ... The better you get at coding, the less coding you do. It takes many years to gain experience as a software developer. Learning the right way to code, the right way to design, and all of the rules and subtleties of writing clean, maintainable software doesn’t happen overnight. ... Software development platforms and tools keep getting better, but software takes just as long to develop and run.


Drones are the future of cybercrime

The rapid evolution of consumer drone technology is reshaping its potential uses in many ways, including its application in cyberattacks. Modern consumer drones are quieter, faster, and equipped with longer battery life, enabling them to operate further from their operators. They can autonomously navigate obstacles, track moving objects, and capture high-resolution imagery or video. ... And there are so many other uses for drones in cyberattacks: Network sniffing and spoofing: Drones can be equipped with small, modifiable computers such as a Raspberry Pi to sniff out information about Wi-Fi networks, including MAC addresses and SSIDs. The drone can then mimic a known Wi-Fi network, and if unsuspecting individuals or devices connect to it, hackers can intercept sensitive information such as login credentials. Denial-of-service attacks: Drones can carry devices to perform local de-authentication attacks, disrupting communications between a user and a Wi-Fi access point. They can also carry jamming devices to disrupt Wi-Fi or other wireless communications. Physical surveillance: Drones equipped with high-quality cameras can be used for physical surveillance to observe shift changes, gather information on security protocols, and plan both physical and cyberattacks by identifying potential entry points or vulnerabilities. 


From Silos to Strategy: Why Holistic Data Management Drives GenAI Success

While data distribution is essential to mitigate risks, it requires a unified approach to be effective. Many enterprises are recognizing the value of implementing unified data architectures that simplify storage and data management and centralize the management of diverse data platforms. These architectures, combined with intelligent data platforms, enable seamless access and analysis of data, making it easier to support analytics and ingestion by generative AI. IT managers can further enhance a system’s data analysis and network security, and introduce a hybrid cloud experience to simplify data management. Today, the tech industry is focused on streamlining how enterprises manage and optimize storage, data, and workloads, and a platform-based approach to hybrid cloud management is critical for managing IT across on-premises, colocation, and public cloud environments. Innovations like unified control planes and software-defined storage solutions are being utilized to enable seamless data and application mobility. These solutions allow enterprises to move data and applications across hybrid and multi-cloud environments to optimize performance, cost, and resiliency. By simplifying cloud data management, enterprises can efficiently manage and protect globally dispersed storage environments without over-emphasizing resilience at the expense of overall system optimization.


Why remote work is a security minefield (and what you can do about it)

The remote work environment makes employees more vulnerable to phishing and social engineering attacks, as they are isolated and may find it harder to verify suspicious activities. Working from home can create a sense of comfort that leads to relaxation, making employees more prone to risky security behavior. The isolation associated with remote work can also result in impulsive decisions, increasing the likelihood of mistakes. Cybercriminals exploit this by tailoring social engineering attacks to mimic IT staff or colleagues, taking advantage of the lack of direct verification. ... To address these challenges, organizations must prioritize a security-first culture. By prioritizing cybersecurity at every level, from executives to remote workers, organizations can reduce their vulnerability to cyber threats. Additionally, companies can foster peer support networks where employees can share security tips and collaborate on solutions. Another problem that can arise with remote work is privacy. Some companies monitor employee activity to protect their data and ensure compliance with regulations. Monitoring helps detect suspicious behavior and mitigate cyber threats, but it can raise privacy concerns, especially when it involves intrusive methods like tracking keystrokes or taking periodic screenshots. To find a good balance, companies should be upfront about what they’re monitoring and why. 


Inside a Cyberattack: How Hackers Steal Data

Once a hacker breaches the perimeter, the standard practice is to beachhead (dig down) and then move laterally to find the organization’s crown jewels: their most valuable data. Within a financial or banking organization, it is likely there is a database on their server that contains sensitive customer information. A database is essentially a complicated spreadsheet, wherein a hacker can simply run a SELECT and copy everything. In this instance, data security is essential; many organizations, however, confuse data security with cybersecurity. Organizations often rely on encryption to protect sensitive data, but encryption alone isn’t enough if the decryption keys are poorly managed. If an attacker gains access to the decryption key, they can instantly decrypt the data, rendering the encryption useless. Many organizations also mistakenly believe that encryption protects against all forms of data exposure, but weak key management, improper implementation, or side-channel attacks can still lead to compromise. To truly safeguard data, businesses must combine strong encryption with secure key management, access controls, and techniques such as tokenization or format-preserving encryption to minimize the impact of a breach. A database protected by privacy enhancing technologies (PETs), such as tokenization, becomes unreadable to hackers if the decryption key is stored offsite. 
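
A minimal sketch of the tokenization idea, with a plain dictionary standing in for a real token vault that would live on separate, tightly controlled ("offsite") infrastructure:

```python
import secrets

# Stand-in for a token vault on separate infrastructure.
vault = {}

def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_hex(8)  # random token carries no information
    vault[token] = value                   # the real mapping stays in the vault
    return token

def detokenize(token: str) -> str:
    return vault[token]                    # only callers with vault access succeed

# The application database stores tokens, not card numbers (test number shown).
record = {"customer": "c-102", "card": tokenize("4111 1111 1111 1111")}
print(record)                        # useless to an attacker who copies the database
print(detokenize(record["card"]))    # recoverable only via the vault
```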


You’re always a target, so it pays to review your cybersecurity insurance

Right now, either someone has identified your firm and your weak spots and begun a campaign of targeted phishing attacks, scam links, or credential harvesting, or they are blindly trying to use any number of known vulnerabilities on the web to crack into remote access and web properties. ... Reviewing my compliance with cyber insurance policies was a great exercise in self-assessing just how thorough my base security is, but it also revealed an important fact: that insurance requirements only scratch the surface of the types of discussions you should be having internally regarding your risks of attack. No matter if you feel you are merely at risk of being accidental roadkill on the information superhighway or are actually in the crosshairs of a malicious attacker, always review the risks not only with your cyber insurance carrier in mind, but also with what the attackers are planning. ... During the annual renewal of cyber insurance, the insurance carrier would not even consider insuring my business if we did not demonstrate that we had some fundamental protections in place. Based on the questions and bullet points, you could tell they saw the remote access, third-party vendor access, and network administrator accounts as weak points that needed additional protection.


9 steps to take to prepare for a quantum future

To get ahead of the quantum cryptography threat, companies should immediately start assessing their environment. “What we’re advising clients to do – and working on with clients today – is first go and inventory your encryption algorithms and know what you’re using,” says Saylors. That can be tricky, he adds. ... Because of the complexity of the tasks, ISG’s Saylors suggests that enterprises prioritize their efforts. The first step, he says, is to look at perimeter security. The second step is to look at the encryption around the most critical assets. And the third step is to look at the encryption around data backups. All of this needs to happen as soon as possible. In fact, according to Gartner, enterprises should have created a cryptography database by the end of 2024. Companies should have created cryptography policies and planned their transition to post-quantum encryption by the end of 2024, the research firm says. ... So everything will have to be carefully tested and some cryptographic processes may need to be rearchitected. But the bigger problem is that the new algorithms might themselves be deprecated as technology continues to evolve. Instead, Horvath and other experts recommend that enterprises pursue quantum agility. If any cryptography is hard-coded into processes, it needs to be separated out. “Make it so that any cryptography can work in there,” he says. 
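
A minimal sketch of what crypto agility can look like at the code level: route every cryptographic call through one registry so the active algorithm becomes a policy switch rather than a code change. MACs stand in here for whatever primitive is being made swappable; a post-quantum scheme would simply be another registry entry.

```python
import hashlib, hmac, secrets

_KEY = secrets.token_bytes(32)

# Registry of interchangeable algorithms; a PQC entry could be added later
# without touching any call site.
ALGORITHMS = {
    "hmac-sha256": lambda data: hmac.new(_KEY, data, hashlib.sha256).digest(),
    "hmac-sha3-256": lambda data: hmac.new(_KEY, data, hashlib.sha3_256).digest(),
}

ACTIVE = "hmac-sha256"  # single switch, driven by policy rather than code

def sign(data: bytes) -> bytes:
    return ALGORITHMS[ACTIVE](data)

print(sign(b"wire transfer #9913").hex())
```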


Why neurodivergent perspectives are essential in AI development

Experts in academia, civil society, industry, media, and government discussed and debated the latest developments in AI safety and ethics, but representation of neurodivergent perspectives in AI development wasn’t examined. This is a huge oversight especially considering 70 million people in the US alone learn and think differently, including many in tech. Technology should be built for and serve all, so how do we make sure future AI models are accessible and unbiased if neurodivergent representation isn’t considered? It all starts at the development stage. ... A neurodivergent team also makes it easier to explore a wider range of use cases and the risks associated with applications. When you engage neurodivergent people at the development stage, you create a team that understands and prioritizes diverse ways of thinking, learning, and working. And that benefits all users. ... New data from EY found that 85% of neurodivergent employees think gen AI creates a more inclusive workplace, so it’s incumbent on more companies to level the playing field by casting a wider net to include a broader range of employees and tools needed to thrive and generate more accurate and robust datasets. Gen AI can also go a long way to help neurodivergent workers with simple tasks like productivity, quality assurance, and time management. 


Your data's probably not ready for AI - here's how to make it trustworthy

"AI and gen AI are raising the bar for quality data," according to a recent analysis published by Ashish Verma, chief data and analytics officer at Deloitte US, and a team of co-authors. "GenAI strategies may struggle without a clear data architecture that cuts across types and modalities, accounting for data diversity and bias and refactoring data for probabilistic systems," the team stated. ... "Creating a data environment with robust data governance, data lineage, and transparent privacy regulations helps ensure the ethical use of AI within the parameters of a brand promise," said Clayton. Building a foundation of trust helps prevent AI from going rogue, which can easily lead to uneven customer experiences." Across the industry, concern is mounting over data readiness for AI. "Data quality is a perennial issue that businesses have faced for decades," said Gordon Robinson, senior director of data management at SAS. There are two essential questions on data environments for businesses to consider before starting an AI program, he added. First, "Do you understand what data you have, the quality of the data, and whether it is trustworthy or not?" Second, "Do you have the right skills and tools available to you to prepare your data for AI?"


Daily Tech Digest - March 03, 2025


Quote for the day:

“If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work.” -- Thomas J. Watson




How to Create a Winning AI Strategy

“A winning AI strategy starts with a clear vision of what problems you’re solving and why,” says Surace. “It aligns AI initiatives with business goals, ensuring every project delivers measurable value. And it builds in agility, allowing the organization to adapt as technology and market conditions evolve.” ... AI is also not a solution to all problems. Like any other technology, it’s simply a tool that needs to be understood and managed. “Proper AI strategy adoption will require iteration, experimentation, and, inevitably, failure to end up at real solutions that move the needle. This is a process that will require a lot of patience,” says Lionbridge’s Rowlands-Rees. “[E]veryone in the organization needs to understand and buy in to the fact that AI is not just a passing fad -- it’s the modern approach to running a business. The companies that don’t embrace AI in some capacity will not be around in the future to prove everyone else wrong.” Organizations face several challenges when implementing AI strategies. For example, regulatory uncertainty is a significant hurdle and navigating the complex and evolving landscape of AI regulations across different jurisdictions can be daunting. ... “There’s a gap between AI’s theoretical potential and its practical business application. Companies invest millions in AI initiatives that prioritize speed to market over actual utility,” Palmer says.


Work-Life Balance: A Practitioner Viewpoint

Organisation policymakers must ensure well-funded preventive health screening at all levels so those with identified health risks can be advised and guided suitably on their career choices. They can be helped to step back on their career accelerators, and their needs can be accommodated in the best possible manner. This requires a mature HR policy-making and implementation framework where identifying problems and issues does not negatively impact the employees' careers. Deploying programs that help employees identify and overcome stress issues will be beneficial. A considerable risk for individuals is adopting negative means like alcohol, tobacco, or even retreating into a shell to address their stress issues, and that can take an enormous toll on their well-being. Kindling a purposeful passion alongside work is yet another strategy. In today's world, an urgent task assignment is always just a phone call away. A purposeful passion keeps one engaged alongside work; it has its own purpose, and one can fall back on it to stay grounded and draw inspiration. Purposeful passion can include things such as acquiring a new skill in a sport, learning to play a musical instrument, learning a new dance form, playing with kids, spending quality time with family members in deliberate and planned ways, learning meditation, environmental protection, and working for other social causes.


The 8 new rules of IT leadership — and what they replace

The CIO domain was once confined to the IT department. But to be tightly partnered and co-lead with the business, CIOs must increasingly extend their expertise across all departments. “In the past they weren’t as open to moving out of their zone. But the role is becoming more fluid. It’s crossing product, engineering, and into the business,” says Erik Brown, an AI and innovation leader in the technology and experience practice at digital services firm West Monroe. Brown compares this new CIO to startup executives, who have experience and knowledge across multiple functional areas, who may hold specific titles but lead teams made up of workers from various departments, and who will shape the actual strategy of the company. “The CIOs are not only seeing strategy, but they will inform it; they can shape where the business is moving, and then they can take that to their teams and help them brainstorm how to support that. And that helps build more impactful teams,” Brown says. He continues: “You look at successful leaders of today and they’re all going to have a blended background. CIOs are far broader in their understanding, and where they’re more shallow, they’ll surround themselves with deputies that have that depth. They’re not going to assume they’re an expert in everything. So they may have an engineering background, for example, and they’ll surround themselves with those who are more experienced in that.”


Managing AI APIs: Best Practices for Secure and Scalable AI API Consumption

Managing AI APIs presents unique challenges compared to traditional APIs. Unlike conventional APIs that primarily facilitate structured data exchange, AI APIs often require high computational resources, dynamic access control and contextual input filtering. Moreover, large language models (LLMs) introduce additional considerations such as prompt engineering, response validation and ethical constraints that demand a specialized API management strategy. To effectively manage AI APIs, organizations need specialized API management strategies that can address unique challenges such as model-specific rate limiting, dynamic request transformations, prompt handling, content moderation and seamless multi-model routing, ensuring secure, efficient and scalable AI consumption. ... As organizations integrate multiple external AI providers, egress AI API management ensures structured, secure and optimized consumption of third-party AI services. This includes governing AI usage, enhancing security, optimizing cost and standardizing AI interactions across multiple providers. Below are some best practices for exposing AI APIs via egress gateways: Optimize Model Selection: Dynamically route requests to AI models based on cost, latency or regulatory constraints. 
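
A minimal sketch of the “optimize model selection” practice named above: pick the cheapest model that satisfies a request’s latency and residency constraints. The catalog, prices, and latencies are made-up illustrations, not real provider data.

```python
# Hypothetical model catalog; costs and latencies are invented numbers.
MODELS = [
    {"name": "provider-a/large", "cost_per_1k": 0.0300, "p50_ms": 900, "eu_resident": True},
    {"name": "provider-b/small", "cost_per_1k": 0.0004, "p50_ms": 200, "eu_resident": False},
    {"name": "provider-c/mid",   "cost_per_1k": 0.0020, "p50_ms": 400, "eu_resident": True},
]

def route(max_latency_ms: int, require_eu: bool) -> str:
    """Cheapest model satisfying the request's latency and residency constraints."""
    candidates = [
        m for m in MODELS
        if m["p50_ms"] <= max_latency_ms and (m["eu_resident"] or not require_eu)
    ]
    if not candidates:
        raise LookupError("no model satisfies the constraints")
    return min(candidates, key=lambda m: m["cost_per_1k"])["name"]

print(route(max_latency_ms=500, require_eu=True))   # provider-c/mid
print(route(max_latency_ms=500, require_eu=False))  # provider-b/small
```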


Charting the AI-fuelled evolution of embedded analytics

First of all, the technical requirements are high. To fit today’s suite of business tools, embedded analytics have to be extremely fast, lightweight, and very scalable, otherwise they risk dragging down the performance of the entire app. “As development and the web moves to single-page apps using frameworks like Angular and React, it becomes more and more critical that the embedded objects are lightweight, efficient, and scalable. In terms of embedded implementations for the developer, that’s probably one of the biggest things to look out for,” advises Perez. On top of that, there’s security, which is “another gigantic problem and headache for everybody,” observes Perez. “Usually, the user logs into the hosting app and then they need to query data relevant to them, and that involves a security layer.” Balancing the need for fast access to relevant data against the needs for compliance with data privacy regulations and security for your own proprietary information can be a complex juggling act. ... Additionally, the main benefit of embedded analytics is that it makes insights easily accessible to line-of-business users. “It should be very easy to use, with no prior training requirements, it should accept and understand all kinds of requests, and more importantly, it needs to seamlessly work on the company’s internal data,” says Perez.


The Ransomware Payment Ban – Will It Work?

A complete, although targeted, ban on ransom payments for public sector organisations is intended to remove cybercriminals’ financial motivation. However, without adequate investment in resilience, these organisations may be unable to recover as quickly as they need to, putting essential services at risk. Many NHS healthcare providers and local councils are already dealing with outdated infrastructure and cybersecurity staff shortages. If they are expected to withstand ransomware attacks without the option of paying, they must be given the resources, funding, and support to defend themselves and recover effectively. A payment ban may disrupt criminal operations in the short term. However, it doesn’t address the root of the issue – the attacks will persist, and vulnerable systems remain an open door. Cybercriminals are adaptive. If one revenue stream is blocked, they’ll find other ways to exploit weaknesses, whether through data theft, extortion, or targeting less-regulated entities. The requirement for private organisations to report payment intentions before proceeding aims to help authorities track ransomware trends. However, this approach risks delaying essential decisions in high-pressure situations. During a ransomware crisis, decisions must often be made in hours, if not minutes. Adding bureaucratic hurdles to these critical moments could exacerbate operational chaos.


The Modern CIO: Architect of the Intelligent Enterprise

Moving forward, traditional technology-driven CIOs will likely continue to lose leadership influence and C-suite presence as more strategic, business-focused CxOs move in. “There is a growing divergence. And the CIO that plays more of a modern CTO role will not have a seat at the table,” Clydesdale-Cotter said. This increased business focus demands that CIOs not only have a broad and deep technical understanding of how new technologies affect their company’s relationship with the broader market and how the business operates, but also command fluency in their business’s vertical markets, along with accountability not just for the ROI on digital initiatives but for the broader success of the business as well. There’s probably no technology having a more significant impact today than AI adoption. ... The maturation of generative AI is moving CIOs from managing pilot deployments to enterprise-scale initiatives. Starting this year, analysts expect about half of CIOs to increasingly prioritize fostering data-centric cultures, ensuring clean, accessible datasets to train their AI models. However, challenges persist: a 2024 Deloitte survey found that 59% of employees resist AI adoption due to job security fears, requiring CIOs to lead change management programs that emphasize upskilling.


7 Steps to Building a Smart, High-Performing Team

Hiring is just the beginning — training is where the real magic happens. One of the biggest mistakes I see business owners make is throwing new hires into the deep end without proper onboarding. ... A strong team is built on clarity. Employees should know exactly what is expected of them from day one. Clear role definitions, performance benchmarks and a structured feedback system help employees stay aligned with company goals. Peter Drucker, often called the father of modern management, once said, "What gets measured gets managed." Establishing key performance indicators (KPIs) ensures that every team member understands how their work contributes to the company's broader objectives. ... Just like in soccer, some players will need a yellow card — a warning that performance needs to improve. The best teams address underperformance before it becomes a chronic issue. A well-structured performance review system, including monthly check-ins and real-time feedback, helps keep employees on track. A study from MIT Sloan Management Review found that teams that receive continuous feedback perform 22% better than those with annual-only reviews. If an employee continues to underperform despite clear feedback and support, it may be time for the red card — letting them go. 


How eBPF is changing container networking

eBPF is revolutionary because it works at the kernel level. Even though containers on the same host have their own isolated view of user space, says Rice, all containers and the host share the same kernel. Applying networking, observability, or security features here makes them instantly available to all containerized applications with little overhead. “A container doesn’t even need to be restarted, or reconfigured, for eBPF-based tools to take effect,” says Rice. Because eBPF operates at the kernel level to implement network policies and operations such as packet routing, filtering, and load balancing, it’s better positioned than other cloud-native networking technologies that work in the user space, says IDC’s Singh. ... “eBPF comes with overhead and complexity that should not be overlooked, such as kernel requirements, which often require newer kernels, additional privileges to run the eBPF programs, and difficulty debugging and troubleshooting when things go wrong,” says Sun. A limited pool of eBPF expertise is available for such troubleshooting, adding to the hesitation. “It is reasonable for service mesh projects to continue using and recommending iptables rules,” she says. Meta’s use of Cilium netkit across millions of containers shows eBPF’s growing usage and utility.
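
A minimal sketch of that kernel-level vantage point, using the bcc toolkit’s Python bindings (assuming Linux, root privileges, and bcc installed); it mirrors bcc’s classic TCP-connect tracing example. Because every container on the host shares this kernel, one probe observes outbound connects from all of them, with no per-container agent, restart, or reconfiguration.

```python
# Requires Linux, root, and the bcc toolkit (https://github.com/iovisor/bcc).
from bcc import BPF

# Attach a kprobe to the kernel's tcp_v4_connect and log the calling PID.
program = r"""
#include <uapi/linux/ptrace.h>
#include <net/sock.h>

int kprobe__tcp_v4_connect(struct pt_regs *ctx, struct sock *sk) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    bpf_trace_printk("tcp_v4_connect by pid %d\n", pid);
    return 0;
}
"""

b = BPF(text=program)
b.trace_print()  # stream connect events from the whole host until interrupted
```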


If Architectural Experimentation Is So Great, Why Aren’t You Doing It?

Architectural experimentation is important for two reasons: For functional requirements, MVPs are essential to confirm that you understand what customers really need. Architectural experiments do the same for technical decisions that support the MVP; they confirm that you understand how to satisfy the quality attribute requirements for the MVP. Architectural experiments are also important because they help to reduce the cost of the system over time. This has two parts: you will reduce the cost of developing the system by finding better solutions, earlier, and by not going down technology paths that won’t yield the results you want. Experimentation also pays for itself by reducing the cost of maintaining the system over time by finding more robust solutions. Ultimately, running experiments is about saving money: reducing the cost of development by spending less on developing solutions that won’t work or that will cost too much to support. You can’t run experiments on every architectural decision and eliminate the cost of all unexpected changes, but you can run experiments to reduce the risk of being wrong about the most critical decisions. While stakeholders may not understand the technical aspects of your experiments, they can understand the monetary value.


Daily Tech Digest - March 02, 2025


Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr


Weak cyber defenses are exposing critical infrastructure — how enterprises can proactively thwart cunning attackers to protect us all

Weak cybersecurity isn’t merely a corporate issue — it’s a national security risk. The 2021 Colonial Pipeline attack disrupted energy supplies and exposed vulnerabilities in critical industries. Rising geopolitical tensions, especially with China, amplify these risks. Recent breaches attributed to state-sponsored actors have exploited outdated telecommunications equipment and other legacy systems, revealing how complacency in updating technology endangers national security. For instance, last year’s hack of U.S. and international telecommunications companies exposed phone lines used by top officials and compromised data from systems used to handle surveillance requests. Weak cybersecurity at these companies carries long-term costs, allowing state-sponsored actors to access sensitive information, influence political decisions, and disrupt intelligence efforts. ... No company can face today’s cyber threats on its own. Collaboration between private businesses and government agencies is more than helpful — it’s imperative. Sharing threat intelligence in real time allows organizations to respond faster and stay ahead of emerging risks. Public-private partnerships can also level the playing field by offering smaller companies access to resources like funding and advanced security tools they might not otherwise afford.


Evaluating the CISO

Delegation skills are a key component that should be evaluated separately in this area. Effective delegation is essential to prevent the CISO from becoming a bottleneck, as micromanagement is unsuitable for the role. Delegating complex tasks not only lightens the CISO’s load but also helps foster the team’s overall competence. Without strong delegation skills, CISOs cannot rate themselves highly in their relationship with the internal security team. ... A CISO is hired to lead, manage, and support specific projects or programs such as migrating to a cloud or hybrid infrastructure, implementing zero-trust principles, launching security awareness initiatives, or assessing risks and creating a roadmap for post-quantum cryptography implementation. The success of these initiatives ultimately falls under the CISO’s responsibility. To execute these programs effectively, the CISO relies heavily on their team and internal organizational peers. As such, building strong relationships with both is essential for successfully delivering projects. ... A CISO must have responsibility for the information security budget, which includes funding for the team, tools, and services. Without direct control over the budget, it becomes challenging to rate the relationship with management highly, as budget ownership is a critical aspect of the CISO’s role.


Unraveling Large Language Model Hallucinations

You might have seen model hallucinations: instances where LLMs generate incorrect, misleading, or entirely fabricated information that appears plausible. These hallucinations happen because LLMs do not “know” facts in the way humans do; instead, they predict words based on patterns in their training data. ... Supervised Fine-Tuning makes the model capable. However, even a well-trained model can generate misleading, biased, or unhelpful responses. Therefore, Reinforcement Learning with Human Feedback is required to align it with human expectations. We start with the assistant model, trained by SFT. For a given prompt we generate multiple model outputs, and human labelers rank or score those outputs based on quality, safety, and alignment with human preferences. We use this data to train a separate neural network that we call a reward model. The reward model imitates human scores; it is a simulator of human preferences. It probably has a transformer architecture, but it is not a language model in the sense of generating diverse language. It is just a scoring model.
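As a rough illustration of such a scoring model, here is a toy pairwise-ranking training loop in PyTorch. Random feature vectors stand in for the model's internal representation of a prompt and response so the sketch runs self-contained; a real reward model scores tokenized text with a transformer backbone.

```python
# Toy reward-model training sketch: learn to score the labeler-preferred
# response above the rejected one. Features are random stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Maps a response representation to a single scalar score."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one score per response

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    # Dummy batch: features of a preferred ("chosen") and a rejected
    # response to the same prompt, as ranked by human labelers.
    chosen, rejected = torch.randn(32, 128), torch.randn(32, 128)
    # Bradley-Terry pairwise loss: push score(chosen) above score(rejected).
    loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, this scorer replaces the human in the loop: the policy model is then optimized to produce outputs the reward model rates highly.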


How to Communicate the Business Value of Master Data Management

In an ideal scenario, MDM is integral to a broader D&A strategy, highlighting how D&A supports the organization's strategic goals. The strategy aligns with these goals, prioritizes the business outcomes it will support, and details what is needed to achieve them. Therefore, leaders must first understand and prioritize the explicit business outcomes that MDM will support before creating an MDM strategy. In other words, "improving decision-making" is not good enough. "Increase customer service levels by 5% by end of December 2025" is the level of detail required. D&A leaders may recognize that master data is causing a problem or limiting an opportunity, which is where they would rely on an MDM program. If so, those leaders should ask questions that help identify the problem, the relevant KPIs, and the key stakeholders; the answers point to potential business outcomes that MDM could support. Figure 1 provides a worksheet to build this initial picture and facilitate stakeholder discussions. The worksheet maps high-level goals onto a run-grow-transform framework, which could also be represented by three columns for the primary business value drivers: risk, revenue, and cost.


4 ways to get your business ready for the agentic AI revolution

Agents could be used eventually, but only once a partnership approach identifies the right opportunities. "Agents are becoming a big part of how generative AI and machine learning are used in business today. The way agents will be used in travel will be fascinating to watch. I think this technology will certainly be a part of the mix," he said. "The process for Hyatt will be to find the right technologies -- and we'll do that in close partnership with our business leaders and the technology teams that run the applications. We'll then provide the AI services to drive those transitions for the business." ... Keith Woolley, chief digital and information officer at the University of Bristol, is another digital leader who sees the potential benefits of agents. However, he said these advantages will become manifest over the longer term. "We are looking at agentic AI, but we're not implementing it yet," he said. "We sit as a management team and ask questions like, 'Should we do our admissions process using agentic AI? What would be the advantage?'" Woolley told ZDNET he could envision a situation in which AI and automation help assess and inform candidates worldwide about the status of their applications.


Cloud Giants Collaborate on New Kubernetes Resource Management Tool

The core innovation of kro is the introduction of the ResourceGraphDefinition custom resource. kro encapsulates a Kubernetes deployment and its dependencies into a single API, enabling custom end-user interfaces that expose only the parameters applicable to a non-platform engineer. This masking hides the complexity of API endpoints for Kubernetes and cloud providers that are not useful in a deployment context. ... kro also works with the cloud providers’ existing Kubernetes extensions for managing cloud resources: AWS Controllers for Kubernetes (ACK), Google’s Config Connector (KCC), and Azure Service Operator (ASO). kro enables standardised, reusable service templates that promote consistency across different projects and environments, with the benefit of being entirely Kubernetes-native. It is still in the early stages of development. “As an early-stage project, kro is not yet ready for production use, but we still encourage you to test it out in your own Kubernetes development environments,” the post states. ... Most significantly for the Crossplane community, Farcic questioned kro's purpose given its functional overlap with existing tools. “kro is serving more or less the same function as other tools created a while ago without any compelling improvement,” he observed.
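For a feel of the API's shape, the sketch below registers a minimal ResourceGraphDefinition using the official Kubernetes Python client (kro users would more typically apply the equivalent YAML with kubectl). The manifest mirrors the structure of kro's published examples, but the "WebApp" kind, its fields, and the ${...} substitutions are illustrative, and since the project is pre-production the schema may change.

```python
# Illustrative sketch: register a minimal kro ResourceGraphDefinition.
# Assumes a reachable cluster with kro installed; field names follow the
# shape of kro's public examples but are not authoritative.
from kubernetes import client, config

config.load_kube_config()

rgd = {
    "apiVersion": "kro.run/v1alpha1",
    "kind": "ResourceGraphDefinition",
    "metadata": {"name": "web-app"},
    "spec": {
        "schema": {
            "apiVersion": "v1alpha1",
            "kind": "WebApp",  # the only API a non-platform engineer sees
            "spec": {"name": "string", "replicas": "integer | default=1"},
        },
        "resources": [
            {
                "id": "deployment",
                "template": {  # underlying object kro manages (truncated for brevity)
                    "apiVersion": "apps/v1",
                    "kind": "Deployment",
                    "metadata": {"name": "${schema.spec.name}"},
                    "spec": {"replicas": "${schema.spec.replicas}"},
                },
            }
        ],
    },
}

client.CustomObjectsApi().create_cluster_custom_object(
    group="kro.run",
    version="v1alpha1",
    plural="resourcegraphdefinitions",
    body=rgd,
)
```

The point of the shape is the masking described above: end users create a two-field "WebApp" object, and kro expands it into the full Deployment (and any other dependencies) behind the scenes.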


Why a different approach to AIOps is needed for SD-WAN

AIOps tools enhance efficiency by integrating seamlessly with IT management tools, enabling proactive issue identification and streamlining operations. More than that, they optimize an organization’s network by improving the performance, efficiency, and dependability of its network resources to ensure an optimal user experience. On the infrastructure side, many organizations now rely on SD-WAN – software-defined wide area networking – to manage and optimize data traffic across different types of networks. SD-WAN is an effective way to connect the organization and provide users with application access, helping businesses improve network performance, cut costs, and stay flexible by connecting easily to various network types. ... AIOps tools use the information extracted from SD-WAN systems to resolve issues autonomously, without human intervention. Applying predictive analytics to this telemetry, they forecast future events and outcomes in network operations, making the whole system run more smoothly and reliably, while machine learning algorithms use the historical data to make predictions and proactively improve the performance of critical applications.
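As a toy version of that predictive loop, the sketch below fits scikit-learn's IsolationForest (an assumed stand-in for whatever model a vendor ships) on synthetic SD-WAN telemetry and flags a degraded link; the metrics, thresholds, and remediation step are all invented for the example.

```python
# Toy AIOps sketch: learn "normal" from historical SD-WAN telemetry,
# then flag anomalous live samples for automated remediation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic history: per-minute samples of [latency_ms, jitter_ms, loss_pct]
# as an SD-WAN controller might export them.
history = rng.normal(loc=[40.0, 3.0, 0.1], scale=[5.0, 1.0, 0.05], size=(10_000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

live = np.array([
    [42.0, 2.8, 0.08],   # healthy sample
    [180.0, 25.0, 4.0],  # badly degraded link
])
for sample, verdict in zip(live, model.predict(live)):
    if verdict == -1:  # scikit-learn's convention: -1 means anomaly
        print(f"anomaly {sample}: candidate for automated path failover")
```

In a real deployment the print statement would be a remediation action, such as steering traffic to a backup path, which is exactly the autonomous resolution the excerpt describes.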


AI-Driven Threat Detection and the Need for Precision

AI algorithms, particularly those based on machine learning, excel at sifting through massive datasets and identifying patterns that would be nearly impossible for us mere humans to spot. An AI system might analyze network traffic patterns to identify unusual data flows that could indicate a data exfiltration attempt. Alternatively, it could scan email attachments for malicious code that traditional antivirus software might miss. Ultimately, AI feeds on context and content. The effectiveness of these systems in protecting your security posture is inextricably linked to the quality of the data they are trained on and the precision of their algorithms. ... Finally, AI-driven threat detection will not eliminate the need for human expertise. Skilled security professionals should still oversee AI systems and make informed decisions based on their own contextual expertise and experience. Human oversight validates the AI’s findings, and threat detection algorithms cannot fully replace the critical thinking and intuition of human analysts. There may come a time when human professionals exist in AI’s shadow. Yet, at this time, combining the power of AI with human knowledge and a commitment to continuous learning can form the building blocks of a sophisticated defense program.
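A drastically simplified version of that data-flow check might look like the sketch below: baseline a host's outbound volume, then flag a spike far outside its own history. Real systems fuse many such signals; the numbers here are synthetic.

```python
# Minimal exfiltration heuristic: z-score of today's outbound volume
# against one host's own recent history. Numbers are invented.
import statistics

baseline_mb = [12, 15, 11, 14, 13, 12, 16, 14]  # recent daily outbound MB
today_mb = 540                                   # a sudden large transfer

mean = statistics.mean(baseline_mb)
stdev = statistics.stdev(baseline_mb)
z = (today_mb - mean) / stdev

if z > 3:  # classic three-sigma rule for "unusual"
    print(f"possible exfiltration: z = {z:.1f} vs a {mean:.1f} MB/day baseline")
```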


From Ambiguity to Accountability: Analyzing Recommender System Audits under the DSA

In these early years of the DSA, a range of stakeholders – online platforms, civil society, the European Commission (EC), and national Digital Services Coordinators (DSCs) – must experiment, identify good practices, and share lessons learned. Such iteration is important to ensure an adaptive DSA regime that spurs innovation and responds to shifting technologies, risks, and mitigation strategies. The need for iteration and flexibility, however, should not mean the audits fail to deliver on their potential as vehicles for transparency and accountability. The first round of independent audits of recommender systems reveals clear areas for immediate improvement. Because the core definitions and methodologies were developed independently by platforms and auditors, significant inconsistencies exist in both risk assessment and audit processes. ... The DSA requires the main parameters of recommender systems to be spelled out in plain and intelligible language. What does this concretely mean in the recommender system context? Is it free of “acronyms or complex/technical terminology” (Pinterest), “straightforward vocabulary and easy to perceive, understand, or interpret” (Snap), or “written for a general audience with varying technical skill levels, inclusive of all users” (TikTok)? There's a subtle difference in expectations associated with each framing. These terms don’t need to be defined in a vacuum.


Cybersecurity in retail: What does the future hold?

In the coming year, cybersecurity experts predict attackers will increasingly target Generative AI models used by retailers, creating significant potential for operational disruptions and data breaches. These AI systems, now critical to retail operations, are vulnerable to sophisticated attacks that could compromise customer service efficiency and expose critical business vulnerabilities. The core risk lies in the many ways attackers can exploit AI’s complex decision-making processes, turning what was once a technological advantage into a potential security liability. Retailers must recognise that their AI systems are not just technological tools, but potential entry points for cybercriminal activities. ... The complexity and distribution of digital ecosystems make them prime targets during high-demand periods. For example, as we have seen in the past, cyberattacks that hit supply chains can cause major delays and financial loss. These incidents underscore the vulnerabilities in supply chains during peak times of the year. In 2025, expect a rise in supply chain attacks during the holiday season, targeting ecommerce platforms and logistics providers, which could disrupt product availability and shipping.