
Daily Tech Digest - May 14, 2025


Quote for the day:

"Success is what happens after you have survived all of your mistakes." -- Anonymous


3 Stages of Building Self-Healing IT Systems With Multiagent AI

Multiagent AI systems can enable significant improvements to existing processes across the operations management lifecycle. From intelligent ticketing and triage to autonomous debugging and proactive infrastructure maintenance, these systems can pave the way for IT environments that are largely self-healing. ... When an incident is detected, AI agents can attempt to debug issues with known fixes using past incident information. When multiple agents are combined within a network, they can work out alternative solutions if the initial remediation effort doesn’t work, while communicating progress to engineers. Keeping a human in the loop (HITL) is vital to verifying the outputs of an AI model, but agents must be trusted to work autonomously within a system to identify fixes and then report these back to engineers. ... The most important step in creating a self-healing system is training AI agents to learn from each incident, as well as from each other, to become truly autonomous. For this to happen, AI agents cannot be siloed into incident response. Instead, they must be incorporated into an organization’s wider system, communicate with third-party agents and be allowed to draw correlations from each action taken to resolve each incident. In this way, each organization’s incident history becomes the training data for its AI agents, ensuring that the actions they take are organization-specific and relevant.
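
A minimal sketch of the remediation loop described above, assuming a toy incident record, a hypothetical known-fix table and an auto-approving human-in-the-loop gate; a real multiagent system would split these roles across separate agents and frameworks.

```python
# Hypothetical sketch of a self-healing loop: a triage step proposes fixes drawn
# from past incidents, a remediation step tries them in order, and a human
# approves before anything is applied. All names and data are illustrative.
from dataclasses import dataclass, field

@dataclass
class Incident:
    service: str
    symptom: str
    attempted_fixes: list = field(default_factory=list)

# "Incident history" standing in for the organization-specific training data.
KNOWN_FIXES = {
    ("checkout-api", "high latency"): ["restart pod", "scale replicas", "roll back deploy"],
}

def propose_fixes(incident: Incident) -> list:
    """Triage agent: rank candidate fixes learned from past incidents."""
    return KNOWN_FIXES.get((incident.service, incident.symptom), [])

def human_approves(fix: str) -> bool:
    """Human-in-the-loop gate; auto-approve here to keep the sketch short."""
    print(f"[HITL] approve fix '{fix}'? -> yes")
    return True

def apply_fix(fix: str) -> bool:
    """Remediation agent: pretend the first fix fails and the second succeeds."""
    return fix != "restart pod"

def self_heal(incident: Incident) -> None:
    for fix in propose_fixes(incident):
        if not human_approves(fix):
            continue
        incident.attempted_fixes.append(fix)
        if apply_fix(fix):
            print(f"Resolved {incident.service} via '{fix}'; logged for future agents.")
            return
    print("No known fix worked; escalating to engineers.")

self_heal(Incident("checkout-api", "high latency"))
```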


The three refactorings every developer needs most

If I had to rely on only one refactoring, it would be Extract Method, because it is the best weapon against creating a big ball of mud. The single best thing you can do for your code is to never let methods get bigger than 10 or 15 lines. The mess created when you have nested if statements with big chunks of code in between the curly braces is almost always ripe for extracting methods. One could even make the case that an if statement should have only a single method call within it. ... It’s a common motif that naming things is hard. It’s common because it is true. We all know it. We all struggle to name things well, and we all read legacy code with badly named variables, methods, and classes. Often, you name something and you know what the subtleties are, but the next person that comes along does not. Sometimes you name something, and it changes meaning as things develop. But let’s be honest, we are going too fast most of the time and as a result we name things badly. ... In other words, we pass a function result directly into another function as part of a boolean expression. This is… problematic. First, it’s hard to read. You have to stop and think about all the steps. Second, and more importantly, it is hard to debug. If you set a breakpoint on that line, it is hard to know where the code is going to go next.
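
A small illustration of two of the refactorings discussed, Extract Method and pulling a function result out of a boolean expression into a named variable; the order-shipping example is invented.

```python
# Before: a condition that buries a computation inside a boolean expression,
# which is hard to read and hard to break on in a debugger.
def can_ship(order):
    if order["paid"] and len([i for i in order["items"] if i["in_stock"]]) == len(order["items"]):
        return True
    return False

# After: Extract Method plus a named intermediate result. Each piece is small,
# named for what it means, and easy to inspect at a breakpoint.
def all_items_in_stock(order):
    return all(item["in_stock"] for item in order["items"])

def can_ship_refactored(order):
    items_available = all_items_in_stock(order)
    return order["paid"] and items_available

order = {"paid": True, "items": [{"in_stock": True}, {"in_stock": True}]}
print(can_ship(order), can_ship_refactored(order))  # True True
```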


ENISA launches EU Vulnerability Database to strengthen cybersecurity under NIS2 Directive, boost cyber resilience

The EU Vulnerability Database is publicly accessible and serves various stakeholders, including the general public seeking information on vulnerabilities affecting IT products and services, suppliers of network and information systems, and organizations that rely on those systems and services. ... To meet the requirements of the NIS2 Directive, ENISA initiated a cooperation with different EU and international organisations, including MITRE’s CVE Programme. ENISA is in contact with MITRE to understand the impact and next steps following the announcement of the funding to the Common Vulnerabilities and Exposures Program. CVE data, data provided by Information and Communication Technology (ICT) vendors disclosing vulnerability information through advisories, and relevant information, such as CISA’s Known Exploited Vulnerability Catalogue, are automatically transferred into the EU Vulnerability Database. This will also be achieved with the support of member states, who established national Coordinated Vulnerability Disclosure (CVD) policies and designated one of their CSIRTs as the coordinator, ultimately making the EUVD a trusted source for enhanced situational awareness in the EU. 


Welcome to the age of paranoia as deepfakes and scams abound

Welcome to the Age of Paranoia, when someone might ask you to send them an email while you’re mid-conversation on the phone, slide into your Instagram DMs to ensure the LinkedIn message you sent was really from you, or request you text a selfie with a time stamp, proving you are who you claim to be. Some colleagues say they even share code words with each other, so they have a way to ensure they’re not being misled if an encounter feels off. ... Ken Schumacher, founder of the recruitment verification service Ropes, says he’s worked with hiring managers who ask job candidates rapid-fire questions about the city where they claim to live on their résumé, such as their favorite coffee shops and places to hang out. If the applicant is actually based in that geographic region, Schumacher says, they should be able to respond quickly with accurate details. Another verification tactic some people use, Schumacher says, is what he calls the “phone camera trick.” If someone suspects the person they’re talking to over video chat is being deceitful, they can ask them to hold up their phone camera to show their laptop. The idea is to verify whether the individual may be running deepfake technology on their computer, obscuring their true identity or surroundings.


CEOs Sound Alarm: C-Suite Behind in AI Savviness

According to the survey, CEOs now see upskilling internal teams as the cornerstone of AI strategy. The top two limiting factors impacting AI's deployment and use, they said, are the inability to hire adequate numbers of skilled people and to calculate value or outcomes. "CEOs have shifted their view of AI from just a tool to a transformative way of working," said Jennifer Carter, senior principal analyst at Gartner. Contrary to the CEOs' assessments in the Gartner survey, most CIOs view themselves as the key drivers and leaders of their organizations' AI strategies. According to a recent report by CIO.com, 80% of CIOs said they are responsible for researching and evaluating AI products, positioning them as "central figures in their organizations' AI strategies." As CEOs increasingly prioritize AI, customer experience and digital transformation, these agenda items are directly shaping the evolving role and responsibilities of the CIO. But 66% of CEOs say their business models are not fit for AI purposes. Billions continue to be spent on enterprisewide AI use cases, but little has come in the way of returns. Gartner's forecast predicts a 76.4% surge in worldwide spending on gen AI in 2024, fueled by better foundational models and a global quest for AI-powered everything. But organizations have yet to see consistent results despite the surge in investment.


Dropping the SBOM, why software supply chains are too flaky

“Mounting software supply chain risk is driving organisations to take action. [There is a] 200% increase in organisations making software supply chain security a top priority and growing use of SBOMs,” said Josh Bressers, vice president of security at Anchore. ... “There’s a clear disconnect between security goals and real-world implementation. Since open source code is the backbone of today’s software supply chains, any weakness in dependencies or artifacts can create widespread risk. To effectively reduce these risks, security measures need to be built into the core of artifact management processes, ensuring constant and proactive protection,” said Douglas. If we take anything from these market analysis pieces, it may be true that organisations struggle to balance the demands of delivering software at speed while addressing security vulnerabilities to a level which is commensurate with the composable interconnectedness of modern cloud-native applications in the Kubernetes universe. ... Alan Carson, Cloudsmith’s CSO and co-founder, remarked, “Without visibility, you can’t control your software supply chain… and without control, there’s no security. When we speak to enterprises, security is high up on their list of most urgent priorities. But security doesn’t have to come at the cost of speed. ...”
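
To make the SBOM idea concrete, here is a minimal sketch that reads a CycloneDX-style JSON document and checks each component against a locally maintained list of vulnerable versions; the components and the vulnerable-package list are invented, and a real pipeline would query a vulnerability feed instead.

```python
# Sketch: parse an SBOM (CycloneDX-style JSON) and flag known-bad components.
import json

sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"type": "library", "name": "requests", "version": "2.19.0"},
    {"type": "library", "name": "urllib3", "version": "2.2.1"}
  ]
}
"""

VULNERABLE = {("requests", "2.19.0")}  # illustrative only

sbom = json.loads(sbom_json)
for component in sbom.get("components", []):
    key = (component["name"], component["version"])
    status = "VULNERABLE" if key in VULNERABLE else "ok"
    print(f'{component["name"]}=={component["version"]}: {status}')
```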


Does agentic AI spell doom for SaaS?

The reason agentic AI is perceived as a threat to SaaS and not traditional apps is that traditional apps have all but disappeared, replaced by on-demand versions of former client software. But it goes beyond that. AI is considered a potential threat to SaaS for several reasons, mostly because of how it changes who is in control and how software is used. Agentic AI changes how work gets done because agents act on behalf of users, performing tasks across software platforms. If users no longer need to open and use SaaS apps directly because the agents are doing it for them, those apps lose their engagement and perceived usefulness. That ultimately translates into lost revenue, since SaaS apps typically charge either per user or by usage. An advanced AI agent can automate the workflows of an entire department, which may be covered by multiple SaaS products. So instead of all those subscriptions, you just use an agent to do it all. That can lead to significant savings in software costs. On top of the cost savings are time savings. Jeremiah Stone, CTO with enterprise integration platform vendor SnapLogic, said agents have resulted in a 90% reduction in time for data entry and reporting into the company’s Salesforce system. 


Ask a CIO Recruiter: Where Is the ‘I’ in the Modern CIO Role?

First, there are obviously huge opportunities AI can provide the business, whether it’s cost optimization or efficiencies, so there is a lot of pressure from boards and sometimes CEOs themselves saying ‘what are we doing in AI?’ The second side is that there are significant opportunities for AI to enable the business in decision-making. The third leg is that AI is not fully leveraged today; it’s not in a very easy-to-use space. That is coming, and CIOs need to be able to prepare the organization for that change. CIOs need to prepare their teams, as well as business users, and say ‘hey, this is coming, we’ve already experimented with a few things. There are a lot of use cases applied in certain industries; how are we prepared for that?’ ... Just having that vision to see where technology is going and trying to stay ahead of it is important. Not necessarily chasing the shiny new toy, the new technology, but just being ahead of it is the most important skill set. Look around the corner and prepare the organization for the change that will come. Also, if you retrain some of the people, they have to be more analytical, more business-minded. Those are good skills. That’s not easy to find. A lot of people [who] move into the CIO role are very technical, whether it is coding or heavily on the infrastructure side. That is a commodity today; you need to be beyond that.


Insider risk management needs a human strategy

A technical-only response to insider risk can miss the mark; we need to understand the human side. That means paying attention to patterns, motivations, and culture. Over-monitoring without context can drive good people away and increase risk instead of reducing it. When it comes to workplace monitoring, clarity and openness matter. “Transparency starts with intentional communication,” said Itai Schwartz, CTO of MIND. That means being upfront with employees, not just that monitoring is happening, but what’s being monitored, why it matters, and how it helps protect both the company and its people. According to Schwartz, organizations often gain employee support when they clearly connect monitoring to security, rather than surveillance. “Employees deserve to know that monitoring is about securing data – not surveilling individuals,” he said. If people can see how it benefits them and the business, they’re more likely to support it. Being specific is key. Schwartz advises clearly outlining what kinds of activities, data, or systems are being watched, and explaining how alerts are triggered. ... Ethical monitoring also means drawing boundaries. Schwartz emphasized the importance of proportionality: collecting only what’s relevant and necessary. “Allow employees to understand how their behavior impacts risk, and use that information to guide, not punish,” he said.


Sharing Intelligence Beyond CTI Teams, Across Wider Functions and Departments

As companies’ digital footprints expand exponentially, so too do their attack surfaces. And since most phishing attacks can be carried out by even the least sophisticated hackers due to the prevalence of phishing kits sold in cybercrime forums, it has never been harder for security teams to plug all the holes, let alone other departments who might be undertaking online initiatives which leave them vulnerable. CTI, digital brand protection and other cyber risk initiatives shouldn’t only be utilized by security and cyber teams. Think about legal teams, looking to protect IP and brand identities, marketing teams looking to drive website traffic or demand generation campaigns. They might need to implement digital brand protection to safeguard their organization’s online presence against threats like phishing websites, spoofed domains, malicious mobile apps, social engineering, and malware. In fact, deepfakes targeting customers and employees now rank as the most frequently observed threat by banks, according to Accenture’s Cyber Threat Intelligence Research. For example, there have even been instances where hackers are tricking large language models into creating malware that can be used to hack customers’ passwords.

Daily Tech Digest - April 27, 2025


Quote for the day:

“Most new jobs won’t come from our biggest employers. They will come from our smallest. We’ve got to do everything we can to make entrepreneurial dreams a reality.” -- Ross Perot



7 key strategies for MLops success

Like many things in life, in order to successfully integrate and manage AI and ML into business operations, organisations first need to have a clear understanding of the foundations. The first fundamental of MLops today is understanding the differences between generative AI models and traditional ML models. Cost is another major differentiator. The calculations of generative AI models are more complex, resulting in higher latency, demand for more computing power, and higher operational expenses. Traditional models, on the other hand, often utilise pre-trained architectures or lightweight training processes, making them more affordable for many organisations. ... Creating scalable and efficient MLops architectures requires careful attention to components like embeddings, prompts, and vector stores. Fine-tuning models for specific languages, geographies, or use cases ensures tailored performance. An MLops architecture that supports fine-tuning is more complicated, and organisations should prioritise A/B testing across various building blocks to optimise outcomes and refine their solutions. Aligning model outcomes with business objectives is essential. Metrics like customer satisfaction and click-through rates can measure real-world impact, helping organisations understand whether their models are delivering meaningful results. 
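
As one illustration of A/B testing across building blocks, the sketch below assigns users to one of two hypothetical prompt templates and compares a stand-in click-through metric; the templates, split logic and success probabilities are all invented.

```python
# Illustrative A/B test of two prompt templates against a business metric.
import random
from collections import defaultdict

PROMPTS = {
    "A": "Summarize the ticket in one sentence.",
    "B": "Summarize the ticket in one sentence and suggest a next step.",
}

results = defaultdict(lambda: {"shown": 0, "clicked": 0})

def serve(user_id: int) -> str:
    variant = "A" if user_id % 2 == 0 else "B"   # deterministic split
    results[variant]["shown"] += 1
    # Stand-in for the real outcome signal (e.g., click-through or CSAT).
    if random.random() < (0.30 if variant == "A" else 0.38):
        results[variant]["clicked"] += 1
    return PROMPTS[variant]

for uid in range(1000):
    serve(uid)

for variant, stats in sorted(results.items()):
    rate = stats["clicked"] / stats["shown"]
    print(f"Variant {variant}: CTR {rate:.1%} over {stats['shown']} impressions")
```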


If we want a passwordless future, let's get our passkey story straight

When passkeys work, which is not always the case, they can offer a nearly automagical experience compared to the typical user ID and password workflow. Some passkey proponents like to say that passkeys will be the death of passwords. More realistically, however, at least for the next decade, they'll mean the death of some passwords -- perhaps many passwords. We'll see. Even so, the idea of killing passwords is a very worthy objective. ... With passkeys, the device that the end user is using – for example, their desktop computer or smartphone -- is the one that's responsible for generating the public/private key pair as a part of an initial passkey registration process. After doing so, it shares the public key – the one that isn't a secret – with the website or app that the user wants to login to. The private key -- the secret -- is never shared with that relying party. This is where the tech article above has it backward. It's not "the site" that "spits out two pieces of code" saving one on the server and the other on your device. ... Passkeys have a long way to go before they realize their potential. Some of the current implementations are so alarmingly bad that it could delay their adoption. But adoption of passkeys is exactly what's needed to finally curtail a decades-long crime spree that has plagued the internet. 
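
A bare-bones sketch of who holds which key in the flow described above, using an Ed25519 key pair as a stand-in for a passkey; real passkeys follow the WebAuthn/FIDO2 protocol with attestation and relying-party binding that this omits, and the example assumes the third-party 'cryptography' package is installed.

```python
# Sketch of the registration/login split: the device generates the pair, shares
# only the public key, and later proves possession by signing a challenge.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. Registration: the user's device generates the pair and keeps the private key.
device_private_key = Ed25519PrivateKey.generate()
public_key_for_site = device_private_key.public_key()   # only this is shared

# 2. Login: the relying party sends a random challenge...
challenge = os.urandom(32)

# ...the device signs it with the private key that never left the device...
signature = device_private_key.sign(challenge)

# ...and the site verifies with the stored public key (raises if invalid).
public_key_for_site.verify(signature, challenge)
print("Challenge verified: the device proved possession of the private key.")
```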



AI: More Buzzword Than Breakthrough

While Artificial Intelligence focuses on creating systems that simulate human intelligence, Intelligent Automation leverages these AI capabilities to automate end-to-end business processes. In essence, AI is the brain that provides cognitive functions, while Intelligent Automation is the body that executes tasks using AI’s intelligence. This distinction is critical; although Artificial Intelligence is a component of Intelligent Automation, not all AI applications result in automation, and not all automation requires advanced Artificial Intelligence. ... Intelligent Automation automates and optimizes business processes by combining AI with automation tools. This integration results in increased efficiency and reduced operating costs. For instance, Intelligent Automation can streamline supply chain operations by automating inventory management, order fulfillment, and logistics, resulting in faster turnaround times and fewer errors. ... In recent years, the term “AI” has been widely used as a marketing buzzword, often applied to technologies that do not have true AI capabilities. This phenomenon, sometimes referred to as “AI washing,” involves branding traditional automation or data processing systems as AI in order to capitalize on the term’s popularity. Such practices can mislead consumers and businesses, leading to inflated expectations and potential disillusionment with the technology.


Introduction to API Management

API gateways are pivotal in managing both traffic and security for APIs. They act as the frontline interface between APIs and the users, handling incoming requests and directing them to the appropriate services. API gateways enforce policies such as rate limiting and authentication, ensuring secure and controlled access to API functions. Furthermore, they can transform and route requests, collect analytics data and provide caching capabilities. ... With API governance, businesses get the most out of their investment. The purpose of API governance is to make sure that APIs are standardized so that they are complete, compliant and consistent. Effective API governance enables organizations to identify and mitigate API-related risks, including performance concerns, compliance issues and security vulnerabilities. API governance is complex and involves security, technology, compliance, utilization, monitoring, performance and education. Organizations can make their APIs secure, efficient, compliant and valuable to users by following best practices in these areas. ... Security is paramount in API management. Advanced security features include authentication mechanisms like OAuth, API keys and JWT (JSON Web Tokens) to control access. Encryption, both in transit and at rest, ensures data integrity and confidentiality.
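
A toy sketch of one gateway policy mentioned above, a fixed-window rate limit keyed by API key; the window size, quota and client key are invented, and production gateways layer authentication (OAuth, API keys, JWTs), routing, caching and analytics on top of checks like this.

```python
# Fixed-window rate limiter keyed by API key, as a gateway might enforce.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS = 5

_counters = defaultdict(lambda: [0, 0.0])  # api_key -> [count, window_start]

def allow_request(api_key: str) -> bool:
    now = time.time()
    count, window_start = _counters[api_key]
    if now - window_start >= WINDOW_SECONDS:      # start a new window
        _counters[api_key] = [1, now]
        return True
    if count < MAX_REQUESTS:                      # still within quota
        _counters[api_key][0] += 1
        return True
    return False                                  # gateway would answer HTTP 429

for i in range(7):
    print(f"request {i + 1}:", "allowed" if allow_request("client-123") else "rate limited")
```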


Sustainability starts within: Flipkart & Furlenco on building a climate-conscious culture

Based on the insights from Flipkart and Furlenco, here are six actionable steps for leaders seeking to embed climate goals into their company culture: Lead with intent: Make climate goals a strategic priority, not just a CSR initiative. Signal top-level commitment and allocate leadership roles accordingly. Operationalise sustainability: Move beyond policies into process design — from green supply chains to net-zero buildings and water reuse systems. Make it measurable: Integrate climate-related KPIs into team goals, performance reviews, and business dashboards. Empower employees: Create space for staff to lead climate initiatives, volunteer, learn, and innovate. Build purpose into daily roles. Foster dialogue and storytelling: Share wins, losses, and journeys. Use Earth Day campaigns, internal newsletters, and learning modules to bring sustainability to life. Measure culture, not just carbon: Assess how employees feel about their role in climate action — through surveys, pulse checks, and feedback loops. ... Beyond the company walls, this cultural approach to climate leadership has ripple effects. Customers are increasingly drawn to brands with strong environmental values, investors are rewarding companies with robust ESG cultures, and regulators are moving from voluntary frameworks to mandatory disclosures.


Proof-of-concept bypass shows weakness in Linux security tools

An Israeli vendor was able to evade several leading Linux runtime security tools using a new proof-of-concept (PoC) rootkit that it claims reveals the limitations of many products in this space. The work of cloud and Kubernetes security company Armo, the PoC is called ‘Curing’, a portmanteau word that combines the idea of a ‘cure’ with the io_uring Linux kernel interface that the company used in its bypass PoC. Using Curing, Armo found it was possible to evade three Linux security tools to varying degrees: Falco (created by Sysdig but now a Cloud Native Computing Foundation graduated project), Tetragon from Isovalent (now part of Cisco), and Microsoft Defender. ... Armo said it was motivated to create the rootkit to draw attention to two issues. The first was that, despite the io_uring technique being well documented for at least two years, vendors in the Linux security space had yet to react to the danger. The second purpose was to draw attention to deeper architectural challenges in the design of the Linux security tools that large numbers of customers rely on to protect themselves: “We wanted to highlight the lack of proper attention in designing monitoring solutions that are forward-compatible. Specifically, these solutions should be compatible with new features in the Linux kernel and address new techniques,” said Schendel.


Insider threats could increase amid a chaotic cybersecurity environment

Most organisations have security plans and policies in place to decrease the potential for insider threats. No policy will guarantee immunity to data breaches and IT asset theft but CISOs can make sure their policies are being executed through routine oversight and audits. Best practices include access control and least privilege, which ensures employees, contractors and all internal users only have access to the data and systems necessary for their specific roles. Regular employee training and awareness programmes are also critical. Training sessions are an effective means to educate employees on security best practices such as how to recognise phishing attempts, social engineering attacks and the risks associated with sharing sensitive information. Employees should be trained in how to report suspicious activities – and there should be a defined process for managing these reports. Beyond the security controls noted above, those that govern the IT asset chain of custody are crucial to mitigating the fallout of a breach should assets be stolen by employees, former employees or third parties. The IT asset chain of custody refers to the process that tracks and documents the physical possession, handling and movement of IT assets throughout their lifecycle. A sound programme ensures that there is a clear, auditable trail of who has access to and controls the asset at any given time. 
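
To make the chain-of-custody idea concrete, here is a small illustrative sketch in which each transfer of an asset is appended to a hash-linked log, giving an auditable trail of who held the asset and when; the asset ID, holders and record fields are invented.

```python
# Hash-linked chain-of-custody trail for an IT asset (illustrative only).
import hashlib
import json
import time

def record_transfer(trail: list, asset_id: str, holder: str) -> None:
    previous_hash = trail[-1]["entry_hash"] if trail else "genesis"
    entry = {
        "asset_id": asset_id,
        "holder": holder,
        "timestamp": time.time(),
        "previous_hash": previous_hash,   # links this entry to the one before it
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    trail.append(entry)

trail: list = []
record_transfer(trail, "LAPTOP-0042", "IT intake")
record_transfer(trail, "LAPTOP-0042", "J. Doe (engineering)")
record_transfer(trail, "LAPTOP-0042", "IT decommissioning")
for entry in trail:
    print(entry["holder"], "->", entry["entry_hash"][:12])
```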


Distributed Cloud Computing: Enhancing Privacy with AI-Driven Solutions

AI has the potential to play a game-changing role in distributed cloud computing and PETs. By enabling intelligent decision-making and automation, AI algorithms can help us optimize data processing workflows, detect anomalies, and predict potential security threats. AI has been instrumental in helping us identify patterns and trends in complex data sets. We're excited to see how it will continue to evolve in the context of distributed cloud computing. For instance, homomorphic encryption allows computations to be performed on encrypted data without decrypting it first. This means that AI models can process and analyze encrypted data without accessing the underlying sensitive information. Similarly, AI can be used to implement differential privacy, a technique that adds noise to the data to protect individual records while still allowing for aggregate analysis. In anomaly detection, AI can identify unusual patterns or outliers in data without requiring direct access to individual records, ensuring that sensitive information remains protected. While AI offers powerful capabilities within distributed cloud environments, the core value proposition of integrating PETs remains in the direct advantages they provide for data collaboration, security, and compliance. Let's delve deeper into these key benefits, challenges and limitations of PETs in distributed cloud computing.
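
A compact sketch of the differential-privacy technique mentioned above: a counting query is released with Laplace noise scaled to 1/epsilon, so the aggregate stays useful while any single record's influence is masked; the record list and epsilon value are illustrative.

```python
# Laplace mechanism for a counting query (sensitivity 1).
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records: list, epsilon: float = 1.0) -> float:
    """A count has sensitivity 1, so the noise scale is 1/epsilon."""
    return len(records) + laplace_noise(1.0 / epsilon)

patients_with_condition = ["r1", "r2", "r3", "r4", "r5"]
print("true count:", len(patients_with_condition))
print("privatized count:", round(private_count(patients_with_condition, epsilon=0.5), 2))
```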


Mobile Applications: A Cesspool of Security Issues

"What people don't realize is you ship your entire mobile app and all your code to this public store where any attacker can download it and reverse it," Hoog says. "That's vastly different than how you develop a Web app or an API, which sit behind a WAF and a firewall and servers." Mobile platforms are difficult for security researchers to analyze, Hoog says. One problem is that developers rely too much on the scanning conducted by Apple and Google on their app stores. When a developer loads an application, either company will conduct specific scans to detect policy violations and to make malicious code more difficult to upload to the repositories. However, developers often believe the scanning is looking for security issues, but it should not be considered a security control, Hoog says. "Everybody thinks Apple and Google have tested the apps — they have not," he says. "They're testing apps for compliance with their rules. They're looking for malicious malware and just egregious things. They are not testing your application or the apps that you use in the way that people think." ... In addition, security issues on mobile devices tend to have a much shorter lifetime, because of the closed ecosystems and the relative rarity of jailbreaking. When NowSecure finds a problem, there is no guarantee that it will last beyond the next iOS or Android update, he says.


The future of testing in compliance-heavy industries

In today’s fast-evolving technology landscape, being an engineering leader in compliance-heavy industries can be a struggle. Managing risks and ensuring data integrity are paramount, but the dangers are constant when working with large data sources and systems. Traditional integration testing within the context of stringent regulatory requirements is more challenging to manage at scale. This leads to gaps, such as insufficient test coverage across interconnected systems, a lack of visibility into data flows, inadequate logging, and missed edge case conditions, particularly in third-party interactions. Due to these weaknesses, security vulnerabilities can pop up and incident response can be delayed, ultimately exposing organizations to violations and operational risk. ... API contract testing is a modern approach used to validate the expectations between different systems, making sure that any changes in APIs don’t break expectations or contracts. Changes might include removing or renaming a field and altering data types or response structures. These seemingly small updates can cause downstream systems to crash or behave incorrectly if they are not properly communicated or validated ahead of time. ... The shifting left practice has a lesser-known cousin: shifting right. Shifting right focuses on post-deployment validation using concepts such as observability and real-time monitoring techniques.
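
A minimal consumer-side contract check in the spirit described above: the consumer pins the fields and types it depends on, and a rename, removal or type change in the provider's response fails before deployment; the account payload and contract are invented, and real teams typically use a contract-testing framework rather than hand-rolled checks.

```python
# The fields and types this consumer relies on (the "contract").
EXPECTED_CONTRACT = {
    "account_id": str,
    "balance": float,
    "currency": str,
}

def violates_contract(response: dict) -> list:
    """Return a list of contract violations for a provider response."""
    problems = []
    for field, expected_type in EXPECTED_CONTRACT.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(
                f"{field}: expected {expected_type.__name__}, got {type(response[field]).__name__}"
            )
    return problems

# Provider renamed 'balance' to 'available_balance' and changed its type.
provider_response = {"account_id": "ACC-1", "available_balance": "100.00", "currency": "EUR"}
print(violates_contract(provider_response) or "contract satisfied")
```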

Daily Tech Digest - April 18, 2025


Quote for the day:

“Failures are finger posts on the road to achievement.” -- C.S. Lewis



How to Use Passive DNS To Trace Hackers Command And Control Infrastructure

This technology works through a network of sensors that monitor DNS query-response pairs, forwarding this information to central collection points for analysis without disrupting normal network operations. The resulting historical databases contain billions of unique records that security analysts can query to understand how domain names have resolved over time. ... When investigating potential threats, analysts can review months or even years of DNS resolution data without alerting adversaries to their investigation—a critical advantage when dealing with sophisticated threat actors. ... The true power of passive DNS in C2 investigation comes through various pivoting techniques that allow analysts to expand from a single indicator to map entire attack infrastructures. These techniques leverage the interconnected nature of DNS to reveal relationships between seemingly disparate domains and IP addresses. IP-based pivoting represents one of the most effective approaches. Starting with a known malicious IP address, analysts can query passive DNS to identify all domains that have historically resolved to that address. This technique often reveals additional malicious domains that share infrastructure but might otherwise appear unrelated.
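
The IP-based pivot can be illustrated with a toy in-memory passive DNS dataset; real investigations query a passive DNS provider's historical database, and the domains, IPs and dates here are invented.

```python
# Toy passive DNS records: domain -> IP resolutions observed over time.
PASSIVE_DNS = [
    {"domain": "update-check.example", "ip": "203.0.113.10", "first_seen": "2024-06-01"},
    {"domain": "cdn-sync.example",     "ip": "203.0.113.10", "first_seen": "2024-07-15"},
    {"domain": "mail.legit.example",   "ip": "198.51.100.7", "first_seen": "2023-01-20"},
]

def pivot_on_ip(ip: str) -> list:
    """Return every domain that has historically resolved to the given IP."""
    return [r for r in PASSIVE_DNS if r["ip"] == ip]

seed_indicator = "203.0.113.10"   # known C2 address from an alert
for record in pivot_on_ip(seed_indicator):
    print(f'{record["domain"]} (first seen {record["first_seen"]}) '
          f'shares infrastructure with {seed_indicator}')
```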


Why digital identity is the cornerstone of trust in modern business

The foundation of digital trust is identity. It is no longer sufficient to treat identity management as a backend IT concern. Enterprises must now embed identity solutions into every digital touchpoint, ensuring that user interactions – whether by customers, employees, or partners – are both frictionless and secure. Modern enterprises must shift from fragmented, legacy systems to a unified identity platform. This evolution allows organisations to scale securely, eliminate redundancies and deliver the streamlined experiences users now expect. ... Digital identity is also a driver of customer experience. In today’s hyper-competitive digital landscape, the sign-up process can make or break a brand relationship. Clunky login screens or repeated verification prompts are quick ways to lose a customer.


Is your business ready for the IDP revolution?

AI-powered document processing offers significant advantages. Using advanced ML, IDP systems accurately interpret even complex and low-quality documents, including those with intricate tables and varying formats. This reduces manual work and the risk of human error. ... IDP also significantly improves data quality and accuracy by eliminating manual data entry, ensuring critical information is captured correctly and consistently. This leads to better decision-making, regulatory compliance and increased efficiency. IDP has wide-ranging applications. In healthcare, it speeds up claims processing and improves patient data management. In finance, it automates invoice processing and streamlines loan applications. In legal, it assists with contract analysis and due diligence. And in insurance, IDP automates information extraction from claims and reports, accelerating processing and boosting customer satisfaction. One specific example of this innovation in action is DocuWare’s own Intelligent Document Processing (DocuWare IDP). Our AI-powered solution streamlines how businesses handle even the most complex documents. Available as a standalone product, in the DocuWare Cloud or on-premises, DocuWare IDP automates text recognition, document classification and data extraction from various document types, including invoices, contracts and ID cards.


Practical Strategies to Overcome Cyber Security Compliance Standards Fatigue

The suitability of a cyber security framework must be determined based on applicable laws, industry standards, organizational risk profile, business goals, and resource constraints. It goes without saying that organizations providing critical services to the US federal government will pursue NIST compliance, while Small and Medium-sized Enterprises (SMEs) may want to focus on the CIS Top 20, given resource constraints. Once the cyber security team has selected the most suitable framework, they should seek endorsement from the executive team or cyber risk governance committee to ensure a shared sense of purpose. ... Mapping will enable organizations to identify overlapping controls to create a unified control set that addresses the requirements of multiple frameworks. This way, the organization can avoid redundant controls and processes, which in turn reduces cyber security team fatigue, accelerates innovation and lowers the cost of security. ... Cyber compliance standards play an integral role in ensuring organizations prioritize the protection of consumers' confidential and sensitive information above profits. But to reduce pressure on cyber teams already battling stress, cyber leaders must take a pragmatic approach that carefully balances compliance with innovation, agility and efficiency.
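
A sketch of the mapping exercise: overlapping requirements from two frameworks collapse into a unified control set so each control is implemented and evidenced once; the control IDs, descriptions and mappings are simplified illustrations, not authoritative crosswalks.

```python
# Unified control set mapped back to the framework requirements it satisfies.
FRAMEWORK_CONTROLS = {
    "NIST CSF": {"PR.AC-1": "Identities and credentials are managed",
                 "PR.AT-1": "Users are trained"},
    "CIS":      {"CIS-5":   "Account management",
                 "CIS-14":  "Security awareness training"},
}

UNIFIED = {
    "Access management programme":  [("NIST CSF", "PR.AC-1"), ("CIS", "CIS-5")],
    "Security awareness programme": [("NIST CSF", "PR.AT-1"), ("CIS", "CIS-14")],
}

for control, satisfies in UNIFIED.items():
    refs = ", ".join(f"{fw} {cid}" for fw, cid in satisfies)
    print(f"{control}: one implementation covers {refs}")
```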


The Elaboration of a Modern TOGAF Architecture Maturity Model

This innovative TOGAF architecture maturity model provides a structured framework for assessing and enhancing an organization’s enterprise architecture capabilities in organizations that need to become more agile. By defining maturity levels across ten critical domains, the model enables organizations to transition from unstructured, reactive practices to well-governed, data-driven, and continuously optimized architectural processes. The five maturity levels—Initial, Under Development, Defined, Managed, and Measured—offer a clear roadmap for organizations to integrate EA into strategic decision-making, align business and IT investments, and establish governance frameworks that enhance operational efficiency. Through this approach, EA evolves from a support function into a key driver of innovation and business transformation. This model emphasizes continuous improvement and strategic alignment, ensuring that EA not only supports but actively contributes to an organization’s long-term success. By embedding EA into business strategy, security, governance, and solution delivery, enterprises can enhance agility, mitigate risks, and drive competitive advantage. Measuring EA’s impact through financial metrics and performance indicators further ensures that architecture initiatives provide tangible business value. 


Securing digital products under the Cyber Resilience Act

The CRA explicitly states that products should have an appropriate level of cybersecurity based on the risks; the risk-based approach is fundamental to the regulation. This has the advantage that we can set the bar wherever we want, as long as we make a good risk-based argument for this level. This implies that we must have a methodical categorization of risk, hence we need application risk profiles. In order to implement this we can follow the quality criteria of maturity levels 1, 2 and 3 of the application risk profiles practice. This includes having a clearly agreed upon, understood, accessible and updated risk classification system. ... Many companies already have SAMM assessments. If you do not have SAMM assessments but use another maturity framework such as OWASP DSOMM or NIST CSF, you could use the available mappings to accelerate the translation to SAMM. Otherwise we recommend doing SAMM assessments and identifying the gaps in the processes needed, then deciding on a roadmap to develop the processes and capabilities over time. ... Under the CRA we need to demonstrate that we have adequate security processes in place, and that we do not ship products with known vulnerabilities. So apart from having a good picture of the data flows, we need to have a good picture of the processes in place.


Insider Threats, AI and Social Engineering: The Triad of Modern Cybersecurity Threats

Insiders who are targeted or influenced by external adversaries to commit data theft may not be addressed by traditional security solutions, because attackers might use a combination of manipulation techniques and tactics to get access to the confidential data of an organization. This can be seen in the insider threats carried out by Famous Chollima, a cyber-criminal group that targeted organizations through employees who were working for the group. The group recruited individuals, falsified their identities, and helped them secure employment with the target organization. Once inside, the group got access to sensitive information through the employees it had placed there. ... Since AI can mimic user behavior, it is hard for security teams to tell the difference between normal activity and AI-generated activity. AI can also be used by insiders to assist in their plans; for example, an insider could use or train AI models to analyze user activity and pinpoint the window of least activity, deploy malware onto a critical system at that optimal time, and disguise the activity as a legitimate action to avoid detection by monitoring solutions.


How Successful Leaders Get More Done in Less Time

In order to be successful, leaders must make a conscious shift to move from reactive to intentional. They must guard their calendars, build in time for deep work, and set clear boundaries to focus on what truly drives progress. ... Time-blocking is one of the simplest, most powerful tools a leader can use. At its core, time-blocking is the practice of assigning specific blocks of time to different types of work: deep focus, meetings, admin, creative thinking or even rest. Why does it work? Because it eliminates context-switching, which is the silent killer of productivity. Instead of bouncing between tasks and losing momentum, time-blocking gives your day structure. It creates rhythm and ensures that what matters most actually gets done. ... Not everything on your to-do list matters. But without a clear system to prioritize, everything feels urgent. That's how leaders end up spending hours on reactive work while their most impactful tasks get pushed to "tomorrow." The fix? Use prioritization frameworks like the 80/20 rule (20% of tasks drive 80% of results) to stay focused on what actually moves the needle. ... If you're still doing everything yourself, there's a chance you're creating a bottleneck. The best leaders know that delegation buys back time and creates opportunities for others to grow. 


The tech backbone creating the future of infrastructure

Governments and administrators around the world are rapidly realizing the benefits of integrated infrastructure. A prime example is the growing trend for connecting utilities across borders to streamline operations and enhance efficiency. The Federal-State Modern Grid Deployment Initiative, involving 21 US states, is a major step towards modernizing the power grid, boosting reliability and enhancing resource management. Across the Atlantic, the EU is linking energy systems; by 2030, each member nation should be sharing at least 15% of its electricity production with its neighbors. On a smaller scale, the World Economic Forum is encouraging industrial clusters—including in China, Indonesia, Ohio and Australia—to share resources, infrastructure and risks to maximize economic and environmental value en route to net zero. ... Data is a nation’s most valuable asset. It is now being collected from multiple infrastructure points—traffic, energy grids, utilities. Infusing it with artificial intelligence (AI) in the cloud enables businesses to optimize their operations in real time. Centralizing this information, such as in an integrated command-and-control center, facilitates smoother collaboration and closer interaction among different sectors. 


No matter how advanced the technology is, it can all fall apart without strong security

One cybersecurity trend that truly excites me is the convergence of Artificial Intelligence (AI) with cybersecurity, especially in the areas of threat detection, incident response, and predictive risk management. This has motivated me to pursue a PhD in Cybersecurity using AI. Unlike traditional rule-based systems, AI is revolutionising cybersecurity by enabling proactive and adaptive defence strategies through contextual intelligence, shifting the focus from reactive to proactive measures. ... The real magic lies in combining AI with human judgement — what I often refer to as “human-in-the-loop cybersecurity.” This balance allows teams to scale faster, stay sharp, and focus on strategic defence instead of chasing every alert manually. What I have learnt from all this is that the fusion of AI and cybersecurity is not just an enhancement, it’s a paradigm shift. However, the key is achieving balance. Hence, AI should augment human intelligence, rather than supplant it. ... In the realm of financial cybersecurity, the most significant risk isn’t solely technical; it stems from the gap between security measures and business objectives. As the CISO, my responsibility extends beyond merely protecting against threats; I aim to integrate cybersecurity into the core of the organisation, transforming it into a strategic enabler rather than a reactive measure.

Daily Tech Digest - January 08, 2025

GenAI Won’t Work Until You Nail These 4 Fundamentals

Too often, organizations leap into GenAI fueled by excitement rather than strategic intent. The urgency to appear innovative or keep up with competitors drives rushed implementations without distinct goals. They see GenAI as the “shiny new [toy],” as Kevin Collins, CEO of Charli AI, aptly puts it, but the reality check comes hard and fast: “Getting to that shiny new toy is expensive and complicated.” This rush is reflected in over 30,000 mentions of AI on earnings calls in 2023 alone, signaling widespread enthusiasm but often without the necessary clarity of purpose. ... The shortage of strategic clarity isn’t the only roadblock. Even when organizations manage to identify a business case, they often find themselves hamstrung by another pervasive issue: their data. Messy data hampers organizations’ ability to mature beyond entry-level use cases. Data silos, inconsistent formats and incomplete records create bottlenecks that prevent GenAI from delivering its promised value. ... Weak or nonexistent governance structures expose companies to various ethical, legal and operational risks that can derail their GenAI ambitions. According to data from an Info-Tech Research Group survey, only 33% of GenAI adopters have implemented clear usage policies. 


Inside the AI Data Cycle: Understanding Storage Strategies for Optimised Performance

The AI Data Cycle is a six-stage framework, beginning with the gathering and storing of raw data. In this initial phase, data is collected from multiple sources, with a focus on assessing its quality and diversity, which establishes a strong foundation for the stages that follow. For this phase, high-capacity enterprise hard disk drives (eHDDs) are recommended, as they provide high storage capacity and cost-effectiveness per drive. In the next stage, data is prepared for ingestion, and this is where insight from the initial data collection phase is processed, cleaned and transformed for model training. To support this phase, data centers are upgrading their storage infrastructure – such as implementing fast data lakes – to streamline data preparation and intake. At this point, high-capacity SSDs play a critical role, either augmenting existing HDD storage or enabling the creation of all-flash storage systems for faster, more efficient data handling. Next is the model training phase, where AI algorithms learn to make accurate predictions using the prepared training data. This stage is executed on high-performance supercomputers, which require specialised, high-performing storage to function optimally. 


Buy or Build: Commercial Versus DIY Network Automation

DIY automation can be tailored to your specific network and, in some cases, to meet security or compliance requirements more easily than vendor products. And they come at a great price: free! The cost of a commercial tool is sometimes higher than the value it creates, especially if you have unusual use cases. But DIY tools take time to build and support. Over 50% of organizations in EMA’s survey spend 6-20 hours per week debugging and supporting homegrown tools. Cultural preferences also come into play. While engineers love to grumble about vendors and their products, that doesn’t mean they prefer DIY. In my experience, NetOps teams are often set in their ways, preferring manual processes that do not scale up to match the complexity of modern networks. Many network engineers do not have the coding skills to build good automation, and most don't think about how to tackle problems with automation broadly. The first and most obvious fix for the issues holding back automation is simply for automation tools to get better. They must have broad integrations and be vendor neutral. Deep network mapping capabilities help resolve the issue of legacy networks and reduce the use cases that require DIY. Low or no-code tools help ease budget, staffing, and skills issues.


How HR can lead the way in embracing AI as a catalyst for growth

Common workplace concerns include job displacement, redundancy, bias in AI decision-making, output accuracy, and the handling of sensitive data. Tracy notes that these are legitimate worries that HR must address proactively. “Clear policies are essential. These should outline how AI tools can be used, especially with sensitive data, and safeguards must be in place to protect proprietary information,” she explains. At New Relic, open communication about AI integration has built trust. AI is viewed as a tool to eliminate repetitive tasks, freeing time for employees to focus on strategic initiatives. For instance, their internally developed AI tools support content drafting and research, enabling leaders like Tracy to prioritize high-value activities, such as driving organizational strategy. “By integrating AI thoughtfully and transparently, we’ve created an environment where it’s seen as a partner, not a threat,” Tracy says. This approach fosters trust and positions AI as an ally in smarter, more secure work practices. “The key is to highlight how AI can help everyone excel in their roles and elevate the work they do every day. While it’s realistic to acknowledge that some aspects of our jobs—or even certain roles—may evolve with AI, the focus should be on how we integrate it into our workflow and use it to amplify our impact and efficiency,” notes Tracy.


Cloud providers are running out of ‘next big things’

Yes, every cloud provider is now “an AI company,” but let’s be honest — they’re primarily engineering someone else’s innovations into cloud-consumable services. GPT-4 through Microsoft Azure? That’s OpenAI’s innovation. Vector databases? They came from the open source community. Cloud providers are becoming AI implementation platforms rather than AI innovators. ... The root causes of the slowdown in innovation are clear. Market maturity indicates that the foundational issues in cloud computing have mostly been resolved. What’s left are increasingly specialized niche cases. Second, AWS, Azure, and Google Cloud are no longer the disruptors — they’re the defenders of market share. Their focus has shifted from innovation to optimization and retention. A defender’s mindset manifests itself in product strategies. Rather than introducing revolutionary new services, cloud providers are fine-tuning existing offerings. They’re also expanding geographically, with the hyperscalers expected to announce 30 new regions in 2025. However, these expansions are driven more by data sovereignty requirements than innovative new capabilities. This innovation slowdown has profound implications for enterprises. Many organizations bet their digital transformation on cloud-native architectures with continuous innovation. 


Historical Warfare’s Parallels with Cyber Warfare

In 1942, the British considered Singapore nearly impregnable. They fortified its coast heavily, believing any attack would come from the sea. Instead, the Japanese stunned the defenders by advancing overland through dense jungle terrain the British deemed impassable. This unorthodox approach, using bicycles in great numbers and small tracks through the jungle, enabled the Japanese forces to hit the defences at their weakest point, well ahead of the projected time, catching the British off guard. In cybersecurity, this corresponds to zero-day vulnerabilities and unconventional attack vectors. Hackers exploit flaws that defenders never saw coming, turning supposedly secure systems into easy marks. The key lesson is never to grow complacent, because you never know what you can be hit with and when. ... Cyber attackers also use psychology against their targets. Phishing emails appeal to curiosity, trust, greed, or fear, thus luring victims into clicking malicious links or revealing passwords. Social engineering exploits human nature rather than code, and defenders must recognise that people, not just machines, are the frontline. Regular training, clear policies, and an ingrained culture of healthy scepticism, which is present in most IT staff, can thwart even the most artful psychological ploys.


Insider Threat: Tackling the Complex Challenges of the Enemy Within

Third-party background checking can only go so far. It must be supported by old-fashioned and experienced interview techniques. Omri Weinberg, co-founder and CRO at DoControl, explains his methodology: “We’re primarily concerned with two types of bad actors. First, there are those looking to use the company’s data for nefarious purposes. These individuals typically have the skills to do the job and then some – they’re often overqualified. They pose a severe threat because they can potentially access and exploit sensitive data or systems.” The second type includes those who oversell their skills and are actually under or way underqualified. “While they might not have malicious intent, they can still cause significant damage through incompetence or by introducing vulnerabilities due to their lack of expertise. For the overqualified potential bad actors, we’re wary of candidates whose skills far exceed the role’s requirements without a clear explanation. For the underqualified group, we look for discrepancies between claimed skills and actual experience or knowledge during interviews.” This means it is important to probe the candidate during the interview to gauge their true skill level. “It’s essential that the person evaluating the hire has the technical expertise to make these determinations,” he added.


Raise your data center automation game with easy ecosystem integration

If integrations are the key, then the things you look for to understand whether a product is flashy or meaningful should change. The UI matters, but the way tools are integrated is the truly telling characteristic. What APIs exist? How is data normalized? Are interfaces versioned and maintained across different releases? Can you create complex dashboards that pull things together from different sources using no-code models that don't require source access to contextualize your environment? How are workflows strung together into more complex operations? By changing your focus, you can start to evaluate these platforms based on how well they integrate rather than on how snazzy the time series database interface is. Of course, things like look and feel matter, but anyone who wants to scale their operations will realize that the UI might not even be the dominant consumption model over time. Is your team looking to click their way through to completion? ... Wherever you are in this discovery process, let me offer some simple advice: Expand your purview from the network to the ecosystem and evaluate your options in the context of that ecosystem. When you do that effectively, you should know which solutions are attractive but incremental and which are likely to create more durable value for you and your organization.


Why Scrum Masters Should Grow Their Agile Coaching Skills

More than half of the organizations surveyed report that finding scrum masters with the right combination of skills to meet their evolving demands is very challenging. Notably, 93% of companies seek candidates with strong coaching skills but state that it’s one of the skills hardest to find. Building strong coaching and facilitation skills can help you stand out in the job market and open doors to new career opportunities. As scrum masters are expected to take on increasingly strategic roles, your skills become even more valuable. Senior scrum masters, in particular, are called upon to handle politically sensitive and technically complex situations, bridging gaps between development teams and upper management. Coaching and facilitation skills are requested nearly three times more often for senior scrum master roles than for other positions. Growing these coaching competencies can give you an edge and help you make a bigger impact in your career. ... Who wouldn’t want to move up in their career into roles with greater responsibilities and bigger impact? Regardless of the area of the company you’re in—product, sales, marketing, IT, operations—you’ll need leadership skills to guide people and enable change within the organization. 


Scaling penetration testing through smart automation

Automation undoubtedly has tremendous potential to streamline the penetration testing lifecycle for MSSPs. The most promising areas are the repetitive, data-intensive, and time-consuming aspects of the process. For instance, automated tools can cross-reference vulnerabilities against known exploit databases like CVE, significantly reducing manual research time. They can enhance accuracy by minimizing human error in tasks like calculating CVSS scores. Automation can also drastically reduce the time required to compile, format, and standardize pen-testing reports, which can otherwise take hours or even days depending on the scope of the project. For MSSPs handling multiple client engagements, this could translate into faster project delivery cycles and improved operational efficiency. For their clients – it enables near real-time responses to vulnerabilities, reducing the window of exposure and bolstering their overall security posture. However – and this is crucial – automation should not be treated as a silver bullet. Human expertise remains absolutely indispensable in the testing itself. The human ability to think creatively, to understand complex system interactions, to develop unique attack scenarios that an algorithm might miss—these are irreplaceable. 
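
As a concrete example of the report-preparation steps described, the sketch below cross-references findings against a local known-exploited list and buckets them by CVSS v3 severity bands; the CVE IDs, scores and hosts are invented.

```python
# Cross-reference findings against a KEV-style list and bucket by CVSS severity.
KNOWN_EXPLOITED = {"CVE-2024-0001"}   # stand-in for a known-exploited catalogue

def severity(cvss: float) -> str:
    """Bucket a CVSS v3 base score into the standard severity bands."""
    if cvss >= 9.0:
        return "Critical"
    if cvss >= 7.0:
        return "High"
    if cvss >= 4.0:
        return "Medium"
    return "Low"

findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "host": "10.0.0.5"},
    {"cve": "CVE-2023-9999", "cvss": 5.3, "host": "10.0.0.8"},
]

for f in findings:
    exploited = "known exploited" if f["cve"] in KNOWN_EXPLOITED else "no known exploitation"
    print(f'{f["host"]}: {f["cve"]} [{severity(f["cvss"])}, {exploited}]')
```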



Quote for the day:

"Don't judge each day by the harvest you reap but by the seeds that you plant." -- Robert Louis Stevenson

Daily Tech Digest - November 30, 2024

API Mocking Is Essential to Effective Change Management

A constant baseline is essential when managing API updates. Without it, teams risk diverging from the API’s intended design, resulting in more drift and potentially disruptive breaking changes. API mocks serve as a baseline by accurately simulating the API’s intended behavior and data formats. This enables development and quality assurance teams to compare proposed changes to a standardized benchmark, ensuring that new features or upgrades adhere to the API’s specified architecture before deployment. ... A centralized mocking environment is helpful for teams who have to manage changes over time and monitor API versions. Teams create a transparent, trusted source of truth from a centralized environment where all stakeholders may access the mock API, which forms the basis of version control and change tracking. By making every team operate from the same baseline in keeping with the desired API behavior and structure, this centralized approach helps reduce drift. ... Teams that want to properly use API mocking in change management must include mocking techniques in their daily development processes. These techniques ensure that the API’s documented specifications, implementation and testing environments remain in line, lowering the risk of drift and supporting consistent, open updates.
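
A minimal sketch of using a mock as the change-management baseline: the mock pins the documented response shape for an endpoint, and a proposed change is diffed against it before deployment; the endpoint and field names are invented, and real setups would diff OpenAPI specs or mock-server definitions rather than plain dictionaries.

```python
# Baseline mock (documented shape) versus a proposed change to the same endpoint.
BASELINE_MOCK = {
    "GET /orders/{id}": {"id": "string", "status": "string", "total": "number"},
}

PROPOSED = {
    "GET /orders/{id}": {"id": "string", "state": "string", "total": "number"},
}

def diff_against_baseline(endpoint: str) -> list:
    baseline, proposed = BASELINE_MOCK[endpoint], PROPOSED[endpoint]
    removed = [f for f in baseline if f not in proposed]
    added = [f for f in proposed if f not in baseline]
    return ([f"breaking: removed '{f}'" for f in removed] +
            [f"added '{f}'" for f in added])

for issue in diff_against_baseline("GET /orders/{id}"):
    print(issue)
# -> breaking: removed 'status'
#    added 'state'
```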


How Open-Source BI Tools Are Transforming DevOps Pipelines

BI tools automate the tracking of DevOps processes so teams can easily visualize, analyze, and interpret the key metrics. Rather than manually monitoring metrics such as the percentage of successful deployments or the time taken to deploy an application, teams can rely on BI to spot such trends in the first place. This makes insights operational, which saves time and ensures that pipelines are well managed. ... If you are looking for an easy-to-use tool, Metabase is the best option available. It allows you to build dashboards and query databases without writing elaborate code, and it can retrieve data from a variety of systems, which lets you measure KPIs such as deployment frequency or the occurrence of system-related problems. ... If you have large data volumes that need monitoring, Superset is a better fit. Superset was designed with big data workloads in mind, offering advanced visualization and projection capabilities across different data stores. Businesses with medium-complexity operational structures get the most out of Superset thanks to its advanced data-manipulation capabilities.
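As a small illustration of the kind of metric these dashboards surface, the Python sketch below computes a deployment success rate and weekly deployment frequency from a hypothetical deployments.csv export. The file and column names are assumptions; in practice Metabase or Superset would run the equivalent query directly against the pipeline's database.

```python
# Minimal sketch: computing DevOps KPIs from a hypothetical deployment export.
import pandas as pd

deploys = pd.read_csv("deployments.csv", parse_dates=["deployed_at"])

# Share of deployments that succeeded over the whole period.
success_rate = (deploys["status"] == "success").mean()

# Deployments per week, the classic deployment-frequency metric.
weekly = (deploys
          .groupby(pd.Grouper(key="deployed_at", freq="W"))
          .size()
          .rename("deployments"))

print(f"Deployment success rate: {success_rate:.1%}")
print(weekly.tail())
```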


Inside threats: How can companies improve their cyber hygiene?

Reflecting on the disconnect between IT and end users, Dyer says that there will “always be a disparity between the two classes of employees”. “IT is a core fundamental dependency to allow end users to perform their roles to the best of their ability – delivered as a service which they consume as customers,” he says. “Users wish to achieve and excel in their employment, and restrictions of IT can be a negative detractor in doing so.” He adds that users are seldom consciously trying to compromise the security of an organisation, and that lapses in security hygiene are usually due to a lack of investment, awareness, engagement or reinforcement. “It is the job of IT leaders to bridge that gap [and] partner with their respective peers to build a positive security awareness culture where employees feel empowered to speak up if something doesn’t look right and to believe in the mission of effectively securing the organisation from the evolving world of outside and inside threats.” And to build that culture, Dyer has some advice, such as making policies clearly defined and user-friendly, allowing employees to do their jobs using tech to the best of their ability (with an understanding of the guardrails they have), and instructing them on what to do should something suspicious happen.


Navigating Responsible AI in the FinTech Landscape

Cross-functional collaboration is critical to successful, responsible AI implementation. This requires the engagement of multiple departments, including security, compliance, legal, and AI governance teams, to collectively reassess and reinforce risk management strategies within the AI landscape. Bringing together these diverse teams allows for a more comprehensive understanding of risks and safeguards across departments, contributing to a well-rounded approach to AI governance. A practical way to ensure effective oversight and foster this collaboration is by establishing an AI review board composed of representatives from each key function. This board would serve as a centralized body for overseeing AI policy adherence, compliance, and ethical considerations, ensuring that all aspects of AI risk are addressed cohesively and transparently. Organizations should also focus on creating realistic and streamlined processes for responsible AI use, balancing regulatory requirements with operational feasibility. While it may be tempting to establish one consistent process, for instance, where conformity assessments would be generated for every AI system, this would lead to a significant delay in time to value. Instead, companies should carefully evaluate the value vs. effort of the systems, including any regulatory documentation, before proceeding toward production.


The Future Of IT Leadership: Lessons From INTERPOL

Cyber threats never stand still, and neither do the challenges IT leaders face. Historically, IT functions were reactive – fixing problems as they arose. Today, that approach is no longer sufficient. IT leaders must anticipate challenges before they materialise. This proactive stance involves harnessing the power of data, artificial intelligence (AI), and predictive analytics. By analysing trends and identifying vulnerabilities, IT leaders can prevent disruptions and position their organisations to respond effectively to emerging risks. This shift from reactive to predictive leadership is essential for navigating the complexities of digital transformation. ... Cybercrime doesn’t respect boundaries, and neither should IT leadership. Successful cybersecurity efforts often rely on partnerships – between businesses, governments, and international organisations. INTERPOL’s Africa Cyber Surge operations demonstrate the power of collaboration in tackling threats at scale. IT leaders need to adopt a similar mindset, building networks of trust across industries, government agencies, and even competitors; such networks help create shared defences against common threats. And collaboration isn’t limited to external partnerships.


4 prerequisites for IT leaders to navigate today’s era of disruption

IT leaders aren’t just tech wizards, but savvy data merchants. Imagine yourself as a store owner, but instead of shelves stocked with physical goods, your inventory consists of valuable data, insights, and AI/ML products. To succeed, IT leaders need to make their data products appealing by understanding customer needs and ensuring products are current, high-quality, and well organized. Offering value-added services on top of data, such as analysis and consulting, can further enhance the appeal. By adopting this mindset and applying business principles, IT leaders can unlock new revenue streams. ... With AI becoming more pervasive, its ethical and responsible use is paramount. Leaders must ensure that data governance policies are in place to mitigate risks of bias or discrimination, especially when AI models are trained on biased datasets. Transparency is key in AI, as it builds trust and empowers stakeholders to understand and challenge AI-generated insights. By building a program on the existing foundation of culture, structure, and governance, IT leaders can navigate the complexities of AI while upholding ethical standards and fostering innovation. ... IT leaders need to maintain a balance of intellectual (IQ) and emotional (EQ) intelligence to manage an AI-infused workplace.


How to Build a Strong and Resilient IT Bench

Since talent is likely to be in short supply both in new technology areas and in older tech areas that must still be supported, CIOs should consider a two-pronged approach that develops bench strength for new technologies while also ensuring that older infrastructure technologies have talent waiting in the wings. ... Companies that partner with universities and community colleges in their local areas have found a natural synergy with these institutions, which want to ensure that what they teach is relevant to the workplace. This synergy consists of companies offering input for computer science and IT courses and providing guest lecturers for classes. Those companies bring “real world” IT problems into student labs and offer internships for course credit that enable students to work in company IT departments with an IT staff mentor. ... It’s great to send people to seminars and certification programs, but unless they immediately apply what they learned to an IT project, they’ll soon forget it. Mindful of this, we immediately placed newly trained staff on actual IT projects so they could apply what they learned. Sometimes a more experienced staff member had to mentor them, but it was worth it. Confidence and competence built quickly.


The Growing Quantum Threat to Enterprise Data: What Next?

One of the most significant implications of quantum computing for cybersecurity is its potential to break widely used encryption algorithms. Many of the encryption systems that safeguard sensitive enterprise data today rely on the computational difficulty of certain mathematical problems, such as factoring large numbers or solving discrete logarithms. Classical computers would take an impractical amount of time to crack these encryption schemes, but a sufficiently powerful quantum computer could theoretically solve these problems efficiently, rendering many of today's security protocols obsolete. ... Recognizing the urgent need to address the quantum threat, the National Institute of Standards and Technology launched a multi-phase effort to develop post-quantum cryptographic standards. After eight years of rigorous research and relentless effort, NIST released the first set of finalized post-quantum encryption standards on Aug. 13. These standards aim to provide a clear and practical framework for organizations seeking to transition to quantum-safe cryptography. The final selection included algorithms for both public-key encryption and digital signatures, two of the most critical components of modern cybersecurity systems.
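To see concretely why factoring matters, here is a deliberately toy-sized Python illustration of textbook RSA: once the modulus is factored, the private key falls out immediately, and factoring is exactly the step Shor's algorithm would make tractable on a large fault-tolerant quantum computer. Real deployments use primes hundreds of digits long plus padding schemes omitted here; this is insecure by design.

```python
# Toy illustration (insecure, tiny numbers) of RSA's reliance on factoring.
from math import gcd

p, q = 61, 53                 # real keys use far larger primes
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

message = 42
ciphertext = pow(message, e, n)          # encrypt with the public key (e, n)
assert pow(ciphertext, d, n) == message  # decrypt with the private key d

# An attacker who factors n recovers p and q, hence phi and the private key:
recovered_d = pow(e, -1, (p - 1) * (q - 1))
assert recovered_d == d
# Classically, factoring a 2048-bit n is infeasible; a large quantum computer
# running Shor's algorithm would change that, which is what the NIST
# post-quantum standards are designed to defend against.
```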


Are we worse at cloud computing than 10 years ago?

Rapid advancements in cloud technologies combined with mounting pressures for digital transformation have led organizations to hastily adopt cloud solutions without establishing the necessary foundations for success. This is especially common if companies migrate to infrastructure as a service without adequate modernization, which can increase costs and technical debt. ... The growing pressure to adopt AI and generative AI technologies further complicates the situation and adds another layer of complexity. Organizations are caught between the need to move quickly and the requirement for careful, strategic implementation. ... Include thorough application assessment, dependency mapping, and detailed modeling of the total cost of ownership before migration begins. Success metrics must be clearly defined from the outset. ... When it comes to modernization, organizations must consider the appropriate refactoring and cloud-native development based on business value rather than novelty. The overarching goal is to approach cloud adoption as a strategic transformation. We must stop looking at this as a migration from one type of technology to another. Cloud computing and AI will work best when business objectives drive technology decisions rather than the other way around.


A well-structured Data Operating Model integrates data efforts within business units, ensuring alignment with actual business needs. I’ve seen how a "Hub and Spoke" model, which places central governance at the core while embedding data professionals in individual business units, can break down silos. This alignment ensures that data solutions are built to drive specific business outcomes rather than operating in isolation. ... Data leaders must ruthlessly prioritize initiatives that deliver tangible business outcomes. It’s easy to get caught up in hype cycles—whether it’s the latest AI model or a cutting-edge data governance framework—but real success lies in identifying the use cases that have a direct line of sight to revenue or cost savings. ... A common mistake I’ve seen in organizations is focusing too much on static reports or dashboards. The real value comes when data becomes actionable — when it’s integrated into decision-making processes and products. ... Being "data-driven" has become a dangerous buzzword. Overemphasizing data can lead to analysis paralysis. The true measure of success is not how much data you have or how many dashboards you create but the value you deliver to the business.



Quote for the day:

"Efficiency is doing the thing right. Effectiveness is doing the right thing." -- Peter F. Drucker