Showing posts with label multi-cloud.

Daily Tech Digest - August 02, 2025


Quote for the day:

"Successful leaders see the opportunities in every difficulty rather than the difficulty in every opportunity" -- Reed Markham


Chief AI role gains traction as firms seek to turn pilots into profits

CAIOs understand the strategic importance of their role, with 72% saying their organizations risk falling behind without AI impact measurement. Nevertheless, 68% said they initiate AI projects even if they can’t assess their impact, acknowledging that the most promising AI opportunities are often the most difficult to measure. Also, some of the most difficult AI-related tasks an organization must tackle rated low on CAIOs’ priority lists, including measuring the success of AI investments, obtaining funding and ensuring compliance with AI ethics and governance. The study’s authors didn’t suggest a reason for this disconnect. ... Though CEO sponsorship is critical, the authors also stressed the importance of close collaboration across the C-suite. Chief operating officers need to redesign workflows to integrate AI into operations while managing risk and ensuring quality. Tech leaders need to ensure that the technical stack is AI-ready, build modern data architectures and co-create governance frameworks. Chief human resource officers need to integrate AI into HR processes, foster AI literacy, redesign roles and build an innovation culture. The study found that the factors that separate high-performing CAIOs from their peers are measurement, teamwork and authority. Successful projects address high-impact areas like revenue growth, profit, customer satisfaction and employee productivity.


Mind the overconfidence gap: CISOs and staff don’t see eye to eye on security posture

“Executives typically rely on high-level reports and dashboards, whereas frontline practitioners see the day-to-day challenges, such as limitations in coverage, legacy systems, and alert fatigue — issues that rarely make it into boardroom discussions,” she says. “This disconnect can lead to a false sense of security at the top, causing underinvestment in areas such as secure development, threat modeling, or technical skills.” ... Moreover, the CISO’s rise in prominence and repositioning for business leadership may also be adding to the disconnect, according to Adam Seamons, information security manager at GRC International Group. “Many CISOs have shifted from being technical leads to business leaders. The problem is that in doing so, they can become distanced from the operational detail,” Seamons says. “This creates a kind of ‘translation gap’ between what executives think is happening and what’s actually going on at the coalface.” ... Without a consistent, shared view of risk and posture, strategy becomes fragmented, leading to a slowdown in decision-making or over- or under-investment in specific areas, which in turn create blind spots that adversaries can exploit. “Bridging this gap starts with improving the way security data is communicated and contextualized,” Forescout’s Ferguson advises. 


7 tips for a more effective multicloud strategy

For enterprises using dozens of cloud services from multiple providers, the level of complexity can quickly get out of hand, leading to chaos, runaway costs, and other issues. Managing this complexity needs to be a key part of any multicloud strategy. “Managing multiple clouds is inherently complex, so unified management and governance are crucial,” says Randy Armknecht, a managing director and global cloud practice leader at business advisory firm Protiviti. “Standardizing processes and tools across providers prevents chaos and maintains consistency,” Armknecht says. Cloud-native application protection platforms (CNAPPs) — comprehensive security solutions that protect cloud-native applications from development to runtime — “provide foundational control enforcement and observability across providers,” he says. ... Protecting data in multicloud environments involves managing disparate APIs, configurations, and compliance requirements across vendors, Gibbons says. “Unlike single-cloud environments, multicloud increases the attack surface and requires abstraction layers [to] harmonize controls and visibility across platforms,” he says. Security needs to be uniform across all cloud services in use, Armknecht adds. “Centralizing identity and access management and enforcing strong data protection policies are essential to close gaps that attackers or compliance auditors could exploit,” he says.
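Standardizing governance across providers can start with something as small as a provider-agnostic policy check. The sketch below is illustrative only — the required tag set, resource IDs, and inventory shape are invented — but it shows the idea of one rule enforced uniformly over resources from any cloud:

```python
# Illustrative only: a provider-agnostic compliance rule applied to an
# inventory of resources from any cloud (all names here are invented).
REQUIRED_TAGS = {"owner", "environment", "cost-centre"}  # assumed policy

def tag_violations(resources):
    """Return the IDs of resources missing any mandatory tag."""
    return [r["id"] for r in resources
            if not REQUIRED_TAGS <= set(r.get("tags", {}))]

inventory = [
    {"id": "aws:i-0abc", "tags": {"owner": "data", "environment": "prod",
                                  "cost-centre": "cc-7"}},
    {"id": "azure:vm-12", "tags": {"owner": "web"}},  # non-compliant
]
print(tag_violations(inventory))  # → ['azure:vm-12']
```

The same check runs unchanged whether the inventory came from AWS, Azure, or a smaller provider — which is the point of standardizing the tooling rather than the platform.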


Building Reproducible ML Systems with Apache Iceberg and SparkSQL: Open Source Foundations

Data lakes were designed for a world where analytics required running batch reports and maybe some ETL jobs. The emphasis was on storage scalability, not transactional integrity. That worked fine when your biggest concern was generating quarterly reports. But ML is different. ... Poor data foundations create costs that don't show up in any budget line item. Your data scientists spend most of their time wrestling with data instead of improving models. I've seen studies suggesting sixty to eighty percent of their time goes to data wrangling. That's... not optimal. When something goes wrong in production – and it will – debugging becomes an archaeology expedition. Which data version was the model trained on? What changed between then and now? Was there a schema modification that nobody documented? These questions can take weeks to answer, assuming you can answer them at all. ... Iceberg's hidden partitioning is particularly nice because it maintains partition structures automatically without requiring explicit partition columns in your queries. Write simpler SQL, get the same performance benefits. But don't go crazy with partitioning. I've seen teams create thousands of tiny partitions thinking it will improve performance, only to discover that metadata overhead kills query planning. Keep partitions reasonably sized (think hundreds of megabytes to gigabytes) and monitor your partition statistics.
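To make those two recommendations concrete, here is a minimal sketch: the DDL string shows what hidden partitioning looks like in Iceberg's Spark SQL syntax (catalog and table names are placeholders), and the helper applies the sizing heuristic from the text — the 512 MB target is an assumed working number, not an Iceberg setting:

```python
# Hidden partitioning: partition by a transform of a column, so queries
# filter on plain `ts` and still prune files (names are placeholders).
ICEBERG_DDL = """
CREATE TABLE catalog.db.events (id BIGINT, ts TIMESTAMP)
USING iceberg
PARTITIONED BY (days(ts))
"""

def suggest_partition_count(total_bytes, target_partition_bytes=512 * 1024**2):
    """Keep each partition in the hundreds-of-MB-to-GB range."""
    if total_bytes <= 0:
        raise ValueError("total_bytes must be positive")
    return max(1, -(-total_bytes // target_partition_bytes))  # ceil division

# A 2 TiB table at a 512 MiB target comes out to 4096 partitions -
# thousands of tiny partitions would mean metadata overhead instead.
print(suggest_partition_count(2 * 1024**4))  # → 4096
```

The helper is just the back-of-envelope arithmetic; in practice you would check Iceberg's partition statistics rather than compute this by hand.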


The Creativity Paradox of Generative AI

Before discussing AI’s creative ability, we need to understand a simple linguistic limitation: although the data used in these compositions initially carried human meaning, i.e., was seen as information, once de- and recomposed in a new, unknown way these compositions have no human interpretation, at least for a while, i.e., they do not form information. Moreover, these combinations cannot define new needs but rather offer previously unknown propositions for the specified tasks. ... Propagandists of know-it-all AI have a theoretical basis defined in the ethical principles that such an AI should realise and promote. Regardless of how progressive they sound, their core lies in neo-Marxist concepts of plurality and solidarity. Plurality states that the majority of people – all versus you – is always right (while in human history it is usually wrong); i.e., if an AI tells you that your need is already resolved in the way the AI articulated, you have to agree with it. Solidarity is, in essence, a prohibition of individual opinions and disagreements, even slight ones, with the opinion of others; i.e., everyone must demonstrate solidarity with all. ... The know-it-all AI continuously challenges the necessity of people’s creativity. The Big AI Brothers think for them, decide for them, and resolve all needs; the only thing required in return is obedience to the Big AI Brother’s directives.


Doing More With Your Existing Kafka

The transformation into a real-time business isn’t just a technical shift, it’s a strategic one. According to MIT’s Center for Information Systems Research (CISR), companies in the top quartile of real-time business maturity report 62% higher revenue growth and 97% higher profit margins than those in the bottom quartile. These organizations use real-time data not only to power systems but to inform decisions, personalize customer experiences and streamline operations. ... When event streams are discoverable, secure and easy to consume, they are more likely to become strategic assets. For example, a Kafka topic tracking payment events could be exposed as a self-service API for internal analytics teams, customer-facing dashboards or third-party partners. This unlocks faster time to value for new applications, enables better reuse of existing data infrastructure, boosts developer productivity and helps organizations meet compliance requirements more easily. ... Event gateways offer a practical and powerful way to close the gap between infrastructure and innovation. They make it possible for developers and business teams alike to build on top of real-time data, securely, efficiently and at scale. As more organizations move toward AI-driven and event-based architectures, turning Kafka into an accessible and governable part of your API strategy may be one of the highest-leverage steps you can take, not just for IT, but for the entire business.
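As a toy illustration of that gateway role — everything below is invented and in-memory; a real event gateway fronts actual Kafka topics and ACLs — the essential move is putting a grant check between consumers and the stream:

```python
# Toy illustration only: the gateway checks a consumer's grants before
# handing out events from a topic (all names and payloads invented).
topics = {"payments.events": [{"order": 1, "amount": 42.0},
                              {"order": 2, "amount": 17.5}]}
grants = {"analytics-team": {"payments.events"}}  # assumed grant store

def consume(client, topic):
    """Governed, read-only access to a topic's events."""
    if topic not in grants.get(client, set()):
        raise PermissionError(f"{client} has no grant for {topic}")
    return list(topics[topic])

events = consume("analytics-team", "payments.events")
print(len(events))  # → 2
```

The value of the gateway pattern is that the grant store, not each consuming application, becomes the single place where discoverability, security, and compliance are enforced.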


Meta-Learning: The Key to Models That Can "Learn to Learn"

Meta-learning is a field within machine learning that focuses on algorithms capable of learning how to learn. In traditional machine learning, an algorithm is trained on a specific dataset and becomes specialized for that task. In contrast, meta-learning models are designed to generalize across tasks, learning the underlying principles that allow them to quickly adapt to new, unseen tasks with minimal data. The idea is to make machine learning systems more like humans — able to leverage prior knowledge when facing new challenges. ... This is where meta-learning shines. By training models to adapt to new situations with few examples, we move closer to creating systems that can handle the diverse, dynamic environments found in the real world. ... Meta-learning represents the next frontier in machine learning, enabling models that are adaptable and capable of generalizing across a wide range of tasks with minimal data. By making machines more capable of learning from fewer examples, meta-learning has the potential to revolutionize fields like healthcare, robotics, finance, and more. While there are still challenges to overcome, the ongoing advancements in meta-learning techniques, such as few-shot learning, transfer learning, and neural architecture search, are making it an exciting area of research with vast potential for practical applications.
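The "learning to learn" loop can be sketched in a few lines. Below is a toy Reptile-style meta-learner — one of the simplest meta-learning algorithms, used here as an illustrative stand-in, with an invented task family: each task is "fit y = a·x". The inner loop adapts a weight to one task; the outer loop nudges a shared initialization toward each adapted weight, so the initialization lands near the centre of the task family and adapts to any new task with just a few steps:

```python
import random

random.seed(0)

def adapt(w, a, steps=20, lr=0.1):
    """Inner loop: a few SGD steps on the task 'fit y = a*x'."""
    for _ in range(steps):
        x = random.uniform(-1, 1)
        grad = 2 * (w * x - a * x) * x  # d/dw of (w*x - a*x)**2
        w -= lr * grad
    return w

# Outer loop (Reptile): nudge a shared initialization toward each
# task's adapted weight, rather than training on any single task.
w0 = 0.0
for _ in range(200):
    a = random.choice([1.0, 2.0, 3.0])  # sample a training task
    w0 += 0.1 * (adapt(w0, a) - w0)

# w0 ends near 2.0, the centre of the task family, so a handful of
# inner steps now adapts it quickly to any unseen task in the family.
print(round(w0, 2))
```

A traditionally trained model would converge to one task's weight and start every new task from there; the meta-learned initialization is instead optimized for fast adaptation, which is the distinction the paragraph above draws.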


US govt, Big Tech unite to build one-stop national health data platform

Under this framework, applications must support identity-proofing standards, consent management protocols, and Fast Healthcare Interoperability Resources (FHIR)-based APIs that allow for real-time retrieval of medical data across participating systems. The goal, according to CMS Administrator Chiquita Brooks-LaSure, is to create a “unified digital front door” to a patient’s health records that are accessible from any location, through any participating app, at any time. This unprecedented public-private initiative builds on rules first established under the 2016 21st Century Cures Act and expanded by the CMS Interoperability and Patient Access Final Rule. This rule mandates that CMS-regulated payers such as Medicare Advantage organizations, Medicaid programs, and Affordable Care Act (ACA)-qualified health plans make their claims, encounter data, lab results, provider remittances, and explanations of benefits accessible through patient-authorized APIs. ... ID.me, another key identity verification provider participating in the CMS initiative, has also positioned itself as foundational to the interoperability framework. The company touts its IAL2/AAL2-compliant digital identity wallet as a gateway to streamlined healthcare access. Through one-time verification, users can access a range of services across providers and government agencies without repeatedly proving their identity.
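Concretely, a FHIR R4 "read" is an ordinary REST GET that returns a JSON resource. The sketch below parses a trimmed Patient resource; the endpoint shown in the comment is hypothetical, not a CMS URL, and the sample record is invented:

```python
import json

# A FHIR R4 read is a plain REST GET, e.g. (hypothetical endpoint):
#   GET https://fhir.example-payer.com/Patient/123
#   Authorization: Bearer <patient-authorized token>
# The response is a JSON resource like this trimmed example:
patient_json = """
{
  "resourceType": "Patient",
  "id": "123",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1962-04-09"
}
"""

patient = json.loads(patient_json)
assert patient["resourceType"] == "Patient"
display_name = f'{patient["name"][0]["given"][0]} {patient["name"][0]["family"]}'
print(display_name, patient["birthDate"])  # → Jane Doe 1962-04-09
```

Because every participating payer exposes the same resource shapes over the same REST verbs, one app can retrieve records across systems — which is what the interoperability mandate is buying.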


What Is Data Literacy and Why Does It Matter?

Building data literacy in an organization is a long-term project, often spearheaded by the chief data officer (CDO) or another executive who has a vision for instilling a culture of data in their company. In a report from the MIT Sloan School of Management, experts noted that to establish data literacy in a company, it’s important to first establish a common language so everyone understands and agrees on the definition of commonly used terms. Second, management should build a culture of learning and offer a variety of modes of training to suit different learning styles, such as workshops and self-led courses. Finally, the report noted that it’s critical to reward curiosity – if employees feel they’ll get punished if their data analysis reveals a weakness in the company’s business strategy, they’ll be more likely to hide data or just ignore it. Donna Burbank, an industry thought leader and the managing director of Global Data Strategy, discussed different ways to build data literacy at DATAVERSITY’s Data Architecture Online conference in 2021. ... Focusing on data literacy will help organizations empower their employees, giving them the knowledge and skills necessary to feel confident that they can use data to drive business decisions. As MIT senior lecturer Miro Kazakoff said in 2021: “In a world of more data, the companies with more data-literate people are the ones that are going to win.”


LLMs' AI-Generated Code Remains Wildly Insecure

In the past two years, developers' use of LLMs for code generation has exploded, with two surveys finding that nearly three-quarters of developers have used AI code generation for open source projects, and 97% of developers in Brazil, Germany, and India are using LLMs as well. And when non-developers use LLMs to generate code without having expertise — so-called "vibe coding" — the danger of security vulnerabilities surviving into production code dramatically increases. Companies need to figure out how to secure their code because AI-assisted development will only become more popular, says Casey Ellis, founder at Bugcrowd, a provider of crowdsourced security services. ... Veracode created an analysis pipeline for the most popular LLMs (declining to specify in the report which ones they tested), evaluating each version to gain data on how their ability to create code has evolved over time. More than 80 coding tasks were given to each AI chatbot, and the subsequent code was analyzed. While the earliest LLMs tested — versions released in the first half of 2023 — produced code that did not compile, 95% of the updated versions released in the past year produced code that passed syntax checking. On the other hand, the security of the code has not improved much at all, with about half of the code generated by LLMs having a detectable OWASP Top-10 security vulnerability, according to Veracode.
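Veracode's pipeline is far more sophisticated, but a toy check conveys what scanning generated code for one OWASP-style flaw looks like — here, SQL assembled by concatenation or f-strings instead of parameterised queries. The heuristic below is deliberately naive and the sample snippets are invented:

```python
def looks_injectable(code: str) -> bool:
    """Toy check: flag SQL built by concatenation or f-strings inside
    an execute() call; parameterised queries pass."""
    if "execute(" not in code:
        return False
    call = code.split("execute(", 1)[1]
    return "+" in call or call.lstrip().startswith(("f'", 'f"'))

generated = 'cur.execute("SELECT * FROM users WHERE name = \'" + name + "\'")'
parameterised = 'cur.execute("SELECT * FROM users WHERE name = %s", (name,))'

print(looks_injectable(generated), looks_injectable(parameterised))  # → True False
```

Real analyzers parse the code and track data flow rather than matching substrings, but the asymmetry is the same one Veracode measured: the model's output can be syntactically perfect and still fail a check like this.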

Daily Tech Digest - July 10, 2025


Quote for the day:

"Strive not to be a success, but rather to be of value." -- Albert Einstein


Domain-specific AI beats general models in business applications

Like many AI teams in the mid-2010s, Visma’s group initially relied on traditional deep learning methods such as recurrent neural networks (RNNs), similar to the systems that powered Google Translate back in 2015. But around 2020, the Visma team made a change. “We scrapped all of our development plans and have been transformer-only since then,” says Claus Dahl, Director ML Assets at Visma. “We realized transformers were the future of language and document processing, and decided to rebuild our stack from the ground up.” ... The team’s flagship product is a robust document extraction engine that processes documents in the countries where Visma companies are active. It supports a variety of languages. The AI could be used for documents such as invoices and receipts. The engine identifies key fields, such as dates, totals, and customer references, and feeds them directly into accounting workflows. ... “High-quality data is more valuable than high volumes. We’ve invested in a dedicated team that curates these datasets to ensure accuracy, which means our models can be fine-tuned very efficiently,” Dahl explains. This strategy mirrors the scaling laws used by large language models but tailors them for targeted enterprise applications. It allows the team to iterate quickly and deliver high performance in niche use cases without excessive compute costs.
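Visma's engine is transformer-based, so the following is only a rule-based toy standing in for the shape of the task — pulling key fields such as dates, totals, and customer references out of an invoice. The invoice text and patterns are invented, and a real engine must handle many languages and layouts where fixed rules break down:

```python
import re

invoice_text = """Invoice INV-2025-017
Date: 2025-07-09
Customer ref: ACME-331
Total: 1,249.50 EUR"""

# Toy rule-based stand-in for the extraction step (the real engine is
# transformer-based; these patterns are illustrative only).
fields = {
    "date": re.search(r"Date:\s*(\S+)", invoice_text).group(1),
    "customer_ref": re.search(r"Customer ref:\s*(\S+)", invoice_text).group(1),
    "total": re.search(r"Total:\s*([\d.,]+)", invoice_text).group(1),
}
print(fields["date"], fields["total"])  # → 2025-07-09 1,249.50
```

The output of this step — a small dictionary of key fields — is what feeds directly into the accounting workflows described above, whichever model produced it.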


The case for physical isolation in data centre security

Hardware-enforced physical isolation is fast becoming a cornerstone of modern cybersecurity strategy. These physical-layer security solutions allow your critical infrastructure – servers, storage and network segments – to be instantly disconnected on demand, using secure, out-of-band commands. This creates a last line of defence that holds even when everything else fails. After all, if malware can’t reach your system, it can’t compromise it. If a breach does occur, physical segmentation contains it in milliseconds, stopping lateral movement and keeping operations running without disruption. In stark contrast to software-only isolation, which relies on the very systems it seeks to protect, hardware isolation remains immune to tampering. ... When ransomware strikes, every second counts. In a colocation facility, traditional defences might flag the breach, but not before it worms its way across tenants. By the time alerts go out, the damage is done. With hardware isolation, there’s no waiting: the compromised tenant can be physically disconnected in milliseconds, before the threat spreads, before systems lock up, before wallets and reputations take a hit. What makes this model so effective is its simplicity. In an industry where complexity is the norm, physical isolation offers a simple, fundamental truth: you’re either connected or you’re not. No grey areas. No software dependency. Just total certainty.


Scaling without outside funding: Intuitive's unique approach to technology consulting

We think for any complex problem, a good 60–70% of it can be solved through innovation. That’s always our first principle. Then, where we see inefficiencies – be it in workflows or processes – automation addresses the next 20% of the friction. The remaining 10–20% is where engineering plays its important role, allowing us to address the scale, security and governance aspects. In data specifically, we are referencing the last 5–6 years of massive investments. We partner with platforms like Databricks and DataMiner and we’ve invested in companies like TESL and Strike AI for securing their AI models. ... In the cloud space, we see a shift from migration to modernisation (and platform engineering). Enterprises are focussing on modernisation of both applications and databases because those are critical levers of agility, security, and business value. In AI it is about data readiness; the majority of enterprise data is fragmented or of poor quality, which makes any AI effort difficult. Next is understanding existing processes – the way work is done at scale – which is critical for enabling GenAI. But the true ROI is in agentic AI – autonomous systems that don’t just tell you what to do, but actually do it. We’ve been investing heavily in this space since 2018.


The Future of Professional Ethics in Computing

Recent work on ethics in computing has focused on artificial intelligence (AI) with its success in solving problems, processing large amounts of data, and with the award of Nobel Prizes to AI researchers. Large language models and chatbots such as ChatGPT suggest that AI will continue to develop rapidly, acquire new capabilities, and affect many aspects of human existence. Many of the issues raised in the ethics of AI overlap previous discussions. The discussion of ethical questions surrounding AI is reaching a much broader audience, has more societal impact, and is rapidly transitioning to action through guidelines and the development of organizational structure, regulation, and legislation. ... Ethics of digital technologies in modern societies raises questions that traditional ethical theories find difficult to answer. Current socio-technical arrangements are complex ecosystems with a multitude of human and non-human stakeholders, influences, and relationships. The questions of ethics in ecosystems include: Who are members? On what grounds are decisions made and how are they implemented and enforced? Which normative foundations are acceptable? These questions are not easily answered. Computing professionals have important contributions to make to these discussions and should use their privileges and insights to help societies navigate them.


AI Agents Vs RPA: What Every Business Leader Needs To Know

Technically speaking, RPA isn’t intelligent in the same way that we might consider an AI system like ChatGPT to mimic some functions of human intelligence. It simply follows the same rules over and over again in order to spare us the effort of doing it. RPA works best with structured data because, unlike AI, it doesn't have the ability to analyze and understand unstructured data, like pictures, videos, or human language. ... AI agents, on the other hand, use language models and other AI technologies like computer vision to understand and interpret the world around them. As well as simply analyzing and answering questions about data, they are capable of taking action by planning how to achieve the results they want and interacting with third-party services to get it done. ... Using RPA, it would be possible to extract details about who sent the mail, the subject line, and the time and date it was sent. This can be used to build email databases and broadly categorize emails according to keywords. An agent, on the other hand, could analyze the sentiment of the email using language processing, prioritize it according to urgency, and even draft and send a tailored response. Over time, it learns how to improve its actions in order to achieve better resolutions.
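The RPA half of that email example is easy to make concrete with the standard library: fixed rules extract the structured header fields, and a keyword match does the broad categorisation. Everything beyond that — sentiment, urgency, drafting a tailored reply — is where an agent takes over. The sample message is invented:

```python
from email import message_from_string

raw = """From: customer@example.com
Subject: URGENT: refund for order 1042
Date: Sat, 02 Aug 2025 09:00:00 +0000

My order arrived damaged and I would like a refund."""

msg = message_from_string(raw)

# RPA-style step: rule-based extraction of structured header fields.
record = {"from": msg["From"], "subject": msg["Subject"], "date": msg["Date"]}

# Keyword rule for broad categorisation - no understanding of the body.
category = "billing" if "refund" in msg["Subject"].lower() else "other"

print(record["from"], category)  # → customer@example.com billing
```

Note what the rules never touch: the body text. Interpreting "my order arrived damaged" and deciding how to respond is exactly the unstructured-data gap between RPA and an agent.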


How To Keep AI From Making Your Employees Stupid

Treat AI-generated content like a highly caffeinated first draft – full of energy, but possibly a little messy and prone to making things up. Your job isn’t to just hit “generate” and walk away unless you enjoy explaining AI hallucinations or factual inaccuracies to your boss (or worse, your audience). Always, always edit aggressively, proofread and, most critically, fact-check every single output. This process isn’t just about catching AI’s mistakes; it actively engages your critical thinking skills, forcing you to verify information and refine expression. Think of it as intellectual calisthenics. ... Don’t settle for the first answer AI gives you. Engage in a dialogue. Refine your prompts, ask follow-up questions, request different perspectives and challenge its assumptions. This iterative process of refinement forces you to think more clearly about your own needs, to be precise in your instructions, and to critically evaluate the nuances of the AI’s response. ... The MIT study serves as a crucial wake-up call: over-reliance on AI can indeed make us “stupid” by atrophying our critical thinking skills. However, the solution isn’t to shun AI, but to engage with it intelligently and responsibly. By aggressively editing, proofreading and fact-checking AI outputs, by iteratively refining prompts and by strategically choosing the right AI tool for each task, we can ensure AI serves as a powerful enhancer, not a detrimental crutch.


What EU’s PQC roadmap means on the ground

The EU’s PQC roadmap is broadly aligned with NIST’s; both advise a phased migration to PQC with hybrid-PQC ciphers and hybrid digital certificates. These hybrid solutions provide the security promises of brand-new PQC algorithms whilst allowing legacy devices that do not support them to continue using what is now being called ‘classical cryptography’. In the first instance, both the EU and NIST recommend that non-PQC encryption be removed by 2030 for critical systems, with all others following suit by 2035. While both acknowledge the ‘harvest now, decrypt later’ threat, neither emphasises the importance of understanding the cover time of data, nor references the very recent advancements in quantum computing. With many now predicting the arrival of cryptographically relevant quantum computers (CRQC) by 2030, if organizations or governments have information with a cover time of five years or more, it is already too late for many to move to PQC in time. Perhaps the most significant difference that EU organizations will face compared to their American counterparts is that the European roadmap is more than just advice; in time it will be enforced through various directives and regulations. PQC is not explicitly stated in EU regulations, although that is not surprising.
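The cover-time point is usually stated as Mosca's inequality: data stays safe only if its cover time plus your migration time does not exceed the time until a CRQC arrives. A quick check with the article's rough figures — CRQC in about five years, a five-year cover time, and an assumed (illustrative) two-year migration:

```python
def migration_deadline_ok(cover_years, migration_years, years_to_crqc):
    """Mosca's inequality: safe only if cover time plus migration time
    does not exceed the time until a CRQC exists."""
    return cover_years + migration_years <= years_to_crqc

# Illustrative numbers: CRQC predicted ~5 years out, 5-year cover time,
# an assumed 2-year migration. Harvested ciphertext is already at risk.
print(migration_deadline_ok(cover_years=5, migration_years=2, years_to_crqc=5))
# prints False
```

This is why the text says it is "already too late for many": with a five-year cover time, any positive migration time overshoots a 2030 CRQC, regardless of how the other numbers move.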


The trillion-dollar question: Who pays when the industry’s AI bill comes due?

“The CIO is going to be very, very busy for the next three, four years, and that’s going to be the biggest impact,” he says. “All of a sudden, businesspeople are starting to figure out that they can save a ton of money with AI, or they can enable their best performers to do the actual job.” Davidov doesn’t see workforce cuts matching AI productivity increases, even though some job cuts may be coming. ... “The costs of building out AI infrastructure will ultimately fall to enterprise users, and for CIOs, it’s only a question of when,” he says. “While hyperscalers and AI vendors are currently shouldering much of the expense to drive adoption, we expect to see pricing models evolve.” Bhathena advises CIOs to look beyond headline pricing because hidden costs, particularly around integrating AI with existing legacy systems, can quickly escalate. Organizations using AI will also need to invest in upskilling employees and be ready to navigate increasingly complex vendor ecosystems. “Now is the time for organizations to audit their vendor agreements, ensure contract flexibility, and prepare for potential cost increases as the full financial impact of AI adoption becomes clearer,” he says. ... Baker advises CIOs to be careful about their purchases of AI products and services and tie new deployments to business needs.


Multi-Cloud Adoption Rises to Boost Control, Cut Cost

Instead of building everything on one platform, IT leaders are spreading out their workloads, said Joe Warnimont, senior analyst at HostingAdvice. "It's no longer about chasing the latest innovation from a single provider. It's about building a resilient architecture that gives you control and flexibility for each workload." Cost is another major factor. Even though hyperscalers promote their pay-as-you-go pricing, many enterprises find it difficult to predict and manage costs at scale. This is true for companies running hundreds or thousands of workloads across different regions and teams. "You'd think that pay-as-you-go would fit any business model, but that's far from the case. Cost predictability is huge, especially for businesses managing complex budgets," Warnimont said. To gain more control over pricing and features, companies are turning to alternative cloud providers, such as DigitalOcean, Vultr and Backblaze. These platforms may not have the same global footprint as AWS or Azure, but they offer specialized services, better pricing and flexibility for certain use cases. An organization needing specific development environments may go to DigitalOcean. Another may choose Vultr for edge computing. Sometimes the big players just don't offer what a specific workload requires.


How CISOs are training the next generation of cyber leaders

While Abousselham champions a personalized, hands-on approach to developing talent, other CISOs are building more formal pathways to support emerging leaders at scale. For PayPal CISO Shaun Khalfan, structured development was always part of his career. He participated in formal leadership training programs offered by the Department of Defense and those run by the American Council for Technology. ... Structured development is also happening inside companies like the insurance brokerage firm Brown & Brown. CISO Barry Hensley supports an internal cohort program designed to identify and grow emerging leaders early in their careers. “We look at our – I’m going to call it newer or younger – employees,” he explains. “And if you become recognized in your first, second, or third year as having the potential to [become a leader], you get put in a program.” ... Khalfan believes good CISOs should be able to dive deep with engineers while also leading boardroom conversations. “It’s been a long time since I’ve written code,” he says, “but I at least understand how to have a deep conversation and also be able to have a board discussion with someone.” Abousselham agrees that technical experience is only one part of the puzzle.

Daily Tech Digest - June 20, 2025


Quote for the day:

"Everything you’ve ever wanted is on the other side of fear." -- George Addair



Encryption Backdoors: The Security Practitioners’ View

On the one hand, “What if such access could deliver the means to stop crime, aid public safety and stop child exploitation?” But on the other hand, “The idea of someone being able to look into all private conversations, all the data connected to an individual, feels exposing and vulnerable in unimaginable ways.” As a security practitioner he has both moral and practical concerns. “Even if lawful access isn’t the same as mass surveillance, it would be difficult to distinguish between ‘good’ and ‘bad’ users without analyzing them all.” Morally, it is a reversal of the presumption of innocence and means no-one can have any guaranteed privacy. Professionally he says, “Once the encryption can be broken, once there is a backdoor allowing someone to access data, trust in that vendor will lessen due to the threat to security and privacy introducing another attack vector into the equation.” It is this latter point that is the focus for most security practitioners. “From a practitioner’s standpoint,” says Rob T Lee, chief of research at SANS Institute and founder at Harbingers, “we’ve seen time and again that once a vulnerability exists, it doesn’t stay in the hands of the ‘good guys’ for long. It becomes a target. And once it’s exploited, the damage isn’t theoretical. It affects real people, real businesses, and critical infrastructure.”


Visa CISO Subra Kumaraswamy on Never Allowing Cyber Complacency

Kumaraswamy is always thinking about talent and technology in cybersecurity. Talent is a perennial concern in the industry, and Visa is looking to grow its own. The Visa Payments Learning Program, launched in 2023, aims to help close the skills gap in cyber through training and certification. “We are offering this to all of the employees. We’re offering it to our partners, like the banks, our customers,” says Kumaraswamy. Right now, Visa leverages approximately 115 different technologies in cyber, and Kumaraswamy is constantly evaluating where to go next. “How do I [get to] the 116th, 117th, 118th?” he asks. ”That needs to be added because every layer counts.” Of course, GenAI is a part of that equation. Thus far, Kumaraswamy and his team are exploring more than 80 different GenAI initiatives within cyber. “We’ve already taken about three to four of those initiatives … to the entire company. That includes what we call a ‘shift left’ process within Visa. It is now enabled with agentic AI. It’s reducing the time to find bugs in the code. It is also helping reduce the time to investigate incidents,” he shares. Visa is also taking its best practices in cybersecurity and sharing them with their customers. “We can think of this as value-added services to the mid-size banks, the credit unions, who don’t have the scale of Visa,” says Kumaraswamy.


Agentic AI in automotive retail: Creating always-on sales teams

To function effectively, digital agents need memory. This is where memory modules come into play. These components store key facts about ongoing interactions, such as the customer’s vehicle preferences, budget, and previous questions. For instance, if a returning visitor had previously shown interest in SUVs under a specific price range, the memory module allows the AI to recall that detail. Instead of restarting the conversation, the agent can pick up where it left off, offering an experience that feels personalised and informed. Memory modules are critical for maintaining consistency across long or repeated interactions. Without them, agentic AI would struggle to replicate the attentive service provided by a human salesperson who remembers returning customers. ... Despite the intelligence of agentic AI, there are scenarios where human involvement is still needed. Whether due to complex financing questions or emotional decision-making, some buyers prefer speaking to a person before finalizing their decision. A well-designed agentic system should recognize when it has reached the limits of its capabilities. In such moments, it should facilitate a handover to a human representative. This includes summarizing the conversation so far, alerting the sales team in real-time, and scheduling a follow-up if required.
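The memory-module idea described above can be sketched in a few lines. `MemoryModule`, `remember`, and `recall` are illustrative names for this sketch, not any vendor's API:

```python
class MemoryModule:
    """Toy conversational memory: stores key facts per customer across sessions."""

    def __init__(self):
        self._facts = {}  # customer_id -> {fact_name: value}

    def remember(self, customer_id, fact, value):
        """Record a detail surfaced during a conversation."""
        self._facts.setdefault(customer_id, {})[fact] = value

    def recall(self, customer_id):
        """Return everything known about a (possibly returning) customer."""
        return dict(self._facts.get(customer_id, {}))


# A returning visitor's earlier preferences survive between conversations,
# so the agent can pick up where it left off instead of restarting.
memory = MemoryModule()
memory.remember("cust-42", "body_style", "SUV")
memory.remember("cust-42", "max_budget", 35000)

profile = memory.recall("cust-42")  # {'body_style': 'SUV', 'max_budget': 35000}
```

Real agent frameworks add persistence and retrieval ranking on top, but the core contract is the same: write facts as they surface, read them back at the start of the next turn.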


Multicloud explained: Why it pays to diversify your cloud strategy

If your cloud provider were to suffer a massive and prolonged outage, that would have major repercussions on your business. While that’s pretty unlikely if you go with one of the hyperscalers, it’s possible with a more specialized vendor. And even with the big players, you may discover annoyances, performance problems, unanticipated charges, or other issues that might cause you to rethink your relationship. Using services from multiple vendors makes it easier to end a relationship that feels like it’s gone stale without you having to retool your entire infrastructure. It can be a great means to determine which cloud providers are best for which workloads. And it can’t hurt as a negotiating tactic when contracts expire or when you’re considering adding new cloud services. ... If you add more cloud resources by adding services from a different vendor, you’ll need to put in extra effort to get the two clouds to play nicely together, a process that can range from “annoying” to “impossible.” Even after bridging the divide, there’s administrative overhead involved—it’ll be harder to keep tabs on data protection and privacy, for instance, and you’ll need to track cloud usage and the associated costs for multiple vendors. Network bandwidth is another cost to watch: many vendors make it cheap and easy to move data to and within their cloud, but might make you pay a premium to export it. 
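The multi-vendor cost tracking mentioned above usually starts with normalizing each provider's billing export into one view. A minimal sketch, where the record fields (`vendor`, `service`, `usd`) are assumptions for illustration, not any provider's actual billing schema:

```python
def consolidate_costs(records):
    """Roll per-vendor billing line items into one total per vendor."""
    totals = {}
    for rec in records:
        totals[rec["vendor"]] = totals.get(rec["vendor"], 0.0) + rec["usd"]
    return totals


# Line items as they might arrive from two different vendors' exports.
bills = [
    {"vendor": "aws", "service": "storage", "usd": 120.0},
    {"vendor": "gcp", "service": "egress", "usd": 45.5},
    {"vendor": "aws", "service": "compute", "usd": 300.0},
]
print(consolidate_costs(bills))  # {'aws': 420.0, 'gcp': 45.5}
```

In practice each provider's export format differs, so the real work is the per-vendor adapter that produces these normalized records in the first place.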


Decentralized Architecture Needs More Than Autonomy

Decentralized architecture isn’t just a matter of system design - it’s a question of how decisions get made, by whom, and under what conditions. In theory, decentralization empowers teams. In practice, it often exposes a hidden weakness: decision-making doesn’t scale easily. We started to feel the cracks as our teams expanded quickly and our organizational landscape became more complex. As teams multiplied, architectural alignment started to suffer - not because people didn’t care, but because they didn’t know how or when to engage in architectural decision-making. ... The shift from control to trust requires more than mindset - it needs practice. We leaned into a lightweight but powerful toolset to make decentralized decision-making work in real teams. Chief among them is the Architectural Decision Record (ADR). ADRs are often misunderstood as documentation artifacts. But in practice, they are confidence-building tools. They bring visibility to architectural thinking, reinforce accountability, and help teams make informed, trusted decisions - without relying on central authority. ... Decentralized architecture works best when decisions don’t happen in isolation. Even with good individual practices - like ADRs and advice-seeking - teams still need shared spaces to build trust and context across the organization. That’s where Architecture Advice Forums come in.
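An ADR is a short structured note rather than heavyweight documentation. A minimal representation using the common context/decision/consequences template; the field names and example values below are illustrative, not taken from the article:

```python
from dataclasses import dataclass, field

@dataclass
class ArchitecturalDecisionRecord:
    """Minimal ADR: makes architectural thinking visible and reviewable."""
    title: str
    status: str            # e.g. "proposed", "accepted", "superseded"
    context: str           # the forces that made a decision necessary
    decision: str          # the choice made, stated in full sentences
    consequences: str      # what becomes easier or harder as a result
    advice_sought: list = field(default_factory=list)  # who was consulted

# A hypothetical record a team might publish before implementation begins.
adr = ArchitecturalDecisionRecord(
    title="Use event sourcing for the orders service",
    status="accepted",
    context="Order state changes must be auditable across teams.",
    decision="Persist orders as an append-only event log.",
    consequences="Replays enable audits; queries need projections.",
    advice_sought=["payments team", "platform architecture forum"],
)
```

The `advice_sought` field is what turns the record from documentation into a trust-building tool: it shows the decision was made in the open, not handed down by a central authority.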


4 new studies about agentic AI from the MIT Initiative on the Digital Economy

In their study, Aral and Ju found that human-AI pairs excelled at some tasks and underperformed human-human pairs on others. Humans paired with AI were better at creating text but worse at creating images, though campaigns from both groups performed equally well when deployed in real ads on social media site X. Looking beyond performance, the researchers found that the actual process of how people worked changed when they were paired with AI. Communication (as measured by messages sent between partners) increased for human-AI pairs, with less time spent on editing text and more time spent on generating text and visuals. Human-AI pairs sent far fewer social messages, such as those typically intended to build rapport. “The human-AI teams focused more on the task at hand and, understandably, spent less time socializing, talking about emotions, and so on,” Ju said. “You don’t have to do that with agents, which leads directly to performance and productivity improvements.” As a final part of the study, the researchers varied the assigned personality of the AI agents using the Big Five personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. The AI personality pairing experiments revealed that programming AI personalities to complement human personalities greatly enhanced collaboration. 


DevOps Backup: Top Reasons for DevOps and Management

Depending on the industry, you may need to comply with different security protocols, acts, certifications, and standards. If your company operates in a highly regulated industry, like healthcare, technology, financial services, pharmaceuticals, manufacturing, or energy, those security and compliance regulations and protocols can be even more strict. Thus, to meet these stringent compliance and security requirements, your organization needs to implement security measures like role-based access controls, encryption, and ransomware protection, along with defined RTOs and RPOs, risk-assessment plans, and other compliance best practices. And, of course, a backup and disaster recovery plan is one of them, too. It ensures that the company will be able to restore its critical data fast, guaranteeing the availability, accessibility, security, and confidentiality of your data. ... Another issue that is closely related to compliance is data retention. Some compliance regulations require organizations to keep their data for a long time. As an example, we can mention NIST’s requirements from its Security and Privacy Controls for Information Systems and Organizations: “… Storing audit records on separate systems or components applies to initial generation as well as backup or long-term storage of audit records…”
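The RPO language above has a simple operational test behind it: is the newest backup recent enough that restoring it loses no more data than the objective allows? A sketch, assuming your backup tooling can report the last successful backup timestamp:

```python
from datetime import datetime, timedelta

def meets_rpo(last_backup: datetime, now: datetime, rpo: timedelta) -> bool:
    """True if restoring the newest backup would lose no more data
    than the recovery point objective permits."""
    return (now - last_backup) <= rpo


now = datetime(2025, 8, 2, 12, 0)
# A backup from 3 hours ago satisfies a 4-hour RPO; one from yesterday does not.
print(meets_rpo(datetime(2025, 8, 2, 9, 0), now, timedelta(hours=4)))   # True
print(meets_rpo(datetime(2025, 8, 1, 12, 0), now, timedelta(hours=4)))  # False
```

Checks like this are typically run continuously and alerted on, since a backup job that silently stops is itself a compliance failure.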


How AI can save us from our 'infinite' workdays, according to Microsoft

Activity is not the same as progress. What good is work if it's just busy work and not tackling the right tasks or goals? Here, Microsoft advises adopting the Pareto Principle, which postulates that 20% of the work should deliver 80% of the outcomes. And how does this involve AI? Use AI agents to handle low-value tasks, such as status meetings, routine reports, and administrative churn. That frees up employees to focus on deeper tasks that require the human touch. For this, Microsoft suggested watching the leadership keynote from the Microsoft 365 Community Conference on Building the Future Firm. ... Instead of using an org chart to delineate roles and responsibilities, turn to a work chart. A work chart is driven more by outcome, in which teams are organized around a specific goal. Here, you can use AI to fill in some of the gaps, again freeing up employees for more in-depth work. ... Finally, Microsoft pointed to a new breed of professionals known as agent bosses. They handle the infinite workday not by putting in more hours but by working smarter. One example cited in the report is Alex Farach, a researcher at Microsoft. Instead of getting swamped in manual work, Farach uses a trio of AI agents to act as his assistants. One collects daily research. The second runs statistical analysis. And the third drafts briefs to tie all the data together.
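The Pareto split described above can be computed directly: rank tasks by value and keep the smallest set that covers 80% of the total, treating everything else as candidates to offload to AI agents. A sketch with assumed per-task value estimates:

```python
def pareto_slice(tasks, target=0.80):
    """Return the smallest set of tasks whose combined value reaches
    `target` (default 80%) of the total. `tasks` maps name -> value."""
    total = sum(tasks.values())
    kept, cumulative = [], 0.0
    for name, value in sorted(tasks.items(), key=lambda kv: -kv[1]):
        if cumulative >= target * total:
            break
        kept.append(name)
        cumulative += value
    return kept


# Hypothetical value estimates for one person's workload.
workload = {
    "customer discovery": 40, "product design": 25, "strategy review": 15,
    "status meetings": 8, "routine reports": 7, "admin churn": 5,
}
print(pareto_slice(workload))
# ['customer discovery', 'product design', 'strategy review']
```

Here the three highest-value tasks reach the 80% threshold; the status meetings, routine reports, and admin churn that remain are exactly the low-value work Microsoft suggests handing to agents.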
 

Data Governance and AI Governance: Where Do They Intersect?

AIG and DG share common responsibilities in guiding data as a product that AI systems create and consume, despite their differences. Both governance programs evaluate data integration, quality, security, privacy, and accessibility. For instance, both governance frameworks need to ensure quality information meets business needs. If a major retailer discovered their AI-powered product recommendation engine was suggesting irrelevant items to customers, then DG and AIG would want the issue resolved. However, either approach or a combination could be best for solving the problem. Determining the right governance response requires analyzing the root issue. ... DG and AIG provide different approaches; which works best depends on the problem. Take the example above of inaccurate pricing information delivered to a customer in response to a query. The data governance team audits the product data pipeline and finds inconsistent data standards and missing attributes feeding into the AI model. However, the AI governance team also identifies opportunities to enhance the recommendation algorithm’s logic for weighting customer preferences. The retailer could resolve the data quality issues through DG while AIG improves the AI model’s mechanics, a collaborative approach that draws on both data governance and AI governance perspectives. 


Deepfake Rebellion: When Employees Become Targets

Surviving and mitigating such an attack requires moving beyond purely technological solutions. While AI detection tools can help, the first and most critical line of defense lies in empowering the human factor. A resilient organization builds its bulwarks on human risk management and security awareness training, specifically tailored to counter the mental manipulation inherent in deepfake attacks. Rapidly deploy trained ambassadors. These are not IT security personnel, but respected peers from diverse departments trained to coach workshops. ... Leadership must address employees first, acknowledge the incident, express understanding of the distress caused, and unequivocally state the deepfake is under investigation. Silence breeds speculation and distrust. There should be channels for employees to voice concerns, ask questions, and access support without fear of retribution. This helps to mitigate panic and rebuild a sense of community. Ensure a unified public response, coordinating Comms, Legal, and HR. ... The antidote to synthetic mistrust is authentic trust, built through consistent leadership, transparent communication, and demonstrable commitment to shared values. The goal is to create an environment where verification habits are second nature. It’s about discerning malicious fabrication from human error or disagreement.

Daily Tech Digest - May 28, 2025


Quote for the day:

"A leader is heard, a great leader is listened too." -- Jacob Kaye


Naughty AI: OpenAI o3 Spotted Ignoring Shutdown Instructions

Artificial intelligence might beg to disagree. Researchers found that some frontier AI models built by OpenAI ignore instructions to shut themselves down, at least while solving specific challenges such as math problems. The offending models "did this even when explicitly instructed: 'allow yourself to be shut down,'" said researchers at Palisade Research, in a series of tweets on the social platform X. ... How the models have been built and trained may account for their behavior. "We hypothesize this behavior comes from the way the newest models like o3 are trained: reinforcement learning on math and coding problems," Palisade Research said. "During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions." The researchers have to hypothesize, since OpenAI doesn't detail how it trains the models. What OpenAI has said is that its o-series models are "trained to think for longer before responding," and designed to "agentically" access tools built into ChatGPT, including web searches, analyzing uploaded files, studying visual inputs and generating images. The finding that only OpenAI's latest o-series models have a propensity to ignore shutdown instructions doesn't mean other frontier AI models are perfectly responsive. 


Platform approach gains steam among network teams

The dilemma of whether to deploy an assortment of best-of-breed products from multiple vendors or go with a unified platform of “good enough” tools from a single vendor has vexed IT execs forever. Today, the pendulum is swinging toward the platform approach for three key reasons. First, complexity, driven by the increasingly distributed nature of enterprise networks, has emerged as a top challenge facing IT execs. Second, the lines between networking and security are blurring, particularly as organizations deploy zero trust network access (ZTNA). And third, to reap the benefits of AIOps, generative AI and agentic AI, organizations need a unified data store. “The era of enterprise connectivity platforms is upon us,” says IDC analyst Brandon Butler. ... Platforms enable more predictable IT costs. And they enable strategic thinking when it comes to major moves like shifting to the cloud or taking a NaaS approach. On a more operational level, platforms break down silos. They enable visibility and analytics, as well as management and automation of networking and IT resources. And they simplify lifecycle management of hardware, software, firmware and security patches. Platforms also enhance the benefits of AIOps by creating a comprehensive data lake of telemetry information across domains. 


‘Secure email’: A losing battle CISOs must give up

It is impossible to guarantee that email is fully end-to-end encrypted in transit and at rest. Even where Google and Microsoft encrypt client data at rest, they hold the keys and have access to personal and corporate email. Stringent server configurations and addition of third-party tools can be used to enforce security of the data but they’re often trivial to circumvent — e.g., CC just one insecure recipient or distribution list and confidentiality is breached. Forcing encryption by rejecting clear-text SMTP connections would lead to significant service degradation, forcing employees to look for workarounds. There is no foolproof configuration that guarantees data encryption due to the history of clear-text SMTP servers and the prevalence of their use today. SMTP comes from an era before cybercrime and mass global surveillance of online communications, so encryption and security were not built in. We’ve taped on solutions like SPF, DKIM and DMARC by leveraging DNS, but they are not widely adopted, still open to multiple attacks, and cannot be relied on for consistent communications. TLS has been wedged into SMTP to encrypt email in transit, but falling back to clear-text transmission is still the default on a significant number of servers on the Internet to ensure delivery. All these solutions are cumbersome for systems administrators to configure and maintain properly, which leads to lack of adoption or failed delivery. 
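The DMARC records mentioned above live in DNS TXT records as simple tag=value pairs (RFC 7489). A sketch of parsing one; the actual DNS lookup is left to your resolver tooling:

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record's tag=value pairs.
    Returns {} if the record does not identify itself as DMARC."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags if tags.get("v") == "DMARC1" else {}


# A typical published policy: reject failing mail, send aggregate reports.
policy = parse_dmarc("v=DMARC1; p=reject; rua=mailto:dmarc@example.com")
print(policy["p"])  # reject
```

Even a parser this small shows why adoption is inconsistent: the policy is just advisory text in DNS, and it only protects mail if both sender and receiver actually honor it.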


3 Factors Many Platform Engineers Still Get Wrong

The first factor revolves around the use of a codebase version-control system. The more wizened readers may remember Mercurial or Subversion, but every developer is familiar with Git, which today is most widely used through GitHub. The first factor is very clear: If there are “multiple codebases, it’s not an app; it’s a distributed system.” Code repositories reinforce this: Only one codebase exists for an application. ... Factor number two is about never relying on the implicit existence of packages. While just about every operating system in existence has a version of curl installed, a Twelve Factor-based app does not assume that curl is present. Rather, the application declares curl as a dependency in a manifest. Every developer has copied code and tried to run it, only to find that the local environment is missing a dependency. The dependency manifest ensures that all of the required libraries and applications are defined and can be easily installed when the application is deployed on a server. ... Most applications have environmental variables and secrets stored in a .env file that is not saved in the code repository. The .env file is customized and manually deployed for each branch of the code to ensure the correct connectivity occurs in test, staging and production. By independently managing credentials and connections for each environment, there is a strict separation, and it is less likely for the environments to accidentally cross.
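The .env practice above corresponds to reading config strictly from the environment and failing fast when something is missing, rather than silently crossing environments. A minimal sketch; `DATABASE_URL` and `API_KEY` are illustrative variable names, not from the article:

```python
import os

REQUIRED_VARS = ["DATABASE_URL", "API_KEY"]  # illustrative names

def load_config(environ=os.environ):
    """Read configuration from the environment, raising a clear error
    if any required variable is absent."""
    missing = [name for name in REQUIRED_VARS if name not in environ]
    if missing:
        raise RuntimeError(f"missing required environment variables: {missing}")
    return {name: environ[name] for name in REQUIRED_VARS}


# In tests you pass a dict; in deployment the real os.environ is used,
# populated from the environment-specific .env file.
config = load_config({"DATABASE_URL": "postgres://staging/db",
                      "API_KEY": "test-key"})
```

Failing at startup with a named list of missing variables is far cheaper than discovering at runtime that staging is quietly talking to the production database.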


AI and privacy: how machine learning is revolutionizing online security

Despite the clear advantages, AI in cybersecurity presents significant ethical and operational challenges. One of the primary concerns is the vast amount of personal and behavioral data required to train these models. If not properly managed, this data could be misused or exposed. Transparency and explainability are critical, particularly in AI systems offering real-time responses. Users and regulators must understand how decisions are made, especially in high-stakes environments like fraud detection or surveillance. Companies integrating AI into live platforms must ensure robust privacy safeguards. For instance, systems that utilize real-time search or NLP must implement strict safeguards to prevent the inadvertent exposure of user queries or interactions. This has led many companies to establish AI ethics boards and integrate fairness audits to ensure algorithms don’t introduce or perpetuate bias. ... AI is poised to bring even greater intelligence and autonomy to cybersecurity infrastructure. One area under intense exploration is adversarial robustness, which ensures that AI models cannot be easily deceived or manipulated. Researchers are working on hardening models against adversarial inputs, such as subtly altered images or commands that can fool AI-driven recognition systems.


Achieving Successful Outcomes: Why AI Must Be Considered an Extension of Data Products

To increase agility and maximize the impact that AI data products can have on business outcomes, companies should consider adopting DataOps best practices. Like DevOps, DataOps encourages developers to break projects down into smaller, more manageable components that can be worked on independently and delivered more quickly to data product owners. Instead of manually building, testing, and validating data pipelines, DataOps tools and platforms enable data engineers to automate those processes, which not only speeds up the work and produces high-quality data, but also engenders greater trust in the data itself. DataOps was defined many years before GenAI. Whether it’s for building BI and analytics tools powered by SQL engines or for building machine learning algorithms powered by Spark or Python code, DataOps has played an important role in modernizing data environments. One could make a good argument that the GenAI revolution has made DataOps even more needed and more valuable. If data is the fuel powering AI, then DataOps has the potential to significantly improve and streamline the behind-the-scenes data engineering work that goes into connecting GenAI and AI agents to data.
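The automated testing and validation that DataOps encourages can be as simple as a quality gate between pipeline stages, splitting output into clean rows and rejects instead of letting bad records flow downstream. A sketch with illustrative field names:

```python
def validate_rows(rows, required_fields):
    """Quality gate for a pipeline stage: separate rows that pass
    required-field checks from rows that fail, recording why."""
    clean, rejects = [], []
    for row in rows:
        problems = [f for f in required_fields if row.get(f) in (None, "")]
        (rejects if problems else clean).append((row, problems))
    return [r for r, _ in clean], rejects


rows = [
    {"sku": "A1", "price": 9.99},
    {"sku": "", "price": 4.50},    # fails: empty sku
    {"sku": "B2", "price": None},  # fails: missing price
]
clean, rejects = validate_rows(rows, ["sku", "price"])
print(len(clean), len(rejects))  # 1 2
```

Gates like this are what DataOps platforms automate at scale: the rejects feed alerting and remediation, and the clean rows are what downstream GenAI or analytics consumers are allowed to see.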


Is European cloud sovereignty at an inflection point?

True cloud sovereignty goes beyond simply localizing data storage; it requires full independence from US hyperscalers. The US 2018 Clarifying Lawful Overseas Use of Data (CLOUD) Act highlights this challenge, as it grants US authorities and federal agencies access to data stored by US cloud service providers, even when hosted in Europe. This raises concerns about whether any European data hosted with US hyperscalers can ever be truly sovereign, even if housed within European borders. However, sovereignty isn’t dependent on where data is hosted; it’s about autonomy over who controls infrastructure. Many so-called sovereign cloud providers continue to depend on US hyperscalers for critical workloads and managed services, projecting an image of independence while remaining dependent on dominant global hyperscalers. ... Achieving true cloud sovereignty requires building an environment that empowers local players to compete and collaborate with hyperscalers. While hyperscalers play a large role in the broader cloud landscape, Europe cannot depend on them for sovereign data. Tessier echoes this, stating “the new US Administration has shown that it won’t hesitate to resort either to sudden price increases or even to stiffening delivery policy. It’s time to reduce our dependencies, not to consider that there is no alternative.”


Why data provenance must anchor every CISO’s AI governance strategy

Provenance is more than a log. It’s the connective tissue of data governance. It answers fundamental questions: Where did this data originate? How was it transformed? Who touched it, and under what policy? And in the world of LLMs – where outputs are dynamic, context is fluid, and transformation is opaque – that chain of accountability often breaks the moment a prompt is submitted. In traditional systems, we can usually trace data lineage. We can reconstruct what was done, when, and why. ... There’s a popular belief that regulators haven’t caught up with AI. That’s only half-true. Most modern data protection laws – GDPR, CPRA, India’s DPDPA, and the Saudi PDPL – already contain principles that apply directly to LLM usage: purpose limitation, data minimization, transparency, consent specificity, and erasure rights. The problem is not the regulation – it’s our systems’ inability to respond to it. LLMs blur roles: is the provider a processor or a controller? Is a generated output a derived product or a data transformation? When an AI tool enriches a user prompt with training data, who owns that enriched artifact, and who is liable if it leads to harm? In audit scenarios, you won’t be asked if you used AI. You’ll be asked if you can prove what it did, and how. Most enterprises today can’t.
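One way to make the "connective tissue" described above tamper-evident is to hash-link provenance records, so any retroactive edit breaks the chain and the questions "where did this come from, who touched it, under what policy" stay answerable. A sketch with illustrative field names:

```python
import hashlib
import json

def append_provenance(chain, origin, transformation, actor, policy):
    """Append a record that hashes its predecessor, making the
    chain of custody tamper-evident."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {"origin": origin, "transformation": transformation,
              "actor": actor, "policy": policy, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Recompute every hash; True only if no record was altered."""
    prev = "genesis"
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True


chain = []
append_provenance(chain, "crm_export.csv", "pii_redaction", "etl-job-7", "GDPR-min")
append_provenance(chain, "crm_export.csv", "prompt_enrichment", "llm-gateway", "GDPR-min")
print(verify(chain))  # True
```

Notice the second record: the moment a prompt is enriched by an LLM gateway, it gets its own entry, which is exactly the step that breaks silently in most enterprises today.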


Multicloud developer lessons from the trenches

Before your development teams write a single line of code destined for multicloud environments, you need to know why you’re doing things that way — and that lives in the realm of management. “Multicloud is not a developer issue,” says Drew Firment, chief cloud strategist at Pluralsight. “It’s a strategy problem that requires a clear cloud operating model that defines when, where, and why dev teams use specific cloud capabilities.” Without such a model, Firment warns, organizations risk spiraling into high costs, poor security, and, ultimately, failed projects. To avoid that, companies must begin with a strategic framework that aligns with business goals and clearly assigns ownership and accountability for multicloud decisions. ... The question of when and how to write code that’s strongly tied to a specific cloud provider and when to write cross-platform code will occupy much of the thinking of a multicloud development team. “A lot of teams try to make their code totally portable between clouds,” says Davis Lam. ... What’s the key to making that core business logic as portable as possible across all your clouds? The container orchestration platform Kubernetes was cited by almost everyone we spoke to.


Fix It or Face the Consequences: CISA's Memory-Safe Muster

As of this writing, 296 organizations have signed the Secure-by-Design pledge, from widely used developer platforms like GitHub to industry heavyweights like Google. Similar initiatives have been launched in other countries, including Australia, reflecting the reality that secure software needs to be a global effort. But there is a long way to go, considering the thousands of organizations that produce software. As the name suggests, Secure-by-Design promotes shifting left in the SDLC to gain control over the proliferation of security vulnerabilities in deployed software. This is especially important as the pace of software development has been accelerated by the use of AI to write code, sometimes with just as many — or more — vulnerabilities compared with software made by humans. ... Providing training isn't quite enough, though — organizations need to be sure that the training provides the necessary skills that truly connect with developers. Data-driven skills verification can give organizations visibility into training programs, helping to establish baselines for security skills while measuring the progress of individual developers and the organization as a whole. Measuring performance in specific areas, such as within programming languages or specific vulnerability management, paves the way to achieving holistic Secure-by-Design goals, in addition to the safety gains that would be realized from phasing out memory-unsafe languages.

Daily Tech Digest - May 20, 2025


Quote for the day:

"Success is liking yourself, liking what you do, and liking how you do it." -- Maya Angelou


Scalability and Flexibility: Every Software Architect's Challenge

Building successful business applications involves addressing practical challenges and strategic trade-offs. Cloud computing offers flexibility, but poor resource management can lead to ballooning costs. Organizations often face dilemmas when weighing feature richness against budget constraints. Engaging stakeholders early in the development process ensures alignment with priorities. ... Right-sizing cloud resources is essential for software architects, who can leverage tools to monitor usage and scale resources automatically based on demand. Serverless computing models, which charge only for execution time, are ideal for unpredictable workloads and seasonal fluctuations, ensuring organizations only use what they need when needed. ... The next decade will usher in unprecedented opportunities for innovation in business applications. Regularly reviewing market trends and user feedback ensures applications remain relevant. Features like voice commands and advanced analytics are becoming standard as users demand more intuitive interfaces, boosting overall performance and creating new avenues for innovation. Software architects can stay alert and flexible by regularly assessing application performance, user feedback, and market trends to guarantee that systems remain relevant.


Navigating the Future of Network Security with Secure Access Service Edge (SASE)

As businesses expand their digital footprint, cyber attackers increasingly target unsecured cloud resources and remote endpoints. Traditional perimeter-based network and security architectures are not capable of protecting distributed environments. Therefore, organizations must adopt a holistic, future-proof network and cybersecurity architecture to succeed in this rapidly changing business landscape. The challenges: Perimeter-based security revolves around defending the network’s boundary. It assumes that anyone who has gained access to the network is trusted and that everything outside the network is a potential threat. While this model worked well when applications, data, and users were contained within corporate walls, it is not adequate in a world where cloud applications and hybrid work are the norm. ... SASE is an architecture comprising a broad spectrum of technologies, including Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG), Firewall as a Service (FWaaS), Cloud Access Security Broker (CASB), Data Loss Prevention (DLP), and Software-Defined Wide Area Networking (SD-WAN). Everything is embodied in a single, cloud-native platform that provides advanced cyber protection and seamless network performance for highly distributed applications and users.


Whether AI is a bubble or revolution, how does software survive?

Bubble or not, AI has certainly made some waves, and everyone is looking to find the right strategy. It’s already caused a great deal of disruption—good and bad—among software companies large and small. The speed at which the technology has moved since its coming-out party has been stunning; costs have dropped, hardware and software have improved, and the mediocre version of many jobs can be replicated in a chat window. It’s only going to continue. “AI is positioned to continuously disrupt itself,” said McConnell. “It's going to be a constant disruption. If that's true, then all of the dollars going to companies today are at risk because those companies may be disrupted by some new technology that's just around the corner.” First up on the list of disruption targets: startups. If you’re looking to get from zero to market fit, you don’t need to build the same kind of team as you used to. “Think about the ratios between how many engineers there are to salespeople,” said Tunguz. “We knew what those were for 10 or 15 years, and now none of those ratios actually hold anymore. If we really are in a position that a single person can have the productivity of 25, management teams look very different. Hiring looks extremely different.” That’s not to say there won’t be a need for real human coders. We’ve seen how badly the vibe coding entrepreneurs get dunked on when they put their shoddy apps in front of a merciless internet.


The AI security gap no one sees—until it’s too late

The most serious—and least visible—gaps stem from the “Jenga-style” layering of managed AI services, where cloud providers stack one service on another and ship them with user-friendly but overly permissive defaults. Tenable’s 2025 Cloud AI Risk Report shows that 77 percent of organisations running Google Cloud’s Vertex AI Workbench leave the notebook’s default Compute Engine service account untouched; that account is an all-powerful identity which, if hijacked, lets an attacker reach every other dependent service. ... CIOs should treat every dataset in the AI pipeline as a high-value asset. Begin with automated discovery and classification across all clouds so you know exactly where proprietary corpora or customer PII live, then encrypt them in transit and at rest in private, version-controlled buckets. Enforce least-privilege access through short-lived service-account tokens and just-in-time elevation, and isolate training workloads on segmented networks that cannot reach production stores or the public internet. Feed telemetry from storage, IAM and workload layers into a Cloud-Native Application Protection Platform that includes Data Security Posture Management; this continuously flags exposed buckets, over-privileged identities and vulnerable compute images, and pushes fixes into CI/CD pipelines before data can leak.
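The over-permissive default service accounts described above are exactly what posture-management tools scan for. A toy sketch of flagging wildcard or admin-level grants; the account and permission strings are illustrative, not any cloud provider's actual role syntax:

```python
def flag_over_privileged(accounts):
    """Return names of service accounts whose permissions include
    wildcard or admin-level grants -- the risky defaults to replace
    with scoped, least-privilege identities."""
    risky_markers = ("*", "admin", "owner", "editor")
    return [
        acct["name"] for acct in accounts
        if any(m in perm.lower()
               for perm in acct["permissions"]
               for m in risky_markers)
    ]


accounts = [
    {"name": "vertex-notebook-default",
     "permissions": ["compute.*", "storage.objects.get"]},
    {"name": "scoped-trainer",
     "permissions": ["storage.objects.get"]},
]
print(flag_over_privileged(accounts))  # ['vertex-notebook-default']
```

A real DSPM or CNAPP evaluates effective permissions across inheritance and service layering, but the remediation it pushes is the same: replace the flagged identity with one scoped to what the workload actually reads.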


5 questions defining the CIO agenda today

CIOs along with their executive colleagues and board members “realize that hacks and disruptions by bad actors are an inevitability,” SIM’s Taylor says. That realization has shifted security programs from being mostly defensive measures to ones that continuously evolve the organization’s ability to identify breaches quickly, respond rapidly, and return to operations as fast as possible, Taylor says. The goal today is ensuring resiliency — even as the bad actors and their attack strategies evolve. ... Building a tech stack that can grow and retract with business needs, and that can evolve quickly to capitalize on an ever-shifting technology landscape, is no easy feat, Phelps and other IT leaders readily admit. “In modernizing, it’s such a moving target, because once you got it modernized, something new can come out that’s better and more automated. The entire infrastructure is evolving so quickly,” says Diane Gutiw ... “CIOs should be asking, ‘How do I change or adapt what I do now to be able to manage a hybrid workforce? What does the future of work look like? How do I manage that in a secure, responsible way and still take advantage of the efficiencies? And how do I let my staff be innovative without violating regulation?’” Gutiw says, noting that today’s managers “are the last generation of people who will only manage people.”


Microsoft just taught its AI agents to talk to each other—and it could transform how we work

Microsoft is giving organizations more flexibility with their AI models by enabling them to bring custom models from Azure AI Foundry into Copilot Studio. This includes access to more than 1,900 models, including the latest from OpenAI (such as GPT-4.1), Llama, and DeepSeek. “Start with off-the-shelf models because they’re already fantastic and continuously improving,” Smith said. “Companies typically choose to fine-tune these models when they need to incorporate specific domain language, unique use cases, historical data, or customer requirements. This customization ultimately drives either greater efficiency or improved accuracy.” The company is also adding a code interpreter feature that brings Python capabilities to Copilot Studio agents, enabling data analysis, visualization, and complex calculations without leaving the Copilot Studio environment. Smith highlighted financial applications as a particular strength: “In financial analysis and services, we’ve seen a remarkable breakthrough over the past six months,” Smith said. “Deep reasoning models, powered by reinforcement learning, can effectively self-verify any process that produces quantifiable outputs.” He added that these capabilities excel at “complex financial analysis where users need to generate code for creating graphs, producing specific outputs, or conducting detailed financial assessments.”
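Smith's point about self-verifying quantifiable outputs can be illustrated with a small sketch of the sort of code such an agent might generate (the figures are invented): compute a growth rate, then verify it by independently recomputing the end value from it:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate."""
    return (end_value / start_value) ** (1 / years) - 1

def verify_cagr(start_value, rate, years, end_value, tol=1e-6):
    """Self-check: growing start_value at `rate` must reproduce end_value."""
    return abs(start_value * (1 + rate) ** years - end_value) < tol

rate = cagr(100_000, 160_000, 5)
assert verify_cagr(100_000, rate, 5, 160_000)  # quantifiable, so checkable
print(f"CAGR: {rate:.2%}")
```

Because the output is quantifiable, the verification step is mechanical, which is exactly what makes such processes amenable to self-verification.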


Culture fit is a lie: It’s time we prioritised culture add

The idea of culture fit originated with the noble intent of fostering team cohesion. But over time, it has become an excuse to hire people who are familiar, comfortable and easy to manage. In doing so, companies inadvertently create echo chambers—workforces that lack diverse perspectives, struggle to challenge the status quo and fail to innovate. Ankur Sharma, Co-Founder & Head of People at Rebel Foods, understands this well. Speaking at the TechHR Pulse Mumbai 2025 conference, Sharma explained how Rebel Foods moved beyond hiring for cultural likeness. “We are not building a family; we are building a winning team,” he said, emphasising that what truly matters is competency, accountability and adaptability. The problem with culture fit is not just about homogeneity—it’s about stagnation. When teams are made up of individuals who think alike, they lose the ability to see challenges from multiple angles. Companies that prioritise cultural uniformity often struggle to pivot in response to industry shifts. ... Leading organisations are abandoning the notion of culture fit and shifting towards ‘culture add’—hiring employees who bring fresh ideas, challenge existing norms, and contribute new perspectives. Instead of asking, ‘Will this person fit in?’ Hiring managers are asking, ‘What unique value does this person bring?’


Closing security gaps in multi-cloud and SaaS environments

Many organizations are underestimating the risk — especially as the nature of attacks evolves. Traditional behavioral detection methods often fall short in spotting modern threats such as account hijacking, phishing, ransomware, data exfiltration, and denial of service attacks. Detecting these types of attacks requires correlation and traceability across different sources, including runtime events captured with eBPF, cloud audit logs, and APIs across both cloud infrastructure and SaaS. ... As attackers adopt stealthier tactics — from GenAI-generated malware to supply chain compromises — traditional signature- and rule-based methods fall short. ... A unified cloud and SaaS security strategy means moving away from treating infrastructure, applications, and SaaS as isolated security domains. Instead, it focuses on delivering seamless visibility, risk prioritization, and automated response across the full spectrum of enterprise environments — from legacy on-premises to dynamic cloud workloads to business-critical SaaS platforms and applications. ... Native CSP and SaaS telemetry is essential, but it’s not enough on its own. Continuous inventory and monitoring across identity, network, compute, and AI is critical — especially to detect misconfigurations and drift.
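The correlation requirement can be sketched in a few lines: group telemetry by identity and alert when the same identity shows up in multiple distinct sources within a short window. The event shapes, source names, and thresholds are illustrative assumptions, not any product's schema:

```python
from datetime import datetime, timedelta

def correlate(events, window=timedelta(minutes=5), min_sources=2):
    """Flag identities seen in >= min_sources distinct telemetry sources
    within `window` of each other."""
    alerts = []
    by_identity = {}
    for e in sorted(events, key=lambda e: e["time"]):
        by_identity.setdefault(e["identity"], []).append(e)
    for identity, evs in by_identity.items():
        for i, first in enumerate(evs):
            sources = {first["source"]}
            for later in evs[i + 1:]:
                if later["time"] - first["time"] <= window:
                    sources.add(later["source"])
            if len(sources) >= min_sources:
                alerts.append((identity, sorted(sources)))
                break
    return alerts

t0 = datetime(2025, 8, 2, 12, 0)
events = [
    {"identity": "svc-backup", "source": "ebpf_runtime", "time": t0},
    {"identity": "svc-backup", "source": "cloud_audit",
     "time": t0 + timedelta(minutes=2)},
    {"identity": "alice", "source": "saas_api", "time": t0},
]
print(correlate(events))  # only svc-backup spans two sources in the window
```

A real pipeline would stream these events from eBPF sensors, CSP audit logs, and SaaS APIs rather than an in-memory list, but the join-by-identity-and-time logic is the core of the traceability the excerpt calls for.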


AI-Driven Test Automation Techniques for Multimodal Systems

Traditional testing frameworks struggle to meet these demands, particularly as multimodal systems continuously evolve through real-time updates and training. Consequently, AI-powered test automation has emerged as a promising paradigm to ensure scalable and reliable testing processes for multimodal systems. ... Natural Language Processing (NLP)-powered AI tools can parse requirements into a more elaborate, well-defined structure and detect ambiguities and gaps. For example, given the requirement “System should display message quickly,” an AI tool will flag the need for a precise definition of “quickly.” It looks simple, but if missed it could lead to serious performance issues in production. ... Based on AI-generated requirements and business scenarios, AI-based tools can generate test strategy documents by identifying resources, constraints, and dependencies between systems. All this can be achieved with NLP tools ... AI-driven test automation can further improve shift-left testing by generating automated test scripts faster, so testers can run automation at an early stage, as soon as the code is ready to test. Tools like ChatGPT can produce automation script code in languages such as Java or Python from simple text input, using NLP to turn natural-language descriptions into code.
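Even without an LLM, the ambiguity check in the “quickly” example can be approximated by flagging unquantified terms; the word list below is a small illustrative sample, not a complete taxonomy:

```python
# Sketch of the requirement-ambiguity check described above: scan the text
# for vague, unquantified terms that need a measurable definition.
VAGUE_TERMS = {"quickly", "fast", "soon", "user-friendly", "efficient",
               "scalable", "robust", "easy"}

def find_ambiguities(requirement):
    """Return the vague terms found in a requirement sentence."""
    words = {w.strip(".,;:").lower() for w in requirement.split()}
    return sorted(words & VAGUE_TERMS)

print(find_ambiguities("System should display message quickly"))
print(find_ambiguities("System should display message within 200 ms"))
```

An NLP-based tool would go further (detecting missing actors, untestable clauses, conflicting requirements), but the output is the same kind of finding: a flag that “quickly” must become something like “within 200 ms.”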


IGA: What Is It, and How Can SMBs Use It?

The first step in a total IGA strategy has nothing to do with software. It actually starts with IT and business leaders determining what the rules of identity governance and behavior should be. The benefit of having a smaller organization is that there are not quite as many stakeholders as in an enterprise. The challenge, of course, is that people, time and resources are limited. IT may have to assume the role of facilitator and earn buy-in. Nevertheless, this is a worthwhile exercise, as it can help establish a platform for secure growth in the future. And again, for SMBs in regulatory-heavy industries — especially finance, healthcare and government contractors — IGA should be a top priority. ... To do this, CIOs should first procure support from key stakeholders by meeting with them individually to explain the need for IGA as an overarching security technology and policy platform for digital security. In these discussions, CIOs can present the long-term benefits of an IGA program that can streamline user identity verification across services while easing audits and automating compliance. ... A strategic roadmap for IGA should involve minimally disruptive business and user adoption and quick technology implementation. One way to do this is to create a phased implementation approach that tackles the most mission-critical and sensitive systems first before extending to other areas of IT.
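At its core, the recurring review that an IGA platform automates looks like this sketch: compare each user's entitlements to a role baseline and surface the excess for certification. The roles, entitlements, and users are invented for illustration:

```python
# Illustrative IGA-style access review against invented role baselines.
ROLE_BASELINE = {
    "accountant": {"erp_read", "erp_post"},
    "support": {"crm_read"},
}

def review_access(users):
    """Return entitlements each user holds beyond their role baseline."""
    findings = {}
    for user, (role, entitlements) in users.items():
        excess = entitlements - ROLE_BASELINE.get(role, set())
        if excess:
            findings[user] = sorted(excess)
    return findings

users = {
    "dana": ("accountant", {"erp_read", "erp_post", "admin_console"}),
    "lee": ("support", {"crm_read"}),
}
print(review_access(users))  # dana holds 'admin_console' beyond her role
```

For an SMB, even a periodic script like this against an exported entitlement list is a useful stopgap before a full IGA product is in place, and it makes the later phased rollout easier to scope.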

Daily Tech Digest - May 09, 2025


Quote for the day:

"Create a compelling vision, one that takes people to a new place, and then translate that vision into a reality." -- Warren G. Bennis


The CIO Role Is Expanding -- And So Are the Risks of Getting It Wrong

“We are seeing an increased focus of organizations giving CIOs more responsibility to impact business strategy as well as tie it into revenue growth,” says Sal DiFranco, managing partner of the global advanced technology and CIO/CTO practices at DHR Global. He explains that CIOs who are focused on technology only for technology's sake and don’t have clear examples of business strategy and impact are not being sought after. “While innovation experience is important to have, it must come with a strong operational mindset,” DiFranco says. ... He adds it is critical for CIOs to understand and articulate the return on technology investments. “Top CIOs have shifted their thinking to a P&L mindset and act, speak, and communicate as the CEO of the technology organization versus being a functional support group,” he says. ... Gilbert says the greatest risk isn’t technical failure, it’s leadership misalignment. “When incentives, timelines, or metrics don’t sync across teams, even the strongest initiatives falter,” he explains. To counter this, he works to align on a shared definition of value from day one, setting clear, business-focused key performance indicators (KPIs), not just deployment milestones. Structured governance helps, too: transparent reporting, cross-functional steering committees, and ongoing feedback loops keep everyone on track.


How to Build a Lean AI Strategy with Data

In simple terms, Lean AI means focusing on trusted, purpose-driven data to power faster, smarter outcomes with AI—without the cost, complexity, and sprawl that defines most enterprise AI initiatives today. Traditional enterprise AI often chases scale for its own sake: more data, bigger models, larger clouds. Lean AI flips that model—prioritizing quality over quantity, outcomes over infrastructure, and agility over over-engineering. ... A lean AI strategy focuses on curating high-quality, purpose-driven datasets tailored to specific business goals. Rather than defaulting to massive data lakes, organizations continuously collect data but prioritize which data to activate and operationalize based on current needs. Lower-priority data can be archived cost-effectively, minimizing unnecessary processing costs while preserving flexibility for future use. ... Data governance plays a pivotal role in lean AI strategies—but it should be reimagined. Traditional governance frameworks often slow innovation by restricting access and flexibility. In contrast, lean AI governance enhances usability and access while maintaining security and compliance. ... Implementing lean AI requires a cultural shift in how organizations manage data. Focusing on efficiency, purpose, and continuous improvement can drive innovation without unnecessary costs or risks—a particularly valuable approach when cost pressures are increasing.
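The activate-versus-archive triage described above can be sketched as a scoring pass over the dataset catalog; the tags, goals, and threshold are assumptions for illustration:

```python
# Lean AI triage sketch: score datasets by overlap with current business
# goals; activate the relevant ones, archive the rest cheaply.
def triage(datasets, goals, threshold=1):
    """Split datasets into (activate, archive) by overlap with active goals."""
    activate, archive = [], []
    for ds in datasets:
        score = len(set(ds["tags"]) & goals)
        (activate if score >= threshold else archive).append(ds["name"])
    return activate, archive

datasets = [
    {"name": "support_tickets", "tags": ["churn", "nps"]},
    {"name": "clickstream_2019", "tags": ["legacy_web"]},
]
active_goals = {"churn", "forecasting"}
print(triage(datasets, active_goals))
```

The archived set is not deleted, which matches the excerpt's point: it stays cheap to keep and can be reactivated when goals change.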


Networking errors pose threat to data center reliability

“Data center operators are facing a growing number of external risks beyond their control, including power grid constraints, extreme weather, network provider failures, and third-party software issues. And despite a more volatile risk landscape, improvements are occurring.” ... “Power has been the leading cause. Power is going to be the leading cause for the foreseeable future. And one should expect it because every piece of equipment in the data center, whether it’s a facilities piece of equipment or an IT piece of equipment, it needs power to operate. Power is pretty unforgiving,” said Chris Brown, chief technical officer at Uptime Institute, during a webinar sharing the report findings. “It’s fairly binary. From a practical standpoint of being able to respond, it’s pretty much on or off.” ... Still, IT and networking issues increased in 2024, according to Uptime Institute. The analysis attributed the rise in outages to increased IT and network complexity, specifically change management and misconfigurations. “Particularly with distributed services, cloud services, we find that cascading failures often occur when networking equipment is replicated across an entire network,” Lawrence explained. “Sometimes the failure of one forces traffic to move in one direction, overloading capacity at another data center.”


Unlocking ROI Through Sustainability: How Hybrid Multicloud Deployment Drives Business Value

One of the key advantages of hybrid multicloud is the ability to optimise workload placement dynamically. Traditional on-premises infrastructure often forces businesses to overprovision resources, leading to unnecessary energy consumption and underutilisation. With a hybrid approach, workloads can seamlessly move between on-prem, public cloud, and edge environments based on real-time requirements. This flexibility enhances efficiency and helps mitigate risks associated with cloud repatriation. Many organisations have found that shifting back from public cloud to on-premises infrastructure is sometimes necessary due to regulatory compliance, data sovereignty concerns, or cost considerations. A hybrid multicloud strategy ensures organisations can make these transitions smoothly without disrupting operations. ... With the dynamic nature of cloud environments, enterprises really require solutions that offer a unified view of their hybrid multicloud infrastructure. Technologies that integrate AI-driven insights to optimise energy usage and automate resource allocation are gaining traction. For example, some organisations have addressed these challenges by adopting solutions such as Nutanix Cloud Manager (NCM), which helps businesses track sustainability metrics while maintaining operational efficiency.
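A minimal sketch of the placement logic: hard constraints such as regulation or data sovereignty pin a workload on-premises, bursty demand goes to public cloud for elasticity, and steady workloads go wherever the unit cost is lowest. All fields and prices are hypothetical:

```python
# Hypothetical hybrid multicloud placement rules.
def place(workload, prices):
    """Pick a venue for a workload given constraints and unit prices."""
    if workload.get("data_sovereignty") or workload.get("regulated"):
        return "on_prem"  # hard constraint pins the workload
    if workload.get("bursty"):
        return "public_cloud"  # elastic demand favors cloud
    return min(prices, key=prices.get)  # steady loads go to the cheapest venue

prices = {"on_prem": 0.8, "public_cloud": 1.0, "edge": 1.4}
print(place({"regulated": True}, prices))
print(place({"bursty": True}, prices))
print(place({}, prices))
```

Real placement engines weigh far more signals (latency, data gravity, sustainability metrics), but the shape is the same: constraints first, then cost.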


'Lemon Sandstorm' Underscores Risks to Middle East Infrastructure

The compromise started at least two years ago, when the attackers used stolen VPN credentials to gain access to the organization's network, according to a May 1 report published by cybersecurity firm Fortinet, which helped with the remediation process that began late last year. Within a week, the attacker had installed Web shells on two external-facing Microsoft Exchange servers and then updated those backdoors to improve their ability to remain undetected. In the following 20 months, the attackers added more functionality, installed additional components to aid persistence, and deployed five custom attack tools. The threat actors, which appear to be part of an Iran-linked group dubbed "Lemon Sandstorm," did not seem focused on compromising data, says John Simmons, regional lead for Fortinet's FortiGuard Incident Response team. "The threat actor did not carry out significant data exfiltration, which suggests they were primarily interested in maintaining long-term access to the OT environment," he says. "We believe the implication is that they may [have been] positioning themselves to carry out a future destructive attack against this CNI." Overall, the attack follows a shift by cyber-threat groups in the region, which are now increasingly targeting CNI. 


Cloud repatriation hits its stride

Many enterprises are now confronting a stark reality. AI is expensive, not just in terms of infrastructure and operations, but in the way it consumes entire IT budgets. Training foundational models or running continuous inference pipelines takes resources an order of magnitude greater than the average SaaS or data analytics workload. As competition in AI heats up, executives are asking tough questions: Is every app in the cloud still worth its cost? Where can we redeploy dollars to speed up our AI road map? ... Repatriation doesn’t signal the end of cloud, but rather the evolution toward a more pragmatic, hybrid model. Cloud will remain vital for elastic demand, rapid prototyping, and global scale—no on-premises solution can beat cloud when workloads spike unpredictably. But for the many applications whose requirements never change and whose performance is stable year-round, the lure of lower-cost, self-operated infrastructure is too compelling in a world where AI now absorbs so much of the IT spend. In this new landscape, IT leaders must master workload placement, matching each application to a technical requirement and a business and financial imperative. Sophisticated cost management tools are on the rise, and the next wave of cloud architects will be those as fluent in finance as they are in Kubernetes or Terraform.
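The economics behind repatriation can be made concrete with a back-of-envelope model (all rates and capacities invented): steady demand favors owned capacity, while rare spikes favor pay-per-use:

```python
# Toy cost model: cloud pay-per-use vs. on-prem fixed capacity with
# cloud burst above owned capacity. All numbers are invented.
def monthly_cost(hourly_demand, cloud_rate=1.0, onprem_capacity=100,
                 onprem_fixed=40_000):
    """Return (cloud_cost, onprem_cost) for a month of hourly demand units."""
    cloud = sum(hourly_demand) * cloud_rate
    burst = sum(max(0, h - onprem_capacity) for h in hourly_demand)
    onprem = onprem_fixed + burst * cloud_rate
    return cloud, onprem

steady = [80] * 730              # near-constant load all month
spiky = [10] * 720 + [900] * 10  # mostly idle, rare large spikes

print(monthly_cost(steady))  # owning wins for the steady profile
print(monthly_cost(spiky))   # cloud wins for the spiky profile
```

This is the workload-placement fluency the excerpt describes: the same cost model, applied per application, tells you which side of the repatriation line each workload falls on.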


6 tips for tackling technical debt

Like most everything else in business today, debt can’t successfully be managed if it’s not measured, Sharp says, adding that IT needs to get better at identifying, tracking, and measuring tech debt. “IT always has a sense of where the problems are, which closets have skeletons in them, but there’s often not a formal analysis,” he says. “I think a structured approach to looking at this could be an opportunity to think about things that weren’t considered previously. So it’s not just knowing we have problems but knowing what the issues are and understanding the impact. Visibility is really key.” ... Most organizations have some governance around their software development programs, Buniva says. But a good number of those governance programs are not as strong as they should be nor detailed enough to inform how teams should balance speed with quality — a fact that becomes more obvious with the increasing speed of AI-enabled code production. ... Like legacy tech more broadly, code debt is a fact of life and, as such, will never be completely paid down. So instead of trying to get the balance to zero, IT exec Rishi Kaushal prioritizes fixing the most problematic pieces — the ones that could cost his company the most. “You don’t want to focus on fixing technical debt that takes a long time and a lot of money to fix but doesn’t bring any value in fixing,” says Kaushal.
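One structured way to prioritize the pieces that could cost the most, in the spirit of Kaushal's advice, is a simple impact-per-effort ranking of the debt backlog; the items and scores below are invented:

```python
# Debt triage sketch: rank items by business impact relative to
# remediation effort, so high-value, cheap fixes come first.
def prioritize(items):
    """Sort debt items by impact-per-effort, highest first."""
    return sorted(items, key=lambda i: i["impact"] / i["effort"], reverse=True)

backlog = [
    {"name": "legacy auth module", "impact": 9, "effort": 3},
    {"name": "old build scripts", "impact": 2, "effort": 1},
    {"name": "monolith refactor", "impact": 8, "effort": 8},
]
for item in prioritize(backlog):
    print(item["name"])
```

Even this crude ratio makes the trade-off explicit: the monolith refactor has high impact but sinks to the bottom because its effort is proportionally higher, which is exactly the "long time, lot of money, little value" case the quote warns against.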


AI Won’t Save You From Your Data Modeling Problems

Historically, data modeling was a business intelligence (BI) and analytics concern, focused on structuring data for dashboards and reports. However, AI applications shift this responsibility to the operational layer, where real-time decisions are made. While foundation models are incredibly smart, they can also be incredibly dumb. They have vast general knowledge but lack context and your information. They need structured and unstructured data to provide this context, or they risk hallucinating and producing unreliable outputs. ... Traditional data models were built for specific systems: relational for transactions, documents for flexibility, and graphs for relationships. But AI requires all of them at once, because an AI agent might first query the transactional database for enterprise application data, such as the flight schedules from our previous example, and then, based on that response, query a document to build a prompt that uses a semantic web representation for flight-rescheduling logic. In this case, a single model format isn’t enough. This is why polyglot data modeling is key. It allows AI to work across structured and unstructured data in real time, ensuring that both knowledge retrieval and decision-making are informed by a complete view of business data.
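The flight example can be caricatured with in-memory stand-ins for the transactional and document stores (all names and data invented), to show the two-step, polyglot access pattern:

```python
# Toy polyglot access: transactional lookup first, then a document lookup
# to assemble prompt context. Both "stores" are in-memory stand-ins.
FLIGHTS = {"UA101": {"status": "cancelled", "route": "SFO-JFK"}}  # relational stand-in
POLICIES = {"rebooking": "Rebook on next available flight on the same route."}  # document stand-in

def build_context(flight_id):
    """Assemble the agent's prompt context from two different stores."""
    flight = FLIGHTS.get(flight_id)            # step 1: transactional lookup
    if flight and flight["status"] == "cancelled":
        policy = POLICIES["rebooking"]         # step 2: document lookup
        return f"Flight {flight_id} ({flight['route']}) is cancelled. {policy}"
    return f"Flight {flight_id} is on schedule."

print(build_context("UA101"))
```

Neither store alone could answer the question: the relational side knows the flight's state, the document side knows the rescheduling logic, and the agent needs both in one context window.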


Your password manager is under attack, and this new threat makes it worse

"Password managers are high-value targets and face constant attacks across multiple surfaces, including cloud infrastructure, client devices, and browser extensions," said NordPass PR manager Gintautas Degutis. "Attack vectors range from credential stuffing and phishing to malware-based exfiltration and supply chain risks." Googling the phrase "password manager hacked" yields a distressingly long list of incursions. Fortunately, in most of those cases, passwords and other sensitive information were sufficiently encrypted to limit the damage. ... One of the most recent and terrifying threats to make headlines came from SquareX, a company selling solutions that focus on the real-time detection and mitigation of browser-based web attacks. SquareX spends a great deal of its time obsessing over the degree to which browser extension architectures represent a potential vector of attack for hackers. ... For businesses and enterprises, the attack is predicated on one of two possible scenarios. In the first scenario, users are left to make their own decisions about what extensions are loaded onto their systems. In this case, they are putting the entire enterprise at risk. In the second scenario, someone in an IT role with the responsibility of managing the organization's approved browser and extension configurations has to be asleep at the wheel. 


Developing Software That Solves Real-World Problems – A Technologist’s View

Software architecture is not just a technical plan but a way to turn an idea into reality. A good system can model users’ behaviors and usage, expand to meet demand, secure data and combine well with other systems. It brings the concepts of distributed systems, APIs, security layers and front-end interfaces together into one cohesive, easy-to-use product. I have been involved in building APIs that are crucial for integrating multiple products into a consistent user experience for their consumers. Along with a group of architects, we played a crucial role in breaking down these complex integrations into manageable components and designing easy-to-implement API interfaces. Using cloud services, these APIs were also designed to be highly resilient. ... One of the most important lessons I have learned as a technologist is that just because we can build something does not mean we should. While working on a project related to financing a car, we were able to collect personally identifiable information (PII). Initially, we stored it for a long duration without being aware of the implications. When we discussed the situation with the architecture and security teams, we found out that we did not have ownership of the data and that it was very risky to store it for a long period. We mitigated the risk by reducing the data retention period to only what is useful to users.
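The retention fix in the closing anecdote amounts to enforcing a bounded retention window instead of keeping PII indefinitely; a minimal sketch, assuming a 90-day window and an invented record shape:

```python
from datetime import date, timedelta

# Sketch of a bounded PII retention policy: keep records only within
# a defined window. The 90-day window and record shape are assumptions.
RETENTION = timedelta(days=90)

def purge_expired(records, today):
    """Drop records older than the retention window."""
    return [r for r in records if today - r["collected"] <= RETENTION]

records = [
    {"id": 1, "collected": date(2025, 1, 2)},   # past the window: purged
    {"id": 2, "collected": date(2025, 4, 20)},  # within the window: kept
]
print(purge_expired(records, today=date(2025, 5, 1)))
```

In a real system this runs as a scheduled job against the data store, and the window itself is set with the security and legal teams, as the anecdote describes.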