
Daily Tech Digest - August 09, 2025


Quote for the day:

“Develop success from failures. Discouragement and failure are two of the surest stepping stones to success.” -- Dale Carnegie


Is ‘Decentralized Data Contributor’ the Next Big Role in the AI Economy?

Training AI models requires real-world, high-quality, and diverse data. The problem is that the astronomical demand is slowly outpacing the available sources. Take public datasets as an example. Not only is this data overused, but it’s often restricted to avoid privacy or legal concerns. There’s also a huge issue with geographic or spatial data gaps, where the information is incomplete for specific regions, which can and will lead to inaccuracies or biases in AI models. Decentralized contributors can help address these challenges. ... Even though a large part of the world’s population has no problem with passively sharing data when browsing the web, due to the relative infancy of decentralized systems, active data contribution may seem to many like a bridge too far. Anonymized data isn’t 100% safe. Determined threat actors can sometimes re-identify individuals from unnamed datasets. The concern is valid, which is why decentralized projects working in the field must adopt privacy-by-design architectures, where privacy is a core part of the system instead of being layered on top after the fact. Zero-knowledge proofs are another technique that can reduce privacy risks by allowing contributors to prove the validity of their data without exposing any information. For example, a contributor can demonstrate that their identity meets set criteria without divulging anything identifiable.
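The re-identification risk mentioned above can be made concrete with the classic k-anonymity measure: a dataset is k-anonymous when every combination of quasi-identifiers (attributes like ZIP code or age band that can be joined with outside data) appears at least k times. A minimal sketch, with illustrative field names:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size over all quasi-identifier combinations.

    A result below the target k means some individuals could be
    re-identified by joining on those attributes alone."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

people = [
    {"zip": "10001", "age_band": "30-39", "diagnosis": "A"},
    {"zip": "10001", "age_band": "30-39", "diagnosis": "B"},
    {"zip": "94105", "age_band": "40-49", "diagnosis": "C"},  # unique -> k = 1
]
print(k_anonymity(people, ["zip", "age_band"]))  # 1: the last record stands alone
```

Even without names, the third record is uniquely identifiable from ZIP and age band alone, which is exactly the failure mode privacy-by-design architectures are meant to prevent.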


The ROI of Governance: Nithesh Nekkanti on Taming Enterprise Technical Debt

A key symptom of technical debt is rampant code duplication, which inflates maintenance efforts and increases the risk of bugs. A multi-pronged strategy focused on standardization and modularity proved highly effective, leading to a 30% reduction in duplicated code. This initiative went beyond simple syntax rules to forge a common development language, defining exhaustive standards for Apex and Lightning Web Components. By measuring metrics like technical debt density, teams can effectively track the health of their codebase as it evolves. ... Developers may perceive stricter quality gates as a drag on velocity, and the task of addressing legacy code can seem daunting. Overcoming this resistance requires clear communication and a focus on the long-term benefits. "Driving widespread adoption of comprehensive automated testing and stringent code quality tools invariably presents cultural and operational challenges," Nekkanti acknowledges. The solution was to articulate a compelling vision. ... Not all technical debt is created equal, and a mature governance program requires a nuanced approach to prioritization. The PEC developed a technical debt triage framework to systematically categorize issues based on type, business impact, and severity. This structured process is vital for managing a complex ecosystem, where a formal Technical Governance Board (TGB) can use data to make informed decisions about where to invest resources.
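The metrics mentioned above are simple ratios in practice: technical debt density is typically debt findings per thousand lines of code (KLOC), and duplication is tracked as the share of duplicated lines. A hypothetical sketch (the codebase size and counts are illustrative, not from the article):

```python
def debt_density(findings, lines_of_code):
    """Debt findings per 1,000 lines of code (KLOC)."""
    return len(findings) / (lines_of_code / 1000)

def duplication_ratio(duplicated_lines, lines_of_code):
    """Share of the codebase that is duplicated."""
    return duplicated_lines / lines_of_code

loc = 250_000
before = duplication_ratio(30_000, loc)  # 12% of lines duplicated
after = duplication_ratio(21_000, loc)   # after consolidation work
print(f"{(before - after) / before:.0%} less duplication")  # 30%
```

Tracking these numbers per release is what lets a governance board see whether the codebase is getting healthier or quietly accruing more debt.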


Why Third-Party Risk Management (TPRM) Can’t Be Ignored in 2025

In today’s business world, no organization operates in a vacuum. We rely on vendors, suppliers, and contractors to keep things running smoothly. But every connection brings risk. Just recently, Fortinet made headlines as threat actors were found maintaining persistent access to FortiOS and FortiProxy devices using known vulnerabilities—while another actor allegedly offered a zero-day exploit for FortiGate firewalls on a dark web forum. These aren’t just IT problems—they’re real reminders of how vulnerabilities in third-party systems can open the door to serious cyber threats, regulatory headaches, and reputational harm. That’s why Third-Party Risk Management (TPRM) has become a must-have, not a nice-to-have. ... Think of TPRM as a structured way to stay on top of the risks your third parties, suppliers and vendors might expose you to. It’s more than just ticking boxes during onboarding—it’s an ongoing process that helps you monitor your partners’ security practices, compliance with laws, and overall reliability. From cloud service providers, logistics partners, and contract staff to software vendors, IT support providers, marketing agencies, payroll processors, data analytics firms, and even facility management teams—if they have access to your systems, data, or customers, they’re part of your risk surface. 


Ushering in a new era of mainframe modernization

One of the key challenges in modern IT environments is integrating data across siloed systems. Mainframe data, despite being some of the most valuable in the enterprise, often remains underutilized due to accessibility barriers. With a z17 foundation, software data solutions can more easily bridge critical systems, offering unprecedented data accessibility and observability. For CIOs, this is an opportunity to break down historical silos and make real-time mainframe data available across cloud and distributed environments without compromising performance or governance. As data becomes more central to competitive advantage, the ability to bridge existing and modern platforms will be a defining capability for future-ready organizations. ... For many industries, mainframes continue to deliver unmatched performance, reliability, and security for mission-critical workloads—capabilities that modern enterprises rely on to drive digital transformation. Far from being outdated, mainframes are evolving through integration with emerging technologies like AI, automation, and hybrid cloud, enabling organizations to modernize without disruption. With decades of trusted data and business logic already embedded in these systems, mainframes provide a resilient foundation for innovation, ensuring that enterprises can meet today’s demands while preparing for tomorrow’s challenges.


Fighting Cyber Threat Actors with Information Sharing

Effective threat intelligence sharing creates exponential defensive improvements that extend far beyond individual organizational benefits. It not only raises the cost and complexity for attackers but also lowers their chances of success. Information Sharing and Analysis Centers (ISACs) demonstrate this multiplier effect in practice. ISACs are, essentially, non-profit organizations that provide companies with timely intelligence and real-world insights, helping them boost their security. The success of existing ISACs has also driven expansion efforts, with 26 U.S. states adopting the NAIC Model Law to encourage information sharing in the insurance sector. ... Although the benefits of information sharing are clear, actually implementing them is a different story. Common obstacles include legal issues regarding data disclosure, worries over revealing vulnerabilities to competitors, and the technical challenge itself – evidently, devising standardized threat intelligence formats is no walk in the park. And yet it can certainly be done. Case in point: the above-mentioned partnership between CrowdStrike and Microsoft. Its success hinges on its well-thought-out governance system, which allows these two business rivals to collaborate on threat attribution while protecting their proprietary techniques and competitive advantages. 
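Standardized formats are what make the sharing described above machine-readable across organizations; STIX 2.1 is the most widely adopted. A minimal sketch of an indicator for a malicious file hash, built as a plain dict (the hash value and description are placeholders, and a real feed would add more properties):

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(sha256, description):
    """Build a minimal STIX 2.1-style indicator for a malicious file hash."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "name": description,
        "pattern": f"[file:hashes.'SHA-256' = '{sha256}']",
        "pattern_type": "stix",
        "valid_from": now,
    }

ioc = make_indicator("e3b0c442" + "0" * 56, "Sample dropper hash")
print(json.dumps(ioc, indent=2))
```

Because every ISAC member parses the same `pattern` grammar, an indicator published by one company can be loaded into another's detection stack without manual translation.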


The Ultimate Guide to Creating a Cybersecurity Incident Response Plan

Creating a fit-for-purpose cyber incident response plan isn’t easy. However, by adopting a structured approach, you can ensure that your plan is tailored for your organisational risk context and will actually help your team manage the chaos that follows a cyber attack. In our experience, following a step-by-step process to building a robust IR plan always works. Instead of jumping straight into creating a plan, it’s best to lay a strong foundation with training and risk assessment and then work your way up. ... Conducting a cyber risk assessment before creating a Cybersecurity Incident Response Plan is critical. Every business has different assets, systems, vulnerabilities, and exposure to risk. A thorough risk assessment identifies what assets need the most protection. The assets could be customer data, intellectual property, or critical infrastructure. You’ll be able to identify where the most likely entry points for attackers may be. This insight ensures that the incident response plan is tailored and focused on the most pressing risks instead of being a generic checklist. A risk assessment will also help you define the potential impact of various cyber incidents on your business. You can prioritise response strategies based on what incidents would be most damaging. Without this step, response efforts may be misaligned or inadequate in the face of a real threat.
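The prioritisation step described above is often reduced to a classic likelihood-times-impact matrix. A sketch with illustrative scenario names and 1-5 scales (a real assessment would score these from interviews and asset inventories, not guesses):

```python
def risk_score(likelihood, impact):
    """Classic risk matrix score on 1-5 scales for each axis."""
    return likelihood * impact

scenarios = {
    "ransomware on file servers": (4, 5),
    "phishing-led credential theft": (5, 4),
    "defaced marketing site": (3, 2),
}

# Rank scenarios so response playbooks are written for the worst first.
ranked = sorted(scenarios, key=lambda s: risk_score(*scenarios[s]), reverse=True)
for name in ranked:
    print(name, risk_score(*scenarios[name]))
```

Even this crude ranking forces the conversation the excerpt calls for: which incidents would be most damaging, and does the response plan cover them first?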


How to Become the Leader Everyone Trusts and Follows With One Skill

Leaders grounded in reason have a unique ability: they can take complex situations and make sense of them. They look beyond the surface to find meaning and use logic as their compass. They're able to spot patterns others might miss and make clear distinctions between what's important and what's not. Instead of being guided by emotion, they base their decisions on credibility, relevance and long-term value. ... The ego doesn't like reason. It prefers control, manipulation and being right. At its worst, it twists logic to justify itself or dominate others. Some leaders use data selectively or speak in clever soundbites, not to find truth but to protect their image or gain power. But when a leader chooses reason, something shifts. They let go of defensiveness and embrace objectivity. They're able to mediate fairly, resolve conflicts wisely and make decisions that benefit the whole team, not just their own ego. This mindset also breaks down the old power structures. Instead of leading through authority or charisma, leaders at this level influence through clarity, collaboration and solid ideas. ... Leaders who operate from reason naturally elevate their organizations. They create environments where logic, learning and truth are not just considered as values, they're part of the culture. This paves the way for innovation, trust and progress. 


Why enterprises can’t afford to ignore cloud optimization in 2025

Cloud computing has long been the backbone of modern digital infrastructure, primarily built around general-purpose computing. However, the era of one-size-fits-all cloud solutions is rapidly fading in a business environment increasingly dominated by AI and high-performance computing (HPC) workloads. Legacy cloud solutions struggle to meet the computational intensity of deep learning models, preventing organizations from fully realizing the benefits of their investments. At the same time, cloud-native architectures have become the standard, as businesses face mounting pressure to innovate, reduce time-to-market, and optimize costs. Without a cloud-optimized IT infrastructure, organizations risk losing key operational advantages—such as maximizing performance efficiency and minimizing security risks in a multi-cloud environment—ultimately negating the benefits of cloud-native adoption. Moreover, running AI workloads at scale without an optimized cloud infrastructure leads to unnecessary energy consumption, increasing both operational costs and environmental impact. This inefficiency strains financial resources and undermines corporate sustainability goals, which are now under greater scrutiny from stakeholders who prioritize green initiatives.


Data Protection for Whom?

To be clear, there is no denying that a robust legal framework for protecting privacy is essential. In the absence of such protections, both rich and poor citizens face exposure to fraud, data theft and misuse. Personal data leakages – ranging from banking details to mobile numbers and identity documents – are rampant, and individuals are routinely subjected to financial scams, unsolicited marketing and phishing attacks. Often, data collected for one purpose – such as KYC verification or government scheme registration – finds its way into other hands without consent. ... The DPDP Act, in theory, establishes strong penalties for violations. However, the enforcement mechanisms under the Act are opaque. The composition and functioning of the Data Protection Board – a body tasked with adjudicating complaints and imposing penalties – are entirely controlled by the Union government. There is no independent appointments process, no safeguards against arbitrary decision-making, and no clear procedure for appeals. Moreover, there is a genuine worry that smaller civil society initiatives – such as grassroots surveys, independent research and community-based documentation efforts – will be priced out of existence. The compliance costs associated with data processing under the new framework, including consent management, data security audits and liability for breaches, are likely to be prohibitive for most non-profit and community-led groups.


Stargate’s slow start reveals the real bottlenecks in scaling AI infrastructure

“Scaling AI infrastructure depends less on the technical readiness of servers or GPUs and more on the orchestration of distributed stakeholders — utilities, regulators, construction partners, hardware suppliers, and service providers — each with their own cadence and constraints,” Gogia said. ... Mazumder warned that “even phased AI infrastructure plans can stall without early coordination” and advised that “enterprises should expect multi-year rollout horizons and must front-load cross-functional alignment, treating AI infra as a capital project, not a conventional IT upgrade.” ... Given the lessons from Stargate’s delays, analysts recommend a pragmatic approach to AI infrastructure planning. Rather than waiting for mega-projects to mature, Mazumder emphasized that “enterprise AI adoption will be gradual, not instant and CIOs must pivot to modular, hybrid strategies with phased infrastructure buildouts.” ... The solution is planning for modular scaling by deploying workloads in hybrid and multi-cloud environments so progress can continue even when key sites or services lag. ... For CIOs, the key lesson is to integrate external readiness into planning assumptions, create coordination checkpoints with all providers, and avoid committing to go-live dates that assume perfect alignment.

Daily Tech Digest - June 29, 2025


Quote for the day:

“Great minds discuss ideas; average minds discuss events; small minds discuss people.” -- Eleanor Roosevelt


Who Owns End-of-Life Data?

Enterprises have never been more focused on data. What happens at the end of that data's life? Who is responsible when it's no longer needed? Environmental concerns are mounting as well. A Nature study warns that AI alone could generate up to 5 million metric tons of e-waste by 2030. A study from researchers at Cambridge University and the Chinese Academy of Sciences said the top reason enterprises dispose of e-waste rather than recycling computers is the cost. E-waste can contain metals including copper, gold, silver, aluminum and rare earth elements, but proper handling is expensive. Data security is a concern as well, since breach-proofing doesn't get better than destroying equipment. ... End-of-life data management may sit squarely in the realm of IT, but it increasingly pulls in compliance, risk and ESG teams, the report said. Driven by rising global regulations and escalating concerns over data leaks and breaches, C-level involvement at every stage signals that end-of-life data decisions are being treated as strategically vital - not simply handed off. Consistent IT participation also suggests organizations are well-positioned to select and deploy solutions that work with their existing tech stack. That said, shared responsibility doesn't guarantee seamless execution. Multiple stakeholders can lead to gaps unless underpinned by strong, well-communicated policies, the report said.


How AI is Disrupting the Data Center Software Stack

Over the years, there have been many major shifts in IT infrastructure – from the mainframe to the minicomputer to distributed Windows boxes to virtualization, the cloud, containers, and now AI and GenAI workloads. Each time, the software stack seems to get torn apart. What can we expect with GenAI? ... Galabov expects severe disruption in the years ahead on a couple of fronts. Take coding, for example. In the past, anyone wanting a new industry-specific application for their business might pay five figures for development, even if they went to a low-cost region like Turkey. For homegrown software development, the price tag would be much higher. Now, an LLM can be used to develop such an application for you. GenAI tools have been designed explicitly to enhance and automate several elements of the software development process. ... Many enterprises will be forced to face the reality that their systems are fundamentally legacy platforms that are unable to keep pace with modern AI demands. Their only course is to commit to modernization efforts. Their speed and degree of investment are likely to determine their relevance and competitive positioning in a rapidly evolving market. Kleyman believes that the most immediate pressure will fall on data-intensive, analytics-driven platforms such as CRM and business intelligence (BI). 


AI Improves at Improving Itself Using an Evolutionary Trick

The best SWE-bench agent was not as good as the best agent designed by expert humans, which currently scores about 70 percent, but it was generated automatically, and maybe with enough time and computation an agent could evolve beyond human expertise. The study is a “big step forward” as a proof of concept for recursive self-improvement, said Zhengyao Jiang, a cofounder of Weco AI, a platform that automates code improvement. Jiang, who was not involved in the study, said the approach could make further progress if it modified the underlying LLM, or even the chip architecture. DGMs can theoretically score agents simultaneously on coding benchmarks and also specific applications, such as drug design, so they’d get better at getting better at designing drugs. Zhang said she’d like to combine a DGM with AlphaEvolve. ... One concern with both evolutionary search and self-improving systems—and especially their combination, as in DGM—is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned.
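The evolutionary loop behind systems like this can be caricatured in a few lines: keep an archive of candidates, mutate one, and keep the child when it scores better on the benchmark. A toy greedy sketch, with a numeric value standing in for "agent quality" (everything here is illustrative; a real DGM mutates code and samples parents from the whole archive, not just the best):

```python
import random

def evolve(score, mutate, seed, generations=300, rng=None):
    """Greedy evolutionary loop: mutate the best candidate so far,
    keep the child only when it scores strictly better."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    archive = [seed]
    for _ in range(generations):
        parent = max(archive, key=score)
        child = mutate(parent, rng)
        if score(child) > score(parent):
            archive.append(child)
    return max(archive, key=score)

# Stand-in "benchmark": how close a numeric agent is to a target behaviour.
target = 0.7
best = evolve(
    score=lambda x: -abs(x - target),
    mutate=lambda x, rng: x + rng.gauss(0, 0.05),
    seed=0.0,
)
print(round(best, 2))
```

The safety guardrails in the study map naturally onto this loop: sandboxing constrains what `mutate` can produce, and logging every accepted `child` is what makes the archive reviewable.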


Data center costs surge up to 18% as enterprises face two-year capacity drought

Smart enterprises are adapting with creative strategies. CBRE’s Magazine emphasizes “aggressive and long-term planning,” suggesting enterprises extend capacity forecasts to five or 10 years, and initiate discussions with providers much earlier than before. Geographic diversification has become essential. While major hubs price out enterprises, smaller markets such as São Paulo saw pricing drops of as much as 20.8%, while prices in Santiago fell 13.7% due to shifting supply dynamics. Magazine recommended “flexibility in location as key, exploring less-constrained Tier 2 or Tier 3 markets or diversifying workloads across multiple regions.” For Gogia, “Tier-2 markets like Des Moines, Columbus, and Richmond are now more than overflow zones, they’re strategic growth anchors.” Three shifts have elevated these markets: maturing fiber grids, direct renewable power access, and hyperscaler-led cluster formation. “AI workloads, especially training and archival, can absorb 10-20ms latency variance if offset by 30-40% cost savings and assured uptime,” said Gogia. “Des Moines and Richmond offer better interconnection diversity today than some saturated Tier-1 hubs.” Contract flexibility is also crucial. Rather than traditional long-term leases, enterprises are negotiating shorter agreements with renewal options and exploring revenue-sharing arrangements tied to business performance.
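Gogia's rule of thumb above, tolerating 10-20 ms of extra latency when offset by 30-40% cost savings, translates directly into a placement filter. A sketch with illustrative numbers (the site figures are made up for the example, not market data):

```python
def acceptable_tier2(extra_latency_ms, cost_saving_pct,
                     max_latency_ms=20, min_saving_pct=30):
    """Accept a Tier-2 site when the latency penalty is within tolerance
    and the cost saving clears the bar."""
    return extra_latency_ms <= max_latency_ms and cost_saving_pct >= min_saving_pct

sites = {
    "Des Moines": (12, 35),
    "Richmond": (18, 32),
    "Remote Tier-3": (45, 50),  # cheap, but too far for latency-sensitive work
}
viable = [name for name, (lat, sav) in sites.items() if acceptable_tier2(lat, sav)]
print(viable)  # ['Des Moines', 'Richmond']
```

The point of encoding the rule is that the thresholds become negotiable per workload: archival training data might tolerate 50 ms, while inference serving might tolerate none.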


Fintech’s AI Obsession Is Useless Without Culture, Clarity and Control

What does responsible AI actually mean in a fintech context? According to PwC’s 2024 Responsible AI Survey, it encompasses practices that ensure fairness, transparency, accountability and governance throughout the AI lifecycle. It’s not just about reducing model bias — it’s about embedding human oversight, securing data, ensuring explainability and aligning outputs with brand and compliance standards. In financial services, these aren’t "nice-to-haves" — they’re essential for scaling AI safely and effectively. Financial marketing is governed by strict regulations and AI-generated content can create brand and legal risks. ... To move AI adoption forward responsibly, start small. Low-risk, high-reward use cases let teams build confidence and earn trust from compliance and legal stakeholders. Deloitte’s 2024 AI outlook recommends beginning with internal applications that use non-critical data — avoiding sensitive inputs like PII — and maintaining human oversight throughout. ... As BCG highlights, AI leaders devote 70% of their effort to people and process — not just technology. Create a cross-functional AI working group with stakeholders from compliance, legal, IT and data science. This group should define what data AI tools can access, how outputs are reviewed and how risks are assessed.


Is Microsoft’s new Mu for you?

Mu uses a transformer encoder-decoder design, which means it splits the work into two parts. The encoder takes your words and turns them into a compressed form. The decoder takes that form and produces the correct command or answer. This design is more efficient than older models, especially for tasks such as changing settings. Mu has 32 encoder layers and 12 decoder layers, a setup chosen to fit the NPU’s memory and speed limits. The model utilizes rotary positional embeddings to maintain word order, dual-layer normalization to maintain stability, and grouped-query attention to use memory more efficiently. ... Mu is truly groundbreaking because it is the first SLM built to let users control system settings using natural language, running entirely on a mainstream shipping device. Apple’s iPhones, iPads, and Macs all have a Neural Engine NPU and run on-device AI for features like Siri and Apple Intelligence. But Apple does not have a small language model as deeply integrated with system settings as Mu. Siri and Apple Intelligence can change some settings, but not with the same range or flexibility. ... By processing data directly on the device, Mu keeps personal information private and responds instantly. This shift also makes it easier to comply with privacy laws in places like Europe and the US since no data leaves your computer.
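Of the components listed above, rotary positional embeddings (RoPE) are the easiest to sketch: each pair of feature dimensions in a token vector is rotated by an angle proportional to the token's position, so relative word order survives into the attention dot products. A minimal pure-Python sketch (pairing conventions vary between implementations, and this is not Mu's actual code):

```python
import math

def rope(vec, pos, base=10000.0):
    """Rotate feature pairs of one token vector by position-dependent
    angles (rotary positional embedding, half-split pairing)."""
    half = len(vec) // 2
    out = [0.0] * len(vec)
    for i in range(half):
        theta = pos * base ** (-2 * i / len(vec))  # lower dims rotate faster
        c, s = math.cos(theta), math.sin(theta)
        x1, x2 = vec[i], vec[half + i]
        out[i] = x1 * c - x2 * s
        out[half + i] = x1 * s + x2 * c
    return out

v = [1.0, 2.0, 3.0, 4.0]
assert rope(v, 0) == v  # position 0 leaves the vector intact
rotated = rope(v, 5)
# Rotations preserve length, so the vector norm is unchanged.
print(math.isclose(sum(x * x for x in v), sum(x * x for x in rotated)))
```

Because rotation is cheap and needs no learned position table, it suits a memory-constrained NPU deployment like the one described.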


Is It a Good Time to Be a Software Engineer?

AI may be rewriting the rules of software development, but it hasn’t erased the thrill of being a programmer. If anything, the machines have revitalised the joy of coding. New tools make it possible to code in natural language, ship prototypes in hours, and bypass tedious setup work. From solo developers to students, the process may feel more immediate or rewarding. Yet, this sense of optimism exists alongside an undercurrent of anxiety. As large language models (LLMs) begin to automate vast swathes of development, some have begun to wonder if software engineering is still a career worth betting on. ... Meanwhile, Logan Thorneloe, a software engineer at Google, sees this as a golden era for developers. “Right now is the absolute best time to be a software engineer,” he wrote on LinkedIn. He points out “development velocity” as the reason. Thorneloe believes AI is accelerating workflows, shrinking prototype cycles from months to days, and giving developers unprecedented speed. Companies that adapt to this shift will win, not by eliminating engineers, but by empowering them. More than speed, there’s also a rediscovered sense of fun. Programmers who once wrestled with broken documentation and endless boilerplate are rediscovering the creative satisfaction that first drew them to the field. 


Dumping mainframes for cloud can be a costly mistake

Despite industry hype, mainframes are not going anywhere. They quietly support the backbone of our largest banks, governments, and insurance companies. Their reliability, security, and capacity for massive transactions give mainframes an advantage that most public cloud platforms simply can’t match for certain workloads. ... At the core of this conversation is culture. An innovative IT organization doesn’t pursue technology for its own sake. Instead, it encourages teams to be open-minded, pragmatic, and collaborative. Mainframe engineers have a seat at the architecture table alongside cloud architects, data scientists, and developers. When there’s mutual respect, great ideas flourish. When legacy teams are sidelined, valuable institutional knowledge and operational stability are jeopardized. A cloud-first mantra must be replaced by a philosophy of “we choose the right tool for the job.” The financial institution in our opening story learned this the hard way. They had to overcome their bias and reconnect with their mainframe experts to avoid further costly missteps. It’s time to retire the “legacy versus modern” conflict and recognize that any technology’s true value lies in how effectively it serves business goals. Mainframes are part of a hybrid future, evolving alongside the cloud rather than being replaced by it. 


Why Modern Data Archiving Is Key to a Scalable Data Strategy

Organizations are quickly learning they can’t simply throw all data, new and old, at an AI strategy; instead, it needs to be accurate, accessible, and, of course, cost-effective. Without these requirements in place, it’s far from certain AI-powered tools can deliver the kind of insight and reliability businesses need. As part of the various data management processes involved, archiving has taken on a new level of importance. ... For organizations that need to migrate data, for example, archiving is used to identify which datasets are essential, while enabling users to offload inactive data in the most cost-effective way. This kind of win-win can also be applied to cloud resources, where moving data to the most appropriate service can potentially deliver significant savings. Again, this contrasts with tiering systems and NAS gateways, which rely on global file systems to provide cloud-based access to local files. The challenge here is that access is dependent on the gateway remaining available throughout the data lifecycle because, without it, data recall can be interrupted or cease entirely. ... It then becomes practical to strike a much better balance across the typical enterprise storage technology stack, including long-term data preservation and compliance, where data doesn’t need to be accessed so often, but where reliability and security are crucial.
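Identifying "inactive" data for offload usually starts with a last-access cutoff. A sketch of that selection step (the 180-day window and field names are illustrative; real tools also weigh file size, ownership, and compliance holds):

```python
from datetime import datetime, timedelta

def archive_candidates(files, now, inactive_days=180):
    """Select files untouched for longer than the inactivity window."""
    cutoff = now - timedelta(days=inactive_days)
    return [f["path"] for f in files if f["last_access"] < cutoff]

now = datetime(2025, 6, 1)
files = [
    {"path": "/data/q1_report.parquet", "last_access": datetime(2024, 3, 1)},
    {"path": "/data/live_orders.db", "last_access": datetime(2025, 5, 30)},
]
print(archive_candidates(files, now))  # ['/data/q1_report.parquet']
```

Running a sweep like this before a migration is what turns archiving into the "win-win" the excerpt describes: active data moves to fast storage, and the long tail moves somewhere cheap.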


The Impact of Regular Training and Timely Security Policy Changes on Dev Teams

Constructive refresher training drives continuous improvement by reinforcing existing knowledge while introducing new concepts like AI-powered code generation, automated debugging and cross-browser testing in manageable increments. Teams that implement consistent training programs see significant productivity benefits as developers spend less time struggling with unfamiliar tools and more time automating tasks to focus on delivering higher value. ... Security policies that remain static as teams grow create dangerous blind spots, compromising both the team’s performance and the organization’s security posture. Outdated policies fail to address emerging threats like malware infections and often become irrelevant to the team’s current workflow, leading to workarounds and system vulnerabilities. ... Proactive security integration into development workflows represents a fundamental shift from reactive security measures to preventative strategies. This approach enables growing teams to identify and address security concerns early in the development process, reducing the cost and complexity of remediation. Cultivating a security-first culture becomes increasingly important as teams grow. This involves embedding security considerations into various stages of the development life cycle. Early risk identification in cloud infrastructure reduces costly breaches and improves overall team productivity.

Daily Tech Digest - April 30, 2025


Quote for the day:

"You can’t fall if you don’t climb. But there’s no joy in living your whole life on the ground." -- Unknown


Common Pitfalls and New Challenges in IT Automation

“You don’t know what you don’t know and can’t improve what you can’t see. Without process visibility, automation efforts may lead to automating flawed processes. In effect, accelerating problems while wasting both time and resources and leading to diminished goodwill by skeptics,” says Kerry Brown, transformation evangelist at Celonis, a process mining and process intelligence provider. The aim of automating processes is to improve how the business performs. That means drawing a direct line from the automation effort to a well-defined ROI. ... Data is arguably the most boring issue on IT’s plate. That’s because it requires a ton of effort to update, label, manage and store massive amounts of data and the job is never quite done. It may be boring work, but it is essential and can be fatal if left for later. “One of the most significant mistakes CIOs make when approaching automation is underestimating the importance of data quality. Automation tools are designed to process and analyze data at scale, but they rely entirely on the quality of the input data,” says Shuai Guan, co-founder and CEO at Thunderbit, an AI web scraper tool. ... “CIOs often fall into the trap of thinking automation is just about suppressing noise and reducing ticket volumes. While that’s one fairly common use case, automation can offer much more value when done strategically,” says Erik Gaston.
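Guan's point about input data quality can be enforced mechanically: gate the automation on a validation pass and refuse to run when the error rate is too high. A sketch with illustrative rules and a hypothetical record schema:

```python
def validate(record):
    """Toy validation: required field present and amount is a positive number."""
    errors = []
    if not record.get("customer_id"):
        errors.append("missing customer_id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount <= 0:
        errors.append("bad amount")
    return errors

def quality_gate(records, max_error_rate=0.05):
    """Allow automation only when the share of invalid records is at or
    below the gate; return the decision and the observed rate."""
    bad = sum(1 for r in records if validate(r))
    rate = bad / len(records)
    return rate <= max_error_rate, rate

batch = [{"customer_id": "C1", "amount": 10.0}] * 95 + [{"amount": -1}] * 5
ok, rate = quality_gate(batch)
print(ok, rate)  # True 0.05
```

A gate like this is cheap insurance against the failure mode Brown describes: automating a flawed process at scale instead of fixing it first.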


Outmaneuvering Tariffs: Navigating Disruption with Data-Driven Resilience

The fact that tariffs are coming was expected – President Donald Trump campaigned promising tariffs – but few could have expected their severity (145% on Chinese imports, as of this writing) and their pace of change (prohibitively high “reciprocal” tariffs on 100+ countries, only to be temporarily rescinded days later). Also unpredictable were second-order effects such as stock and bond market reactions, affecting the cost of capital, and the impact on consumer demand, due to the changing expectations of inflation or concerns of job loss. ... Most organizations will have fragmented views of data, including views of all of the components that come from a given supplier or are delivered through a specific transportation provider. They may have a product-centric view that includes all suppliers that contribute all of the components of a given product. But this data often resides in a variety of supplier-management apps, procurement apps, demand forecasting apps, and other types of apps. Some may be consolidated into a data lakehouse or a cloud data warehouse to enable advanced analytics, but the time required by a data engineering team to build the necessary data pipelines from these systems is often multiple days or weeks, and such pipelines will usually only be implemented for scenarios that the business expects will be stable over time.


The state of intrusions: Stolen credentials and perimeter exploits on the rise, as phishing wanes

What’s worrying is that in over half of intrusions (57%) the victim organizations learned about the compromise of their networks and systems from a third party rather than discovering them through internal means. In 14% of cases, organizations were notified directly by attackers, usually in the form of ransom notes, but 43% of cases involved external entities such as a cybersecurity company or law enforcement agencies. The average time attackers spent inside a network until being discovered last year was 11 days, a one-day increase over 2023, though still a major improvement versus a decade ago when the average discovery time was 205 days. Attacker dwell time, as Mandiant calls it, has steadily decreased over the years, which is a good sign ... In terms of ransomware, the most common infection vectors observed by Mandiant last year were brute-force attacks (26%), such as password spraying and use of common default credentials, followed by stolen credentials and exploits (21% each), prior compromises resulting in sold access (15%), and third-party compromises (10%). Cloud accounts and assets were compromised through phishing (39%), stolen credentials (35%), SIM swapping (6%), and voice phishing (6%). Over two-thirds of cloud compromises resulted in data theft and 38% were financially motivated with data extortion, business email compromise, ransomware, and cryptocurrency fraud being leading goals.


Three Ways AI Can Weaken Your Cybersecurity

“Slopsquatting” is a fresh AI take on “typosquatting,” where ne’er-do-wells spread malware to unsuspecting Web travelers who happen to mistype a URL. With slopsquatting, the bad guys are spreading malware through software development libraries that have been hallucinated by GenAI. ... While it is still unclear whether the bad guys have weaponized slopsquatting yet, GenAI’s tendency to hallucinate software libraries is perfectly clear. Last month, researchers published a paper that concluded that GenAI recommends Python and JavaScript libraries that don’t exist about one-fifth of the time. ... Like the SQL injection attacks that plagued early Web 2.0 warriors who didn’t adequately validate database input fields, prompt injections involve the surreptitious injection of a malicious prompt into a GenAI-enabled application to achieve some goal, ranging from information disclosure to code execution rights. Mitigating these sorts of attacks is difficult because of the nature of GenAI applications. Instead of inspecting code for malicious entities, organizations must investigate the entirety of a model, including all of its weights. ... A form of adversarial AI attacks, data poisoning or data manipulation poses a serious risk to organizations that rely on AI. According to the security firm CrowdStrike, data poisoning is a risk to healthcare, finance, automotive, and HR use cases, and can even potentially be used to create backdoors.
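One practical mitigation for slopsquatting is to never install an AI-suggested package directly; instead, vet every suggestion against a pinned allowlist such as your lockfile. The sketch below is a minimal illustration of that idea (the package names and the `KNOWN_GOOD` set are assumptions, not from the article):

```python
# Hedged sketch: guard against slopsquatted (hallucinated) dependencies by
# checking every package a GenAI assistant suggests against a pinned
# allowlist, e.g. one parsed from a project lockfile.

KNOWN_GOOD = {"requests", "numpy", "flask"}  # illustrative allowlist

def vet_suggestions(suggested):
    """Split AI-suggested package names into approved and suspect lists."""
    approved = [p for p in suggested if p.lower() in KNOWN_GOOD]
    suspect = [p for p in suggested if p.lower() not in KNOWN_GOOD]
    return approved, suspect

approved, suspect = vet_suggestions(["requests", "reqeusts-pro", "numpy"])
# "reqeusts-pro" lands in the suspect list for manual review before any install
```

In a real pipeline the allowlist would come from a reviewed lockfile or internal package mirror, so a hallucinated name can never reach `pip install` unexamined.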


AI Has Moved From Experimentation to Execution in Enterprise IT

According to the SOAS report, 94% of organisations are deploying applications across multiple environments—including public clouds, private clouds, on-premises data centers, edge computing, and colocation facilities—to meet varied scalability, cost, and compliance requirements. Consequently, most decision-makers see hybrid environments as critical to their operational flexibility. 91% cited adaptability to fluctuating business needs as the top benefit of adopting multiple clouds, followed by improved app resiliency (68%) and cost efficiencies (59%). A hybrid approach is also reflected in deployment strategies for AI workloads, with 51% planning to use models across both cloud and on-premises environments for the foreseeable future. Significantly, 79% of organisations recently repatriated at least one application from the public cloud back to an on-premises or co-location environment, citing cost control, security concerns, and predictability. ... “While spreading applications across different environments and cloud providers can bring challenges, the benefits of being cloud-agnostic are too great to ignore. It has never been clearer that the hybrid approach to app deployment is here to stay,” said Cindy Borovick, Director of Market and Competitive Intelligence,


Trying to Scale With a Small Team? Here's How to Drive Growth Without Draining Your Resources

To be an effective entrepreneur or leader, communication is key, and being able to prioritize initiatives that directly align with the overall strategic vision ensures that your lean team is working on projects that have the greatest impact. Integrate key frameworks such as Responsible, Accountable, Consulted, and Informed (RACI) and Objectives and Key Results (OKRs) to maintain transparency, focus and measure progress. By focusing efforts on high-impact activities, your lean team can achieve significant results without the unnecessary strain usually attributable to early-stage organizations. ... Many think that agile methodologies are only for the fast-moving software development industry — but in reality, the frameworks are powerful tools for lean teams in any industry. Encouraging the right culture is key, where quick pivots, regular genuine feedback loops and leadership that promotes continuous improvement are part of the everyday workflows. This agile mindset, when adopted early, helps teams rapidly respond to market changes and client issues. ... Trusting others builds rapport. Assigning clear ownership of tasks while allowing those team members the autonomy to execute the strategies creatively and efficiently, while also allowing them to fail, is how trust is created.


Effecting Culture Changes in Product Teams

Depending on the organization, the responsibility of successfully leading a culture shift among the product team could fall to various individuals – the CPO, VP of product development, product manager, etc. But regardless of the specific title, to be an effective leader, you can’t assume you know all the answers. Start by having one-to-one conversations with numerous members on the product/engineering team. Ask for their input and understand, from their perspective, what is working, what’s not working, and what ideas they have for how to accelerate product release timelines. After conducting one-to-one discussions, sit down and correlate the information. Where are the common denominators? Did multiple team members make the same suggestions? Identify the roadblocks that are slowing down the product team or standing in the way of delivering incremental value on a more regular basis. In many cases, tech leaders will find that their team already knows how to fix the issue – they just need permission to do things a bit differently and adjust company policies/procedures to better support a more accelerated timeline. Talking one-on-one with team members also helps resolve any misunderstandings around why the pace of work must change as the company scales and accumulates more customers. Product engineers often have a clear vision of what the end product should entail, and they want to be able to deliver on that vision.


Microsoft Confirms Password Spraying Attack — What You Need To Know

The password spraying attack exploited a command line interface tool called AzureChecker to “download AES-encrypted data that when decrypted reveals the list of password spray targets,” the report said. It then, to add salt to the now open wound, accepted an accounts.txt file containing username and password combinations used for the attack, as input. “The threat actor then used the information from both files and posted the credentials to the target tenants for validation,” Microsoft explained. The successful attack enabled the Storm-1977 hackers to then leverage a guest account in order to create a compromised subscription resource group and, ultimately, more than 200 containers that were used for cryptomining. ... Passwords are no longer enough to keep us safe online. That’s the view of Chris Burton, head of professional services at Pentest People, who told me that “where possible, we should be using passkeys, they’re far more secure, even if adoption is still patchy.” Lorri Janssen-Anessi, director of external cyber assessments at BlueVoyant is no less adamant when it comes to going passwordless. ... And Brian Pontarelli, CEO of FusionAuth, said that the teams who are building the future of passwords are the same ones that are building and managing the login pages of their apps. “Some of them are getting rid of passwords entirely,” Pontarelli said
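The attack pattern described above has a recognizable signature: spraying tries a handful of passwords across many accounts, the inverse of classic brute force. A minimal detection sketch (my own illustration, not Microsoft's detection logic; the threshold and event shape are assumptions) flags any password attempted against an unusually large number of distinct usernames:

```python
# Hedged sketch: flag password spraying by counting distinct accounts
# targeted with the same failed password (or password hash).
from collections import defaultdict

SPRAY_THRESHOLD = 5  # illustrative cutoff; tune for your environment

def detect_spray(failed_logins):
    """failed_logins: iterable of (username, password) failure events."""
    accounts_per_password = defaultdict(set)
    for user, pw in failed_logins:
        accounts_per_password[pw].add(user)
    return {pw for pw, users in accounts_per_password.items()
            if len(users) >= SPRAY_THRESHOLD}

# One password tried against eight accounts, plus one ordinary failure:
events = [(f"user{i}", "Spring2025!") for i in range(8)] + [("alice", "hunter2")]
sprayed = detect_spray(events)  # {"Spring2025!"}
```

Real deployments would window these counts over time and feed alerts into a SIEM, but the core signal is the same: few passwords, many accounts.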


The secret weapon for transformation? Treating it like a merger

Like an IMO, a transformation office serves as the conductor — setting the tempo, aligning initiatives and resolving portfolio-level tensions before they turn into performance issues. It defines the “music” everyone should be playing: a unified vision for experience, business architecture, technology design and most importantly, change management. It also builds connective tissue. It doesn’t just write the blueprint — it stays close to initiative or project leads to ensure adherence, adapts when necessary and surfaces interdependencies that might otherwise go unnoticed. ... What makes the transformation office truly effective isn’t just the caliber of its domain leaders — it’s the steering committee of cross-functional VPs from core business units and corporate functions that provides strategic direction and enterprise-wide accountability. This group sets the course, breaks ties and ensures that transformation efforts reflect shared priorities rather than siloed agendas. Together, they co-develop and maintain a multi-year roadmap that articulates what capabilities the enterprise needs, when and in what sequence. Crucially, they’re empowered to make decisions that span the legacy seams of the organization — the gray areas where most transformations falter. In this way, the transformation office becomes more than connective tissue; it becomes an engine for enterprise decision-making.


Legacy Modernization: Architecting Real-Time Systems Around a Mainframe

When traffic spikes hit our web portal, those requests would flow through to the mainframe. Unlike cloud systems, mainframes can't elastically scale to handle sudden load increases. This created a bottleneck that could overload the mainframe, causing connection timeouts. As timeouts increased, the mainframe would crash, leading to complete service outages with a large blast radius: hundreds of other applications that depend on the mainframe would also be impacted. This is a perfect example of the problems with synchronous connections to the mainframes. Because the mainframes could be overwhelmed by a highly elastic resource like the web, the result could be failures in datastores, and sometimes those failures could bring down all of the consuming applications. ... Change Data Capture became the foundation of our new architecture. Instead of batch ETLs running a few times daily, CDC streamed data changes from the mainframes in near real-time. This created what we called a "system-of-reference" - not the authoritative source of truth (the mainframe remains "system-of-record"), but a continuously updated reflection of it. The system of reference is not a proxy of the system of record, which is why our website was still live when the mainframe went down.
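The system-of-reference pattern can be sketched as a consumer that applies CDC events from the mainframe to a local copy, which the website then reads instead of calling the mainframe synchronously. This is an assumption-laden illustration (the event shape, field names, and in-memory store are mine, not from the article):

```python
# Hedged sketch of a "system-of-reference": a CDC consumer applies change
# events streamed from the mainframe (the system-of-record) to a local store.
# Reads hit this copy, so the site stays up even if the mainframe is down.

system_of_reference = {}  # in practice, a cloud datastore or cache

def apply_change_event(event):
    """Apply one CDC event (insert/update/delete) to the local copy."""
    key, op = event["key"], event["op"]
    if op in ("insert", "update"):
        system_of_reference[key] = event["row"]
    elif op == "delete":
        system_of_reference.pop(key, None)

for ev in [
    {"op": "insert", "key": "acct-1", "row": {"balance": 100}},
    {"op": "update", "key": "acct-1", "row": {"balance": 75}},
    {"op": "insert", "key": "acct-2", "row": {"balance": 50}},
    {"op": "delete", "key": "acct-2", "row": None},
]:
    apply_change_event(ev)
# Reads never block on the mainframe; they see the last replicated state.
```

The key property is asynchrony: a web traffic spike stresses the local store, which can scale elastically, while the mainframe only ever sees its steady CDC stream.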

Daily Tech Digest - February 12, 2025


Quote for the day:

“If you don’t have a competitive advantage, don’t compete.” -- Jack Welch


Security Is Blocking AI Adoption: Is BYOC the Answer?

Enterprises face unique hurdles in adopting AI at scale. Sensitive data must remain within secure, controlled environments, avoiding public networks or shared infrastructures. Traditional SaaS models often fail to meet these stringent data sovereignty and compliance demands. Beyond this, organizations require granular control, comprehensive auditing and full transparency to trace every AI decision and data access. This ensures vendors cannot interact with sensitive data without explicit approval and documentation. These unmet needs create a significant gap, preventing regulated industries from deploying AI solutions while maintaining compliance and security. ... The concept of Bring Your Own Cloud (BYOC) isn’t new. It emerged as a middle ground between traditional SaaS and on-premises deployments, promising to combine the best of both worlds: the convenience of managed services with the control and security of on-premises infrastructure. However, its history in the industry has been marked by both successes and cautionary tales. Early BYOC implementations often failed to live up to their promises. Some vendors merely deployed their software into customer cloud accounts without proper architectural planning, resulting in what was essentially remotely managed on-premises environments. 


The Importance of Continuing Education in Data and Tech

Continuing education plays a vital role in workforce development and career advancement within the tech industries, where rapid technological advancements and evolving market demands necessitate a culture of lifelong learning. As businesses increasingly rely on sophisticated data analytics, artificial intelligence (AI), and cloud technologies, professionals in these fields must continuously update their skills to remain competitive. Continuing education offers a pathway for individuals to acquire new capabilities, adapt to emerging technologies, and gain proficiency in specialized areas that are in high demand. By engaging in ongoing learning opportunities, tech professionals can enhance their expertise, making them more valuable to their current employers and more attractive to potential future ones. ... Professional certifications and competency-based education have become significant avenues for career advancement in the data and tech field. As the landscape of technology rapidly evolves, organizations increasingly seek professionals who possess validated skills and up-to-date knowledge. Professional certifications serve as tangible proof of one’s expertise in specific areas such as data governance, analytics, cybersecurity, or cloud computing. These certifications, offered by leading industry bodies and tech companies, are designed to align with current industry standards and demands.


Agents, shadow AI and AI factories: Making sense of it all in 2025

“Agentic AI” promises “digital agents” that learn from us, and can perceive, reason problems out in multiple steps and then make autonomous decisions on our behalf. They can solve multilayered questions that require them to interact with many other agents, formulate answers and take actions. Consider forecasting agents in the supply chain predicting customer needs by engaging customer service agents, and then proactively adjusting warehouse stock by engaging inventory agents. Every knowledge worker will find themselves gaining these superhuman capabilities backed by a team of domain-specific task agent workers helping them tackle large complex jobs with less expended effort. ... However, the proliferation of generative, and soon agentic AI, presents a growing problem for IT teams. Maybe you’re familiar with “shadow IT,” where individual departments or users procure their own resources, without IT knowing. In today’s world we have “shadow AI,” and it’s hitting businesses on two fronts. ... Today’s enterprises create value through insights and answers driven by intelligence, setting them apart from their competitors. Just as past industrial revolutions transformed industries — think about steam, electricity, internet and later computer software — the age of AI heralds a new era where the production of intelligence is the core engine of every business. 


Is VMware really becoming the new mainframe?

“CIOs can start to unwind their dependence on VMware,” he says. “But they need to know it may not have any material reduction in their spend with Broadcom over multiple renewals. They’re going to have to get completely off Broadcom.” Still, Warrilow recommends that CIOs running VMware consider alternatives over the long term. They should also look for exit strategies for other market-dominant IT products they use, given that Broadcom has seen early success with VMware, he says. “The cautionary tale for CIOs is that this is just the beginning,” he says. “Every tech investment firm is going to be saying, ‘I want what Broadcom has with their share price.’  ... “The comparison works a bit, maybe from a stickiness perspective, because customers have built their applications and workload using virtualization technology on VMware,” he says. “When they have to do a mass refactoring of applications, it’s very, very hard.” But the analogy has its limitations because many users think of mainframes as a legacy technology, while VMware’s cloud-based products address future challenges, he adds. “The cloud is the future for running your AI workload,” Shenoy says. “Customers have trusted us for the last 20 to 25 years to run their business-critical applications, and the interesting part right now is we are seeing a lot of growth of these AI workloads and container workloads running on VMware.”


Deep Learning – a Necessity

It is essential in architecture that we realize that a skill set is not an arbitrary thing. It isn’t learn one skill and you are done. It also isn’t learn any skill from any background and you’re in. It is the application of all of the identified and necessary skills combined that makes a distinguished architect. It is also important to understand the purpose and context of mastery. Working in a startup is very different from working in a large corporation. Industry can change things significantly as well. Always remember that the profession’s purpose has to be paramount in the learning. For example, both doctors and lawyers have to deal with clients and need human interaction skills to be successful. Yet, the nature and implementation of these differ drastically. We will explore this point in a further article. However, do not underestimate the impact of changing the meaning of the profession while claiming similar skills. The current environment is rife with this kind of co-opting of the terminology and tools to alter the whole purpose of architecture fundamentally. ... In medicine and other professions, an individual studies and practices for 7+ years to become fully independent, and they never stop learning. This learning is tracked by both mentors and the profession. Because medicine is so essential to humans it is important that professionals are measured and constantly update and hone their competencies.


Crawl, then walk, before you run with AI agents, experts recommend

The best bet for percolating AI agents throughout the organization is to keep things as simple as possible. "Companies and employees that have already found ways to operationalize intelligent agents for simple tasks are best placed to exploit the next wave with agentic AI," said Benjamin Lee, professor of computer and information science at the University of Pennsylvania. "These employees would already be engaging generative AI for simple tasks and they would be manually breaking complex tasks into simpler tasks for the AI. Such employees would already be seeing productivity gains from using generative AI for these simple tasks." Rowan agreed that enterprises should adopt a crawl, walk, run approach: "Begin with a pilot program to explore the potential of multiagent systems in a controlled, measurable environment." "Most people say AI is at the toddler stage, whereas agentic AI is like a tween," said Ben Sapp, global practice lead of intelligence at Digital.ai. "It's functional and knows how to execute certain functions." Enterprises and their technology teams "should socialize the use of generative AI for simple tasks within their organizations," Lee continued. "They should have strategies for breaking complex tasks into simpler ones so that, when intelligent agents become a reality, the sources of productivity gains are transparent, easily understood, and trusted."


Growth of digital wallet use shaking up payment regulations and benefits delivery

Australian banks are calling on the government to pass legislation that accommodates payments with digital wallets within the country’s regulatory framework. A release from the Australian Banking Association (ABA) argues that with the country’s residents making $20 billion worth of payments across 500 million transactions each month with mobile wallets, all players within the payment ecosystem should be under the remit of the Reserve Bank of Australia. ... Digital wallets are by far the most popular method of making cross-border payments, according to a new report from Payments Cards & Mobile. The How Digital Wallets Are Transforming Cross-Border Transactions report shows digital wallets are chosen for international transactions by 42.1 percent. That makes them more popular than the next two most popular methods, money transfer services (16.8 percent) and bank accounts (14.8 percent), combined. Transactions with digital wallets are much faster than wire transfers, are available to people who don’t possess bank accounts, and have lower fees than bank transfers, the report says. Interoperability remains a challenge, and regulations and infrastructure limitations could pose barriers to adoption, but the report authors only expect the dominance of digital wallets to increase in the years ahead.


My vision is to create a digital twin of our entire operations, from design and manufacturing to products and customers

We approach this transformation from three dimensions. First is empathy – truly understanding not just who our customers are, but their emotions. This is where the concept of creating a ‘digital twin’ of the customer comes in. Second is innovation – not just adopting new technologies but ensuring that our processes are lean, digitised, and seamless throughout the customer journey, from research to purchase, service, and brand loyalty. The goal is to provide a consistent and empathetic experience across all touchpoints.  ... The first challenge is identifying our customers. For example, if a distributor in one business also buys from another or if a consumer connects with one of our industrial projects, it’s hard to track. To address this, we launched a customer UID project, which has been in progress for months. It helps us identify customers across channels while keeping an eye on privacy and adhering to upcoming data protection regulations. The second part involves gathering all customer-related data in one place. Over the past three years, we unified all customer interactions into a single platform with a one CRM strategy, which was complex but essential. Now, with AI solutions like social listening combined with sentiment analysis, we can understand what our customers are saying about us and where we need to improve, both in India and globally. 


Will AI Chip Supply Dry Up and Turn Your Project Into a Costly Monster?

CIOs and other IT leaders face tremendous pressure to quickly develop GenAI strategies in the face of a potential supply shortage. With the cost of individual units, spending can easily reach into the multi-million-dollar range. But it wouldn’t be the first time companies have dealt with semiconductor shortages. During the COVID-19 pandemic, a spike in PC demand for remote work met with global shipping disruptions to create a chip drought that impacted everything from refrigerators to automobiles and PCs. “One thing we learned was the importance of supply chain resiliency, not being overly dependent on any one supplier and understanding what your alternatives are,” Hoecker says. “When we work with clients to make sure they have a more resilient supply chain, we consider a few things … One is making sure they rethink how much inventory do they want to keep for their most critical components so they can survive any potential shocks.” She adds, “Another is geographic resiliency, or understanding where your components come from and do you feel like you’re overly exposed to any one supplier or any one geography.” Nvidia’s GPUs, she notes, are harder to find alternatives for -- but other chips do have alternatives. “There are other places where you can dual-source or find more resiliency in your marketplace.”


WTF? Why the cybersecurity sector is overrun with acronyms

Imagine an organization is in the midst of a massive hack or security breach, and employees or clients are having to Google frantically to translate company emails, memos or crisis plans, slowing down the response. When these acronyms inevitably migrate into a cybersecurity company’s external marketing or communications efforts, they’re almost guaranteed to cause the general public to tune out news about issues and innovations that could have a far-reaching impact on how people live their lives and conduct their businesses. This is especially true as artificial intelligence (AI!) and machine learning (ML!) technologies expand and new acronyms emerge to keep pace with developments. Acronyms can also have unfortunate real-life connotations — point of sale, to name just one example. When shortened to POS, it can suggest something is… well, crappy. ... So, what’s behind the tendency to shorten terms to a jumble of often incomprehensible acronyms and abbreviations? “On the one hand, acronyms, abbreviations and jargon are used to achieve brevity, standardization and efficiency in communication, so if a profession is steeped in complex and technical language, it will likely be flowing with acronyms,” says Ian P. McCarthy, a professor of innovation and operations management at Simon Fraser University in Burnaby, British Columbia.

Daily Tech Digest - October 27, 2024

Who needs a humanoid robot when everything is already robotic?

The service sector will see a surge in delivery robots, streamlining last-mile package and food delivery logistics. Advanced cleaning robots will maintain both homes and commercial spaces. Surgical robots performing minimally invasive procedures with high precision will benefit healthcare. Rehabilitation robots and exoskeletons will transform physical therapy and mobility, while robotic prosthetics will offer enhanced functionality to those who need them. At the microscopic level, nanorobots will revolutionize drug delivery and medical procedures. Agriculture will increasingly embrace harvesting and planting robots to automate crop management, with specialized versions for tasks like weeding and dairy farming. Autonomous vehicles and drone delivery systems will transform the transportation sector, while robotic parking solutions will optimize urban spaces. Military and defense applications will include reconnaissance drones, bomb disposal robots, and autonomous combat vehicles. Space exploration will continue to rely on advanced rovers, satellite-servicing robots, and assistants for astronauts on space stations. Underwater exploration robots and devices monitoring air and water quality will benefit environmental and oceanic research. 


Cybersecurity Isn't Easy When You're Trying to Be Green

Already, some green energy infrastructure has fallen prey to attackers. Charging stations for electric vehicles typically require connectivity, which makes them vulnerable to both compromise and disruption. In 2022, pro-Ukrainian hacktivists compromised chargers in Moscow to display messages of support for Ukraine. In 2019, a solar firm could no longer manage its 500 megawatts of wind and solar sites in the western US after a denial-of-service attack targeted an unpatched firewall, the FBI stated in a Private Industry Notification (PIN) in July. The risk could extend all the way to homeowners, who increasingly have adopted rooftop solar and need to be connected to be able to deliver their solar power and be credited. "This issue will only become more important as small solar systems continue to grow. When every house is a power plant, every house is a target," Morten Lund, of counsel for Foley & Lardner LLP, wrote in a brief directed at energy companies. "In many ways, the distributed nature of solar energy provides significant protection against catastrophic failures. But without sufficient protection at the project level, this strength quickly becomes a weakness."


A look at risk, regulation, and lock-in in the cloud

The threat here, if indeed it is a threat, is multifaceted. Firstly, financial implications can be significant. When a company heavily invests in a specific vendor’s ecosystem, the costs of migrating to a different provider, both in terms of money and resources, can be prohibitive. The reality is that any technology comes with a certain degree of lock-in. That is why I’m often amazed at enterprises that ask me for zero lock-in in any enterprise technology decision. It just does not exist. The question is how do we minimize the impact of the lock-in that any use of technology brings. This is something I explain extensively to enterprises. The risk is operational; dependencies on proprietary APIs and services might necessitate extensive application rewriting. ... Whether governmental regulation is a boon or a bane is a matter of perspective. On one side, it could enforce fairness, ensuring that no single provider exploits its position to the detriment of customers. Conversely, excessive regulation might stifle innovation and limit the aggressive evolution that characterizes the tech world. Also, we should consider that these regulations exist within one or a few countries, and as enterprises are now mostly international firms, that has less of the chilling effect that most expect.


Biometrics options expand, add more layers to secure financial services

The range of technologies being brought to bear against different fraud vectors also includes Herta’s biometrics being utilized by the EU’s EITHOS project to detect deepfakes, and age assurance and automated border control measures a pair of governments are looking into for contract opportunities. ... Mastercard is rolling out passkeys for payments in the Middle East and North Africa, following their launch in India. Starting with the noon Payments platform in the UAE, the Payment Passkey Service will be offered as a more secure alternative to OTPs at online checkouts. A Washington, D.C.-based think tank says America has a digital verification divide, due to the lack of documents possessed by low-income and marginalized people and the conflation of biometrics for ID verification with surveillance and law enforcement. Login.gov has helped less than it is supposed to so far, but evidence from ID.me suggests that the situation could be improved with biometrics. Panama has introduced a national digital ID and wallet for identity verification to access public and private services online. The digital ID is available to both citizens and permanent residents, and essentially digitizes the national ID card supplied by Mühlbauer and partners. 


AI Won’t Fix Your Software Delivery Problems

You can assess your personal productivity because it’s a feeling rather than a number. You don’t feel productive when dealing with busy work or handling constant interruptions. When you get a solid chunk of time to complete a task, you feel great. If an organization is interested in this kind of productivity, it should check in on employee satisfaction because people tend to be more satisfied when they can get things done. The State of DevOps report confirms this problem, as the high ratings for AI-driven productivity aren’t reducing toil work or improving software delivery performance, which we’ve long held to be a solid way for development teams to contribute to the organization’s goals. ... Given the intense focus on increasing the speed of coding, we’re likely seeing suboptimization on a massive scale. Writing code is rarely the bottleneck for feature development. Speeding up the code itself is less valuable if you aren’t catching the bugs it introduces with automated tests. It also fails to address the broader software delivery system or guarantee your features are useful to users. If you aren’t working at the constraint, your optimizations don’t improve throughput. In many cases, optimizing away from the constraint harms the end-to-end system.
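The constraint argument above can be made concrete with a toy pipeline model (a sketch of my own, with made-up stage rates): end-to-end throughput is capped by the slowest stage, so accelerating a non-bottleneck stage such as coding changes nothing.

```python
# Toy model of a delivery pipeline: end-to-end throughput (features/week)
# is limited by the slowest stage, per the theory of constraints.

def throughput(stage_rates):
    return min(stage_rates.values())

pipeline = {"coding": 10, "review": 6, "testing": 4, "deploy": 12}
base = throughput(pipeline)      # 4 — testing is the constraint

pipeline["coding"] = 20          # "AI doubles coding speed"
after_ai = throughput(pipeline)  # still 4: nothing ships any faster

pipeline["testing"] = 8          # invest at the constraint instead
after_fix = throughput(pipeline) # 6 — review becomes the new constraint
```

The numbers are invented, but the shape of the result is the point: optimization only pays off when it is applied at the constraint.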


The mainframe’s future in the age of AI

Running AI on mainframes as a trend is still in its infancy, but the survey suggests many companies do not plan to give up their mainframes even as AI creates new computing needs, says Petra Goude ... “AI can be assistive technology,” Dyer says. “I see it in terms of helping to optimize the code, modernize the code, renovate the code, and assist developers in maintaining that code.” ... “Many institutions are willing to resort to artificial intelligence to help improve outdated systems, particularly mainframes,” he says. “AI reduces the burden on several work phases, such as code rewriting or replacing databases, which streamlines the whole upgrading stage.” ... Many organizations have their mission-critical data residing on mainframes, and it may make sense to run AI models where that data resides, Dyer says. In some cases, that may be a better alternative than moving mission-critical data to other hardware, which may not be as secure or resilient, she adds. “You have both your customer data and then you have what I’ll call the operational data on the mainframe,” she says. “I can see the value of being able to develop and run your models directly right there, because you don’t have to move your data, you have very low latency, high throughput, all those things that you would want for certain types of AI applications.” 


How (and why) federated learning enhances cybersecurity

Federated learning’s popularity is rapidly increasing because it addresses common development-related security concerns. It is also highly sought after for its performance advantages. Research shows this technique can improve an image classification model’s accuracy by up to 20% — a substantial increase. ... Once the primary algorithm aggregates and weighs participants’ updates, it can be reshared for whatever application it was trained for. Cybersecurity teams can use it for threat detection. The advantage here is twofold — while threat actors are left guessing since they cannot easily exfiltrate data, professionals pool insights for highly accurate output. Federated learning is ideal for adjacent applications like threat classification or indicator of compromise detection. The AI’s large dataset size and extensive training build its knowledge base, curating expansive expertise. Cybersecurity professionals can use the model as a unified defense mechanism to protect broad attack surfaces. ML models — especially those that make predictions — are prone to drift over time as concepts evolve or variables become less relevant. With federated learning, teams could periodically update their model with varied features or data samples, resulting in more accurate, timely insights.
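The aggregation step described above is the heart of federated averaging (FedAvg): each participant trains on its own private data and only shares model weights, which the server combines weighted by local dataset size. A minimal sketch, assuming a toy logistic-regression model and synthetic client data (all names and parameters here are illustrative, not from any specific framework):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain logistic-regression gradient descent.
    The raw data (X, y) never leaves the client; only weights are shared."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the log-loss
        w -= lr * grad
    return w

def federated_average(global_weights, client_data):
    """Server step: aggregate client weights, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    # Weighted mean of the client models -- the FedAvg aggregation rule
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))

# Toy run: three "clients" whose private datasets share one underlying concept
rng = np.random.default_rng(0)
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(10):                 # communication rounds
    w = federated_average(w, clients)
print(w)
```

The same loop also illustrates the drift remedy mentioned above: periodically re-running the communication rounds with fresh local samples refreshes the shared model without ever pooling the raw data.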


Augmented Reality's Healthcare Revolution

Many observers believe that AR's most immediate benefit will be in training both current and future healthcare professionals. "AR enables students to interact with virtual content in a real-world setting, providing contextualized learning experiences," Stegman says. Meanwhile, full virtual reality (VR) will offer a completely immersive training environment in which students can practice clinical skills without the risks associated with real patient care. ... As AR begins entering the healthcare mainstream, deep-pocketed large hospitals and specialized medical centers will most likely be the leading adopters, says SOTI's Anand. He reports that his firm's latest healthcare report found that 89% of US healthcare industry respondents agree that artificial intelligence simplifies tasks. "This gives a hint that healthcare organizations are already on the path to integrating advanced technologies," Anand notes. ... AR technology is rapidly evolving, and improvements in hardware (such as AR glasses and headsets), software, and integration with other medical technologies are rapidly making AR more practical and effective. "As these technologies mature, they will become more accessible and affordable," Reitzel predicts.


Achieving peak cyber resilience

In a non-malicious, traditional disaster incident such as hardware failure or accidental deletion, the backup platform isn’t a target. Recovery is straightforward with a recent backup copy: you can quickly restore to the original location or an alternative one. In contrast, a cyberattack maliciously goes after anything and everything, making recovery complex. Backups are an especially attractive target for hackers because they represent an organization’s last line of defense. In a cyberattack scenario, the priority is containing the breach to stop further damage. Forensics teams must pinpoint how the attacker gained entry, find vulnerabilities and malware, and prevent reinfection by diagnosing which systems were potentially affected. Data decontamination is then needed to ensure threats aren’t reintroduced during recovery. Ransomware events can also necessitate coordination across IT disciplines, various business teams, and legal, public relations, investor, and government entities. Disaster recovery is likely something your organization deals with only infrequently. ... Cybercriminals have been enjoying the first-mover advantage in putting AI to work for their nefarious purposes. AI tools have allowed them to increase the frequency, speed and scale of their attacks. But now it’s time to fight fire with fire.


Who Are the AI Goliaths in the Banking Industry? A New Index Reveals a Growing Divide

In the Leadership pillar, banks have significantly increased their AI-related communications. The 50 Index banks published over 1,250 references to “AI” across annual reports, press releases, and company LinkedIn posts—representing a 59% increase year-over-year. This increase in “volume” was accompanied by an increase in “substance,” both across Investor Relations materials and in the engagement of Executive leaders across external media, industry conferences, and LinkedIn. As AI investments mature, the pressure is mounting for banks to demonstrate tangible returns. While 26 banks are now reporting outcomes from AI use cases, only 6 are disclosing financial impacts, and just two (DBS and JPMorgan Chase) are attempting to estimate total realized dollar outcomes across all AI investments. JPMorgan Chase, for instance, reported that the value they assign to their AI use cases is between $1 billion to $1.5 billion in fields such as customer personalization, trading, operational efficiencies, fraud detection, and credit decisioning. DBS, on the other hand, reported an economic value of SGD 370 million from its use of AI/ML in 2023, more than double the value from the previous year.



Quote for the day:

"The quality of leadership, more than any other single factor, determines the success or failure of an organization." -- Fred Fiedler & Martin Chemers