
Daily Tech Digest - July 02, 2025


Quote for the day:

"Success is not the absence of failure; it's the persistence through failure." -- Aisha Tyle


How cybersecurity leaders can defend against the spur of AI-driven NHI

Many companies don’t have lifecycle management for all their machine identities, and security teams may be reluctant to shut down old accounts because doing so might break critical business processes. ... Access-management systems that provide one-time-use credentials to be used exactly when they are needed are cumbersome to set up. And some systems come with default logins like “admin” that are never changed. ... AI agents are the next step in the evolution of generative AI. Unlike chatbots, which only work with company data when provided by a user or an augmented prompt, agents are typically more autonomous and can go out and find needed information on their own. This means that they need access to enterprise systems, at a level that would allow them to carry out all their assigned tasks. “The thing I’m worried about first is misconfiguration,” says Yageo’s Taylor. If an AI agent’s permissions are set incorrectly, “it opens up the door to a lot of bad things to happen.” Because of their ability to plan, reason, act, and learn, AI agents can exhibit unpredictable and emergent behaviors. An AI agent that’s been instructed to accomplish a particular goal might find a way to do it in an unanticipated way, and with unanticipated consequences. This risk is magnified even further with agentic AI systems that use multiple AI agents working together to complete bigger tasks, or even automate entire business processes.


The silent backbone of 5G & beyond: How network APIs are powering the future of connectivity

Network APIs are fueling a transformation by making telecom networks programmable and monetisable platforms that accelerate innovation, improve customer experiences, and open new revenue streams. ... Contextual intelligence is what makes these new-generation APIs so attractive. Your needs change significantly depending on whether you’re playing a cloud game, streaming a match, or participating in a remote meeting. Programmable networks can now detect these needs and adjust dynamically. Take the example of a user streaming a football match. With network APIs, a telecom operator can offer temporary bandwidth boosts just for the game’s duration. Once it ends, the network automatically reverts to the user’s standard plan—no friction, no intervention. ... Programmable networks are expected to have the greatest impact in Industry 4.0, which goes beyond consumer applications. ... 5G combined with IoT and network APIs enables industrial systems to become truly connected and intelligent. Remote monitoring of manufacturing equipment allows for real-time maintenance schedule adjustments based on machine behavior. Over a programmable, secure network, an API-triggered alert can coordinate a remote diagnostic session and even start remedial actions if a fault is found.
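As a rough illustration of the bandwidth-boost scenario above, here is a minimal Python sketch of how an application might request a temporary quality-of-service session from an operator's network API. The endpoint, profile names, and token are hypothetical placeholders, not any specific operator's (or CAMARA's) actual interface.

```python
# Minimal sketch of the "temporary bandwidth boost" idea described above.
# The base URL, fields, and token below are hypothetical placeholders, not a
# real operator API; actual exposure follows operator-specific or
# CAMARA-style Quality-on-Demand specifications.
import requests

API_BASE = "https://api.example-operator.com/qod/v1"   # hypothetical
ACCESS_TOKEN = "replace-with-oauth-token"               # hypothetical

def boost_bandwidth(device_ip: str, duration_s: int, profile: str = "HIGH_THROUGHPUT") -> str:
    """Request a temporary QoS boost for one device, auto-expiring after duration_s."""
    payload = {
        "device": {"ipv4Address": device_ip},
        "qosProfile": profile,        # e.g. a high-throughput profile for streaming
        "duration": duration_s,       # seconds; the network reverts automatically
    }
    resp = requests.post(
        f"{API_BASE}/sessions",
        json=payload,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["sessionId"]   # keep the id in case early teardown is needed

if __name__ == "__main__":
    session_id = boost_bandwidth("203.0.113.17", duration_s=2 * 60 * 60)
    print("QoS session created:", session_id)
```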


Quantum Computers Just Reached the Holy Grail – No Assumptions, No Limits

A breakthrough led by Daniel Lidar, a professor of engineering at USC and an expert in quantum error correction, has pushed quantum computing past a key milestone. Working with researchers from USC and Johns Hopkins, Lidar’s team demonstrated a powerful exponential speedup using two of IBM’s 127-qubit Eagle quantum processors — all operated remotely through the cloud. Their results were published in the prestigious journal Physical Review X. “There have previously been demonstrations of more modest types of speedups like a polynomial speedup,” says Lidar, who is also the cofounder of Quantum Elements, Inc. “But an exponential speedup is the most dramatic type of speedup that we expect to see from quantum computers.” ... What makes a speedup “unconditional,” Lidar explains, is that it doesn’t rely on any unproven assumptions. Prior speedup claims required the assumption that there is no better classical algorithm against which to benchmark the quantum algorithm. Here, the team led by Lidar used an algorithm they modified for the quantum computer to solve a variation of “Simon’s problem,” an early example of quantum algorithms that can, in theory, solve a task exponentially faster than any classical counterpart, unconditionally.
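For readers unfamiliar with Simon's problem, the short Python sketch below illustrates the classical side of the comparison: recovering the hidden bitmask s means hunting for a collision in the oracle, which takes exponentially many queries in the worst case, whereas Simon's quantum algorithm needs only a number of queries linear in the input size. The oracle construction here is purely illustrative, not the experiment in the paper.

```python
# A classical illustration of Simon's problem (not the quantum algorithm):
# f is promised to satisfy f(x) == f(x XOR s) for a hidden bitmask s, and the
# task is to find s. Classically you must find a collision, which scales badly
# with n; Simon's quantum algorithm solves it with O(n) oracle queries.
import itertools

def make_simon_oracle(n: int, s: int):
    """Build a function f on n-bit inputs satisfying f(x) == f(x ^ s)."""
    labels, fresh = {}, itertools.count()
    def f(x: int) -> int:
        rep = min(x, x ^ s)            # both members of a pair map to one label
        if rep not in labels:
            labels[rep] = next(fresh)
        return labels[rep]
    return f

def find_s_classically(f, n: int) -> int:
    """Query f until two distinct inputs collide; their XOR is the hidden s."""
    seen = {}
    for x in range(2 ** n):            # exponential in n in the worst case
        y = f(x)
        if y in seen:
            return seen[y] ^ x
        seen[y] = x
    return 0                            # no collision found means s == 0

if __name__ == "__main__":
    n, s = 10, 0b1011010011
    f = make_simon_oracle(n, s)
    print("recovered s:", bin(find_s_classically(f, n)), "expected:", bin(s))
```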


4 things that make an AI strategy work in the short and long term

Most AI gains came from embedding tools like Microsoft Copilot, GitHub Copilot, and OpenAI APIs into existing workflows. Aviad Almagor, VP of technology innovation at tech company Trimble, also notes that more than 90% of Trimble engineers use GitHub Copilot. The ROI, he says, is evident in shorter development cycles and reduced friction in HR and customer service. Moreover, Trimble has introduced AI into their transportation management system, where AI agents optimize freight procurement by dynamically matching shippers and carriers. ... While analysts often lament the difficulty of showing short-term ROI for AI projects, these four organizations disagree — at least in part. Their secret: flexible thinking and diverse metrics. They view ROI not only as dollars saved or earned, but also as time saved, satisfaction increased, and strategic flexibility gained. London says that Upwave listens for customer signals like positive feedback, contract renewals, and increased engagement with AI-generated content. Given the low cost of implementing prebuilt AI models, even modest wins yield high returns. For example, if a customer cites an AI-generated feature as a reason to renew or expand their contract, that’s taken as a strong ROI indicator. Trimble uses lifecycle metrics in engineering and operations. For instance, one customer used Trimble AI tools to reduce the time it took to perform a tunnel safety analysis from 30 minutes to just three.


How IT Leaders Can Rise to a CIO or Other C-level Position

For any IT professional who aspires to become a CIO, the key is to start thinking like a business leader, not just a technologist, says Antony Marceles, a technology consultant and founder of software staffing firm Pumex. "This means taking every opportunity to understand the why behind the technology, how it impacts revenue, operations, and customer experience," he explained in an email. The most successful tech leaders aren't necessarily great technical experts, but they possess the ability to translate tech speak into business strategy, Marceles says, adding that "Volunteering for cross-functional projects and asking to sit in on executive discussions can give you that perspective." ... CIOs rarely have solo success stories; they're built up by the teams around them, Marceles says. "Colleagues can support a future CIO by giving honest feedback, nominating them for opportunities, and looping them into strategic conversations." Networking also plays a pivotal role in career advancement, not just for exposure, but for learning how other organizations approach IT leadership, he adds. Don't underestimate the power of having an executive sponsor, someone who can speak to your capabilities when you’re not there to speak for yourself, Eidem says. "The combination of delivering value and having someone champion that value -- that's what creates real upward momentum."


SLMs vs. LLMs: Efficiency and adaptability take centre stage

SLMs are becoming central to Agentic AI systems due to their inherent efficiency and adaptability. Agentic AI systems typically involve multiple autonomous agents that collaborate on complex, multi-step tasks and interact with environments. Fine-tuning methods like Reinforcement Learning (RL) effectively imbue SLMs with task-specific knowledge and external tool-use capabilities, which are crucial for agentic operations. This enables SLMs to be efficiently deployed for real-time interactions and adaptive workflow automation, overcoming the prohibitive costs and latency often associated with larger models in agentic contexts. ... Operating entirely on-premises ensures that decisions are made instantly at the data source, eliminating network delays and safeguarding sensitive information. This enables timely interpretation of equipment alerts, detection of inventory issues, and real-time workflow adjustments, supporting faster and more secure enterprise operations. SLMs also enable real-time reasoning and decision-making through advanced fine-tuning, especially Reinforcement Learning. RL allows SLMs to learn from verifiable rewards, teaching them to reason through complex problems, choose optimal paths, and effectively use external tools. 


Quantum’s quandary: racing toward reality or stuck in hyperbole?

One important reason is for researchers to demonstrate their advances and show that they are adding value. Quantum computing research requires significant expenditure, and the return on investment will be substantial if a quantum computer can solve problems previously deemed unsolvable. However, this return is not assured, nor is the timeframe for when a useful quantum computer might be achievable. To continue to receive funding and backing for what ultimately is a gamble, researchers need to show progress — to their bosses, investors, and stakeholders. ... As soon as such announcements are made, scientists and researchers scrutinize them for weaknesses and hyperbole. The benchmarks used for these tests are subject to immense debate, with many critics arguing that the computations are not practical problems or that success in one problem does not imply broader applicability. In Microsoft’s case, a lack of peer-reviewed data means there is uncertainty about whether the Majorana particle even exists beyond theory. The scientific method encourages debate and repetition, with the aim of reaching a consensus on what is true. However, in quantum computing, marketing hype and the need to demonstrate advancement take priority over the verification of claims, making it difficult to place these announcements in the context of the bigger picture.


Ethical AI for Product Owners and Product Managers

As the product and customer information steward, the PO/PM must lead the process of protecting sensitive data. The Product Backlog often contains confidential customer feedback, competitive analysis, and strategic plans that cannot be exposed. This guardrail requires establishing clear protocols for what data can be shared with AI tools. A practical first step is to lead the team in a data classification exercise, categorizing information as Public, Internal, or Restricted. Any data classified for internal use, such as direct customer quotes, must be anonymized before being used in an AI prompt. ... AI is proficient at generating text but possesses no real-world experience, empathy, or strategic insight. This guardrail involves proactively defining the unique, high-value work that AI can assist but never replace. Product leaders should clearly delineate between AI-optimal tasks (creating first drafts of technical user stories, summarizing feedback themes, or checking for consistency across Product Backlog items) and PO/PM-essential areas. These human-centric responsibilities include building genuine empathy through stakeholder interviews, making difficult strategic prioritization trade-offs, negotiating scope, resolving conflicting stakeholder needs, and communicating the product vision. By modeling this partnership and using AI as an assistant to prepare for strategic work, the PO/PM reinforces that their core value lies in strategy, relationships, and empathy.
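As one concrete, deliberately simplified way to operationalize that guardrail, a team could gate prompt text on its classification and redact obvious identifiers from Internal data before it reaches an AI tool, as in the hypothetical Python sketch below. The regex rules stand in for whatever PII-detection tooling the team actually uses.

```python
# A minimal sketch of the "classify, then anonymize before prompting" guardrail
# described above. The labels mirror the Public/Internal/Restricted scheme in the
# text; the regex-based redaction is a deliberate simplification of what a real
# anonymization or PII-detection tool would do.
import re

REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"), "[PHONE]"),
    (re.compile(r"\b(Acme Corp|Jane Doe)\b"), "[CUSTOMER]"),  # hypothetical names
]

def prepare_for_prompt(text: str, classification: str) -> str:
    """Return prompt-safe text, or refuse outright if the data is Restricted."""
    if classification == "Restricted":
        raise ValueError("Restricted data must not be sent to external AI tools.")
    if classification == "Internal":
        for pattern, placeholder in REDACTION_RULES:
            text = pattern.sub(placeholder, text)
    return text  # Public data passes through unchanged

if __name__ == "__main__":
    quote = "Jane Doe (jane@acme.example) said the checkout flow is confusing."
    print(prepare_for_prompt(quote, "Internal"))
```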


Sharded vs. Distributed: The Math Behind Resilience and High Availability

In probability theory, independent events are events whose outcomes do not affect each other. For example, when throwing four dice, the number shown on each die is independent of the other three. Similarly, the availability of each server in a six-node application-sharded cluster is independent of the others. This means that each server has an individual probability of being available or unavailable, and whether one server fails is not affected by whether any other server in the cluster fails. In reality, there may be shared resources or shared infrastructure that links the availability of one server to another. In mathematical terms, this means that the events are dependent. However, we consider the probability of these types of failures to be low, and therefore, we do not take them into account in this analysis. ... Traditional architectures are limited by single-node failure risk. Application-level sharding compounds this problem because if any node goes down, its shard, and therefore the total system, becomes unavailable. In contrast, distributed databases with quorum-based consensus (like YugabyteDB) provide fault tolerance and scalability, enabling higher resilience and improved availability.
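Under the same independence assumption, the arithmetic behind that conclusion is easy to reproduce. The short Python sketch below compares a six-node application-sharded system (up only if every node is up) with a three-replica quorum group (up while a majority of replicas is up); the 99% per-node availability figure is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope version of the availability argument above, assuming
# independent node failures (the same simplification the article makes). An
# application-sharded system is up only if every node is up; a quorum-based
# replica group of size n is up while a majority of its replicas is up.
from math import comb

def sharded_availability(p_node: float, nodes: int) -> float:
    """All shards must be up, so any single node outage takes the system down."""
    return p_node ** nodes

def quorum_availability(p_node: float, replicas: int) -> float:
    """Available while at least a majority (quorum) of replicas is up."""
    quorum = replicas // 2 + 1
    return sum(
        comb(replicas, k) * p_node**k * (1 - p_node) ** (replicas - k)
        for k in range(quorum, replicas + 1)
    )

if __name__ == "__main__":
    p = 0.99  # assumed per-node availability ("two nines")
    print(f"6-node application-sharded system: {sharded_availability(p, 6):.5f}")  # ~0.94148
    print(f"3-replica quorum group:            {quorum_availability(p, 3):.5f}")   # ~0.99970
```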


How FinTechs are turning GRC into a strategic enabler

The misconception that risk management and innovation exist in tension is one that modern FinTechs must move beyond. At its core, cybersecurity – when thoughtfully integrated – serves not as a brake but as an enabler of innovation. The key is to design governance structures that are both intelligent and adaptive (and resilient in themselves). The foundation lies in aligning cybersecurity risk management with the broader business objective: enablement. This means integrating security thinking early in the innovation cycle, using standardized interfaces, expectations, and frameworks that don’t obstruct, but rather channel innovation safely. For instance, when risk statements are defined consistently across teams, decisions can be made faster and with greater confidence. Critically, it starts with the threat model. A well-defined, enterprise-level threat model is the compass that guides risk assessments and controls where they matter most. Yet many companies still operate without a clear articulation of their own threat landscape, leaving their enterprise risk strategies untethered from reality. Without this grounding, risk management becomes either overly cautious or blindly permissive, or a bit of both. We place a strong emphasis on bridging the traditional silos between GRC, IT Security, Red Teaming, and Operational teams.

Daily Tech Digest - June 29, 2025


Quote for the day:

“Great minds discuss ideas; average minds discuss events; small minds discuss people.” -- Eleanor Roosevelt


Who Owns End-of-Life Data?

Enterprises have never been more focused on data. What happens at the end of that data's life? Who is responsible when it's no longer needed? Environmental concerns are mounting as well. A Nature study warns that AI alone could generate up to 5 million metric tons of e-waste by 2030. A study from researchers at Cambridge University and the Chinese Academy of Sciences said the top reason enterprises dispose of e-waste rather than recycling computers is the cost. E-waste can contain metals, including copper, gold, silver, aluminum, and rare earth elements, but proper handling is expensive. Data security is a concern as well, since breach-proofing doesn't get better than destroying equipment. ... End-of-life data management may sit squarely in the realm of IT, but it increasingly pulls in compliance, risk and ESG teams, the report said. Driven by rising global regulations and escalating concerns over data leaks and breaches, C-level involvement at every stage signals that end-of-life data decisions are being treated as strategically vital - not simply handed off. Consistent IT participation also suggests organizations are well-positioned to select and deploy solutions that work with their existing tech stack. That said, shared responsibility doesn't guarantee seamless execution. Multiple stakeholders can lead to gaps unless underpinned by strong, well-communicated policies, the report said.


How AI is Disrupting the Data Center Software Stack

Over the years, there have been many major shifts in IT infrastructure – from the mainframe to the minicomputer to distributed Windows boxes to virtualization, the cloud, containers, and now AI and GenAI workloads. Each time, the software stack seems to get torn apart. What can we expect with GenAI? ... Galabov expects severe disruption in the years ahead on a couple of fronts. Take coding, for example. In the past, anyone wanting a new industry-specific application for their business might pay five figures for development, even if they went to a low-cost region like Turkey. For homegrown software development, the price tag would be much higher. Now, an LLM can be used to develop such an application for you. GenAI tools have been designed explicitly to enhance and automate several elements of the software development process. ... Many enterprises will be forced to face the reality that their systems are fundamentally legacy platforms that are unable to keep pace with modern AI demands. Their only course is to commit to modernization efforts. Their speed and degree of investment are likely to determine their relevance and competitive positioning in a rapidly evolving market. Kleyman believes that the most immediate pressure will fall on data-intensive, analytics-driven platforms such as CRM and business intelligence (BI). 


AI Improves at Improving Itself Using an Evolutionary Trick

The best SWE-bench agent was not as good as the best agent designed by expert humans, which currently scores about 70 percent, but it was generated automatically, and maybe with enough time and computation an agent could evolve beyond human expertise. The study is a “big step forward” as a proof of concept for recursive self-improvement, said Zhengyao Jiang, a cofounder of Weco AI, a platform that automates code improvement. Jiang, who was not involved in the study, said the approach could make further progress if it modified the underlying LLM, or even the chip architecture. DGMs can theoretically score agents simultaneously on coding benchmarks and also specific applications, such as drug design, so they’d get better at getting better at designing drugs. Zhang said she’d like to combine a DGM with AlphaEvolve. ... One concern with both evolutionary search and self-improving systems—and especially their combination, as in DGM—is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned.


Data center costs surge up to 18% as enterprises face two-year capacity drought

Smart enterprises are adapting with creative strategies. CBRE’s Magazine emphasizes “aggressive and long-term planning,” suggesting enterprises extend capacity forecasts to five or 10 years, and initiate discussions with providers much earlier than before. Geographic diversification has become essential. While major hubs price out enterprises, smaller markets such as São Paulo saw pricing drops of as much as 20.8%, while prices in Santiago fell 13.7% due to shifting supply dynamics. Magazine recommended “flexibility in location as key, exploring less-constrained Tier 2 or Tier 3 markets or diversifying workloads across multiple regions.” For Gogia, “Tier-2 markets like Des Moines, Columbus, and Richmond are now more than overflow zones, they’re strategic growth anchors.” Three shifts have elevated these markets: maturing fiber grids, direct renewable power access, and hyperscaler-led cluster formation. “AI workloads, especially training and archival, can absorb 10-20ms latency variance if offset by 30-40% cost savings and assured uptime,” said Gogia. “Des Moines and Richmond offer better interconnection diversity today than some saturated Tier-1 hubs.” Contract flexibility is also crucial. Rather than traditional long-term leases, enterprises are negotiating shorter agreements with renewal options and exploring revenue-sharing arrangements tied to business performance.


Fintech’s AI Obsession Is Useless Without Culture, Clarity and Control

what does responsible AI actually mean in a fintech context? According to PwC’s 2024 Responsible AI Survey, it encompasses practices that ensure fairness, transparency, accountability and governance throughout the AI lifecycle. It’s not just about reducing model bias — it’s about embedding human oversight, securing data, ensuring explainability and aligning outputs with brand and compliance standards. In financial services, these aren’t "nice-to-haves" — they’re essential for scaling AI safely and effectively. Financial marketing is governed by strict regulations and AI-generated content can create brand and legal risks. ... To move AI adoption forward responsibly, start small. Low-risk, high-reward use cases let teams build confidence and earn trust from compliance and legal stakeholders. Deloitte’s 2024 AI outlook recommends beginning with internal applications that use non-critical data — avoiding sensitive inputs like PII — and maintaining human oversight throughout. ... As BCG highlights, AI leaders devote 70% of their effort to people and process — not just technology. Create a cross-functional AI working group with stakeholders from compliance, legal, IT and data science. This group should define what data AI tools can access, how outputs are reviewed and how risks are assessed.


Is Microsoft’s new Mu for you?

Mu uses a transformer encoder-decoder design, which means it splits the work into two parts. The encoder takes your words and turns them into a compressed form. The decoder takes that form and produces the correct command or answer. This design is more efficient than older models, especially for tasks such as changing settings. Mu has 32 encoder layers and 12 decoder layers, a setup chosen to fit the NPU’s memory and speed limits. The model utilizes rotary positional embeddings to maintain word order, dual-layer normalization to maintain stability, and grouped-query attention to use memory more efficiently. ... Mu is truly groundbreaking because it is the first SLM built to let users control system settings using natural language, running entirely on a mainstream shipping device. Apple’s iPhones, iPads, and Macs all have a Neural Engine NPU and run on-device AI for features like Siri and Apple Intelligence. But Apple does not have a small language model as deeply integrated with system settings as Mu. Siri and Apple Intelligence can change some settings, but not with the same range or flexibility. ... By processing data directly on the device, Mu keeps personal information private and responds instantly. This shift also makes it easier to comply with privacy laws in places like Europe and the US since no data leaves your computer.
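To make the architecture description above concrete, here is an illustrative configuration sketch of a Mu-style encoder-decoder. The 32/12 layer split comes from the article; the hidden size, head counts, and key-value group count are hypothetical placeholders, since those details are not given here.

```python
# Illustrative configuration sketch of the encoder-decoder layout described above.
# Layer counts are from the article; hidden size, head counts, and KV-group count
# are assumed values for illustration, not Microsoft's published figures.
from dataclasses import dataclass

@dataclass
class MuStyleConfig:
    encoder_layers: int = 32              # per the article
    decoder_layers: int = 12              # per the article
    hidden_size: int = 1024               # assumed for illustration
    num_attention_heads: int = 16         # assumed
    num_kv_heads: int = 4                 # grouped-query attention: query heads share K/V
    positional_encoding: str = "rotary"   # rotary positional embeddings keep word order
    norm_style: str = "dual_layernorm"    # extra normalization for training stability
    target: str = "npu"                   # sized to fit on-device NPU memory and speed limits

config = MuStyleConfig()
print(f"{config.encoder_layers} encoder + {config.decoder_layers} decoder layers, "
      f"{config.num_attention_heads} query heads sharing {config.num_kv_heads} KV groups")
```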


Is It a Good Time to Be a Software Engineer?

AI may be rewriting the rules of software development, but it hasn’t erased the thrill of being a programmer. If anything, the machines have revitalised the joy of coding. New tools make it possible to code in natural language, ship prototypes in hours, and bypass tedious setup work. From solo developers to students, the process may feel more immediate or rewarding. Yet, this sense of optimism exists alongside an undercurrent of anxiety. As large language models (LLMs) begin to automate vast swathes of development, some have begun to wonder if software engineering is still a career worth betting on. ... Meanwhile, Logan Thorneloe, a software engineer at Google, sees this as a golden era for developers. “Right now is the absolute best time to be a software engineer,” he wrote on LinkedIn. He points out “development velocity” as the reason. Thorneloe believes AI is accelerating workflows, shrinking prototype cycles from months to days, and giving developers unprecedented speed. Companies that adapt to this shift will win, not by eliminating engineers, but by empowering them. More than speed, there’s also a rediscovered sense of fun. Programmers who once wrestled with broken documentation and endless boilerplate are rediscovering the creative satisfaction that first drew them to the field.


Dumping mainframes for cloud can be a costly mistake

Despite industry hype, mainframes are not going anywhere. They quietly support the backbone of our largest banks, governments, and insurance companies. Their reliability, security, and capacity for massive transactions give mainframes an advantage that most public cloud platforms simply can’t match for certain workloads. ... At the core of this conversation is culture. An innovative IT organization doesn’t pursue technology for its own sake. Instead, it encourages teams to be open-minded, pragmatic, and collaborative. Mainframe engineers have a seat at the architecture table alongside cloud architects, data scientists, and developers. When there’s mutual respect, great ideas flourish. When legacy teams are sidelined, valuable institutional knowledge and operational stability are jeopardized. A cloud-first mantra must be replaced by a philosophy of “we choose the right tool for the job.” The financial institution in our opening story learned this the hard way. They had to overcome their bias and reconnect with their mainframe experts to avoid further costly missteps. It’s time to retire the “legacy versus modern” conflict and recognize that any technology’s true value lies in how effectively it serves business goals. Mainframes are part of a hybrid future, evolving alongside the cloud rather than being replaced by it. 


Why Modern Data Archiving Is Key to a Scalable Data Strategy

Organizations are quickly learning they can’t simply throw all data, new and old, at an AI strategy; instead, it needs to be accurate, accessible, and, of course, cost-effective. Without these requirements in place, it’s far from certain AI-powered tools can deliver the kind of insight and reliability businesses need. As part of the various data management processes involved, archiving has taken on a new level of importance. ... For organizations that need to migrate data, for example, archiving is used to identify which datasets are essential, while enabling users to offload inactive data in the most cost-effective way. This kind of win-win can also be applied to cloud resources, where moving data to the most appropriate service can potentially deliver significant savings. Again, this contrasts with tiering systems and NAS gateways, which rely on global file systems to provide cloud-based access to local files. The challenge here is that access is dependent on the gateway remaining available throughout the data lifecycle because, without it, data recall can be interrupted or cease entirely. ... It then becomes practical to strike a much better balance across the typical enterprise storage technology stack, including long-term data preservation and compliance, where data doesn’t need to be accessed so often, but where reliability and security are crucial.


The Impact of Regular Training and Timely Security Policy Changes on Dev Teams

Constructive refresher training drives continuous improvement by reinforcing existing knowledge while introducing new concepts like AI-powered code generation, automated debugging and cross-browser testing in manageable increments. Teams that implement consistent training programs see significant productivity benefits as developers spend less time struggling with unfamiliar tools and more time automating tasks to focus on delivering higher value. ... Security policies that remain static as teams grow create dangerous blind spots, compromising both the team’s performance and the organization’s security posture. Outdated policies fail to address emerging threats like malware infections and often become irrelevant to the team’s current workflow, leading to workarounds and system vulnerabilities. ... Proactive security integration into development workflows represents a fundamental shift from reactive security measures to preventative strategies. This approach enables growing teams to identify and address security concerns early in the development process, reducing the cost and complexity of remediation. Cultivating a security-first culture becomes increasingly important as teams grow. This involves embedding security considerations into various stages of the development life cycle. Early risk identification in cloud infrastructure reduces costly breaches and improves overall team productivity.

Daily Tech Digest - May 29, 2025


Quote for the day:

"All progress takes place outside the comfort zone." -- Michael John Bobak


What Are Deepfakes? Everything to Know About These AI Image and Video Forgeries

Deepfakes rely on deep learning, a branch of AI that mimics how humans recognize patterns. These AI models analyze thousands of images and videos of a person, learning their facial expressions, movements and voice patterns. Then, using generative adversarial networks, AI creates a realistic simulation of that person in new content. GANs are made up of two neural networks where one creates content (the generator), and the other tries to spot if it's fake (the discriminator). The number of images or frames needed to create a convincing deepfake depends on the quality and length of the final output. For a single deepfake image, as few as five to 10 clear photos of the person's face may be enough. ... While tech-savvy people might be more vigilant about spotting deepfakes, regular folks need to be more cautious. I asked John Sohrawardi, a computing and information sciences Ph.D. student leading the DeFake Project, about common ways to recognize a deepfake. He advised people to look at the mouth to see if the teeth are garbled. "Is the video more blurry around the mouth? Does it feel like they're talking about something very exciting but act monotonous? That's one of the giveaways of more lazy deepfakes." ... "Too often, the focus is on how to protect yourself, but we need to shift the conversation to the responsibility of those who create and distribute harmful content," Dorota Mani tells CNET.
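For readers curious about the generator/discriminator mechanics, the toy PyTorch sketch below trains a GAN on a trivial one-dimensional distribution. It is only meant to show the adversarial loop; real deepfake systems use far larger convolutional and video architectures plus face alignment and heavy post-processing.

```python
# Toy sketch of the generator/discriminator setup described above, using PyTorch.
# It learns a trivial 1-D distribution rather than faces.
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(        # maps random noise to a fake "sample"
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1)
)
discriminator = nn.Sequential(    # scores how real a sample looks
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1)
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0           # "real" data drawn from N(3, 0.5)
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: label real samples 1, generated samples 0
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call fakes real
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print("mean of generated samples:",
      generator(torch.randn(1000, latent_dim)).mean().item())  # should approach 3.0
```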


Beyond GenAI: Why Agentic AI Was the Real Conversation at RSA 2025

Contrary to GenAI, which primarily focuses on the divergence of information, generating new content based on specific instructions, SynthAI developments emphasize the convergence of information, presenting less but more pertinent content by synthesizing available data. SynthAI will enhance the quality and speed of decision-making, potentially making decisions autonomously. The most evident application lies in summarizing large volumes of information that humans would be unable to thoroughly examine and comprehend independently. SynthAI’s true value will be in aiding humans to make more informed decisions efficiently. ... Trust in AI also needs to evolve. This isn’t a surprise as AI, like all technologies, is going through the hype cycle and in the same way that cloud and automation suffered with issues around trust in the early stages of maturity, so AI is following a very similar pattern. It will be some time before trust and confidence are in balance with AI. ... Agentic AI encompasses tools that can understand objectives, make decisions, and act. These tools streamline processes, automate tasks, and provide intelligent insights to aid in quick decision making. In a use case involving repetitive processes – take a call center as an example – agentic AI can have significant value.


The Privacy Challenges of Emerging Personalized AI Services

The nature of the search business will change substantially in this world of personalized AI services. It will evolve from a service for end users to an input into an AI service for end users. In particular, search will become a component of chatbots and AI agents, rather than the stand-alone service it is today. This merger has already happened to some degree. OpenAI has offered a search service as part of its ChatGPT deployment since last October. Google launched AI Overview in May of last year. AI Overview returns a summary of its search results generated by Google’s Gemini AI model at the top of its search results. When a user asks a question to ChatGPT, the chatbot will sometimes search the internet and provide a summary of its search results in its answer. ... The best way forward would not be to invent a sector-specific privacy regime for AI services, although this could be made to work in the same way that the US has chosen to put financial, educational, and health information under the control of dedicated industry privacy regulators. It might be a good approach if policymakers were also willing to establish a digital regulator for advanced AI chatbots and AI agents, which will be at the heart of an emerging AI services industry. But that prospect seems remote in today’s political climate, which seems to prioritize untrammeled innovation over protective regulation.


What CISOs can learn from the frontlines of fintech cybersecurity

For Shetty, the idea that innovation competes with security is a false choice. “They go hand in hand,” she says. User trust is central to her approach. “That’s the most valuable currency,” she explains. Lose it, and it’s hard to get back. That’s why transparency, privacy, and security are built into every step of her team’s work, not added at the end. ... Supply chain attacks remain one of her biggest concerns. Many organizations still assume they’re too small to be a target. That’s a dangerous mindset. Shetty points to many recent examples where attackers reached big companies by going through smaller suppliers. “It’s not enough to monitor your vendors. You also have to hold them accountable,” she says. Her team helps clients assess vendor cyber hygiene and risk scores, and encourages them to consider that when choosing suppliers. “It’s about making smart choices early, not reacting after the fact.” Vendor security needs to be an active process. Static questionnaires and one-off audits are not enough. “You need continuous monitoring. Your supply chain isn’t standing still, and neither are attackers.” ... The speed of change is what worries her most. Threats evolve quickly. The amount of data to protect grows every day. At the same time, regulators and customers expect high standards, and they should.


Tech optimism collides with public skepticism over FRT, AI in policing

Despite the growing alarm, some tech executives like OpenAI’s Sam Altman have recently reversed course, downplaying the need for regulation after previously warning of AI’s risks. This inconsistency, coupled with massive federal contracts and opaque deployment practices, erodes public trust in both corporate actors and government regulators. What’s striking is how bipartisan the concern has become. According to the Pew survey, only 17 percent of Americans believe AI will have a positive impact on the U.S. over the next two decades, while 51 percent express more concern than excitement about its expanding role. These numbers represent a significant shift from earlier years and a rare area of consensus between liberal and conservative constituencies. ... Bias in law enforcement AI systems is not simply a product of technical error; it reflects systemic underrepresentation and skewed priorities in AI design. According to the Pew survey, only 44 percent of AI experts believe women’s perspectives are adequately accounted for in AI development. The numbers drop even further for racial and ethnic minorities. Just 27 percent and 25 percent say the perspectives of Black and Hispanic communities, respectively, are well represented in AI systems.


6 rising malware trends every security pro should know

Infostealers steal browser cookies, VPN credentials, MFA (multi-factor authentication) tokens, crypto wallet data, and more. Cybercriminals sell the data that infostealers grab through dark web markets, giving attackers easy access to corporate systems. “This shift commoditizes initial access, enabling nation-state goals through simple transactions rather than complex attacks,” says Ben McCarthy, lead cyber security engineer at Immersive. ... Threat actors are systematically compromising the software supply chain by embedding malicious code within legitimate development tools, libraries, and frameworks that organizations use to build applications. “These supply chain attacks exploit the trust between developers and package repositories,” Immersive’s McCarthy tells CSO. “Malicious packages often mimic legitimate ones while running harmful code, evading standard code reviews.” ... “There’s been a notable uptick in the use of cloud-based services and remote management platforms as part of ransomware toolchains,” says Jamie Moles, senior technical marketing manager at network detection and response provider ExtraHop. “This aligns with a broader trend: Rather than relying solely on traditional malware payloads, adversaries are increasingly shifting toward abusing trusted platforms and ‘living-off-the-land’ techniques.”


How Constructive Criticism Can Improve IT Team Performance

Constructive criticism can be an excellent instrument for growth, both individually and on the team level, says Edward Tian, CEO of AI detection service provider GPTZero. "Many times, and with IT teams in particular, work is very independent," he observes in an email interview. "IT workers may not frequently collaborate with one another or get input on what they're doing," Tian states. ... When using constructive criticism, take an approach that focuses on seeking improvement on the poor result, Chowning advises, and use empathy to solicit ideas on how to improve. She adds that it's important to ask questions, listen, seek to understand, acknowledge any difficulties or constraints, and solicit improvement ideas. ... With any IT team there are two key aspects of constructive criticism: creating the expectation and opportunity for performance improvement, and -- often overlooked -- instilling recognition in the team that performance is monitored and has implications, Chowning says. ... The biggest mistake IT leaders make is treating feedback as a one-way directive rather than a dynamic conversation, Avelange observes. "Too many IT leaders still operate in a command-and-control mindset, dictating what needs to change rather than co-creating solutions with their teams."


How AI will transform your Windows web browser

Google isn’t the only one sticking AI everywhere imaginable, of course. Microsoft Edge already has plenty of AI integration — including a Copilot icon on the toolbar. Click that, and you’ll get a Copilot sidebar where you can talk about the current web page. But the integration runs deeper than most people think, with more coming yet: Copilot in Edge now has access to Copilot Vision, which means you can share your current web view with the AI model and chat about what you see with your voice. This is already here — today. Following Microsoft’s Build 2025 developers’ conference, the company is starting to test a Copilot box right on Edge’s New Tab page. Rather than a traditional Bing search box in that area, you’ll soon see a Copilot prompt box so you can ask a question or perform a search with Copilot — not Bing. It looks like Microsoft is calling this “Copilot Mode” for Edge. And it’s not just a transformed New Tab page complete with suggested prompts and a Copilot box, either: Microsoft is also experimenting with “Context Clues,” which will let Copilot take into account your browser history and preferences when answering questions. It’s worth noting that Copilot Mode is an optional and experimental feature. ... Even the less AI-obsessed browsers of Mozilla Firefox and Brave are now quietly embracing AI in an interesting way.


No, MCP Hasn’t Killed RAG — in Fact, They’re Complementary

Just as agentic systems are all the rage this year, so is MCP. But MCP is sometimes talked about as if it’s a replacement for RAG. So let’s review the definitions. In his “Is RAG dead yet?” post, Kiela defined RAG as follows: “In simple terms, RAG extends a language model’s knowledge base by retrieving relevant information from data sources that a language model was not trained on and injecting it into the model’s context.” As for MCP (and the middle letter stands for “context”), according to Anthropic’s documentation, it “provides a standardized way to connect AI models to different data sources and tools.” That’s the same definition, isn’t it? Not according to Kiela. In his post, he argued that MCP complements RAG and other AI tools: “MCP simplifies agent integrations with RAG systems (and other tools).” In our conversation, Kiela added further (ahem) context. He explained that MCP is a communication protocol — akin to REST or SOAP for APIs — based on JSON-RPC. It enables different components, like a retriever and a generator, to speak the same language. MCP doesn’t perform retrieval itself, he noted, it’s just the channel through which components interact. “So I would say that if you have a vector database and then you make that available through MCP, and then you let the language model use it through MCP, that is RAG,” he continued.
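A rough sketch of that division of labor: the JSON-RPC envelope is the MCP part, and injecting the returned text into the model's context is the RAG part. The method and field names below only approximate the protocol's general shape and should be checked against Anthropic's current MCP documentation before being relied on.

```python
# Hedged sketch of Kiela's point: MCP is just the JSON-RPC channel, and RAG is
# what happens when the model actually uses a retriever exposed over it.
# Method and field names approximate the MCP spec's shape; verify against the
# current Anthropic MCP documentation.
import json

# 1. The client (the agent / LLM host) asks an MCP server to run a retrieval tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_documents",                     # tool exposed by the server
        "arguments": {"query": "Is RAG dead yet?", "top_k": 3},
    },
}

# 2. The MCP server runs the retriever (vector DB, search index, ...) and replies.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text",
                            "text": "RAG extends a model's knowledge by retrieval..."}]},
}

# 3. The host injects the retrieved text into the model's context: that step is RAG.
retrieved = "\n".join(c["text"] for c in response["result"]["content"] if c["type"] == "text")
prompt = f"Answer using only this context:\n{retrieved}\n\nQuestion: Is RAG dead yet?"
print(json.dumps(request, indent=2))
print(prompt)
```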


AI didn’t kill Stack Overflow

Stack Overflow’s most revolutionary aspect was its reputation system. That is what elevated it above the crowd. The brilliance of the rep game allowed Stack Overflow to absorb all the other user-driven sites for developers and more or less kill them off. On Stack Overflow, users earned reputation points and badges for asking good questions and providing helpful answers. In the beginning, what was considered a good question or answer was not predetermined; it was a natural byproduct of actual programmers upvoting some exchanges and not others. ... For Stack Overflow, the new model, along with highly subjective ideas of “quality” opened the gates to a kind of Stanford Prison Experiment. Rather than encouraging a wide range of interactions and behaviors, moderators earned reputation by culling interactions they deemed irrelevant. Suddenly, Stack Overflow wasn’t a place to go and feel like you were part of a long-lived developer culture. Instead, it became an arena where you had to prove yourself over and over again. ... Whether the culture of helping each other will survive in this new age of LLMs is a real question. Is human helping still necessary? Or can it all be reduced to inputs and outputs? Maybe there’s a new role for humans in generating accurate data that feeds the LLMs. Maybe we’ll evolve into gardeners of these vast new tracts of synthetic data.

Daily Tech Digest - February 27, 2025


Quote for the day:

“You get in life what you have the courage to ask for.” -- Nancy D. Solomon



Breach Notification Service Tackles Infostealing Malware

Infostealers can amass massive quantities of credentials. To handle this glut, many cybercriminals create parsers to quickly ingest usernames and passwords for analysis, said Milivoj Rajić, head of threat intelligence at cybersecurity firm DynaRisk. The leaked internal communications of ransomware group Black Basta demonstrated this tactic, he said. Using a shared spreadsheet, the group identified organizations with emails present in infostealer logs, tested which access credentials worked, checked the organization's annual revenue and if its networks were protected by MFA. Using this information helped the ransomware group prioritize its targeting. Another measure of just how much data gets collected by infostealers: the Alien Txtbase records include 244 million passwords not already recorded as breached by Pwned Passwords. Hunt launched that free service in 2017, which anyone can query for free and anonymously, to help users never pick a password that's appeared in a known data breach, shortly after the U.S. National Institute of Standards and Technology began recommending that practice. Not all of the information contained in stealer logs being sold by criminals is necessarily legit. Some of it might be recycled from previous leaks or data dumps. Even so, Hunt said he was able to verify a random sample of the Alien Txtbase corpus with a "handful" of HIBP users he approached.
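For anyone who wants to try the free Pwned Passwords service mentioned above, the short Python sketch below uses its k-anonymity range endpoint, so only the first five characters of the password's SHA-1 hash ever leave your machine.

```python
# Minimal sketch of querying the Pwned Passwords k-anonymity range endpoint:
# only the first five characters of the SHA-1 hash are sent, never the password.
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora (0 = not found)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():          # each line: HASH_SUFFIX:COUNT
        candidate, count = line.split(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))   # a heavily breached example; expect a large count
```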


The critical role of strategic workforce planning in the age of AI

While some companies have successfully deployed strategic workforce planning in the past to reshape their workforces to meet future market requirements, there are also cautionary tales of organizations that have struggled with the transition to new technologies. For instance, the rapid innovation of smartphones left leading players such as Nokia behind. Periods of rapid technological change highlight the importance of predicting and responding to challenges with a dynamic talent planning model. Gen AI is not just another technological advancement affecting specific tasks; it represents a rewiring of how organizations operate and generate value. This transformation goes beyond automation, innovation, and productivity improvements to fundamentally alter the ratio of humans to technology in organizations. By having SWP in place, organizations can react more quickly and intentionally to these changes, monitoring leading and lagging indicators to stay ahead of the curve. This approach allows for identifying and developing new capabilities, ensuring that the workforce is prepared for the evolving demands these changes will bring. SWP gives a fact base to all talent decisions so that trade-offs can be explicitly discussed and strategic decisions can be made holistically—and with enterprise value top of mind. 


Cybersecurity in fintech: Protecting user data and preventing fraud

Fintech companies operate at the intersection of finance and technology, making them particularly vulnerable to cyber threats. These platforms process vast amounts of personal and financial data—from bank account details and credit card numbers to loan records and transaction histories. A single security breach can have devastating consequences, leading to financial losses, regulatory penalties, and reputational damage. Beyond individual risks, fintech platforms are interconnected within a larger financial ecosystem. A vulnerability in one system can cascade across multiple institutions, disrupting transactions, exposing sensitive data, and eroding trust. Given this landscape, cybersecurity in fintech is not just about preventing attacks—it’s about ensuring the integrity of the entire digital financial infrastructure. ... Governments and regulatory bodies worldwide recognise the critical role of cybersecurity in fintech. Frameworks like the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the U.S. set stringent standards for data privacy and security. Compliance is not just a legal necessity—it’s an opportunity for fintech companies to build trust with users. By adhering to global security best practices, fintech firms can differentiate themselves in an increasingly competitive market while ensuring customer data remains protected.


The Smart Entrepreneur's Guide to Thriving in Uncertain Times

If there's one certainty in business, it's change. The most successful entrepreneurs aren't just those who have great ideas — they are the ones who know how to adapt. Whether it's economic downturns, shifts in consumer behavior or emerging competition, the ability to navigate uncertainty is what separates sustainable businesses from those that struggle to survive. ... Instead of long-term strategies that assume stability, use quick experiments to validate new ideas and adjust quickly. When we launched new membership models at our office, we tested different pricing structures and adjusted based on user feedback within weeks rather than months. ... Digital engagement is changing. Entrepreneurs who optimize their messaging based on social media trends and consumer preferences gain a competitive edge. For example, when we noticed an increase in demand for remote work solutions, we adjusted our marketing efforts to highlight our virtual office plans. ... A strong company culture that embraces change enables faster adaptation during challenging times. Jim Collins, in Good to Great, emphasizes that having the right people in the right seats is fundamental for long-term success. At Coworking Smart, we focused on hiring individuals who thrived in dynamic environments rather than just filling positions based on traditional job descriptions.


Risk Management for the IT Supply Chain

Who are your mission-critical vendors? Do they present significant risks (for example, risk of a merger, or going out of business)? Where are your IT supply chain “weak links” (such as vendors whose products and services repeatedly fail)? Are they impairing your ability to provide top-grade IT to the business? What countries do you operate in? Are there technology and support issues that could emerge in those locations? Do you annually send questionnaires to vendors that query them so you can ascertain that they are strong, reliable and trustworthy suppliers? Do you request your auditors periodically review IT supply chain vendors for resiliency, compliance and security? ... Most enterprises include security and compliance checkpoints on their initial dealings with vendors, but few check back with the vendors on a regular basis after the contracts are signed. Security and governance guidelines change from year to year. Have your IT vendors kept up? When was the last time you requested their latest security and governance audit reports from them? Verifying that vendors stay in step with your company’s security and governance requirements should be done annually. ... Although companies include their production supply chains in their corporate risk management plans, they don’t consistently consider the IT supply chain and its risks.


IT infrastructure: Inventory before AIOps

Even if the advantages are clear, the right story is also needed internally to initiate an introduction. Benedikt Ernst from the IBM spin-off Kyndryl sees a certain “shock potential,” especially in the financial dimension, which is ideally anticipated in advance: “The argumentation of costs is crucial because the introduction of AIOps is, of course, an investment in the first instance. Organizations need to ask themselves: How quickly is a problem detected and resolved today? And how does an accelerated resolution affect operating costs and downtime?” In addition, there is another aspect that he believes is too often overlooked: “Ultimately, the introduction of AIOps also reveals potential on the employee side. The fewer manual interventions in the infrastructure are necessary, the more employees can focus on things that really require their attention. For this reason, I see the use of open integration platforms as helpful in making automation and AIOps usable across different platforms.” Storm Reply’s Henckel even sees AIOps as a tool for greater harmony: “The introduction of AIOps also means an end to finger-pointing between departments. With all the different sources of error — database, server, operating system — it used to be difficult to pinpoint the cause of the error. AIOps provides detailed analysis across all areas and brings more harmony to infrastructure evaluation.”


Navigating Supply Chain Risk in AI Chips

The fragmented nature of semiconductor production poses significant challenges for supplier risk management. Beyond the risk posed by delays in delivery or production, which can disrupt operations, such a globalized and complex supply chain poses challenges from a regulatory angle. Chipmakers must take full responsibility for ensuring compliance at every level by thoroughly monitoring and vetting every entity in the supply chain for risks such as forced labor, sanctions violations, bribery, and corruption. ... Many companies are diversifying their supplier base, increasing local procurement efforts, and using predictive modeling to better anticipate demand, in order to address the risk of disruption triggered by delays in delivery or operations. By leveraging advanced data analytics and securing multiple supply routes, businesses can better increase resilience to external shocks and mitigate the risk of supply chain delays. Additionally, firms can incorporate a “value at risk” model into supply chain and operational risk management frameworks. This approach quantifies the financial impact of potential supply chain disruptions, helping chipmakers prioritize the most critical risk areas. ... The AI chip supply chain is a cornerstone of modern innovation, but due to its global and interdependent nature, it is inherently complex.
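As a simplified illustration of the "value at risk" approach described above, the Monte Carlo sketch below simulates supplier disruptions and reads off a high percentile of total annual loss. The supplier list, probabilities, and impact ranges are invented for illustration only.

```python
# Simplified sketch of a supply chain "value at risk" estimate: simulate which
# suppliers get disrupted in a year, sum the losses, and take a high percentile.
# All figures below are invented placeholders, not real benchmarks.
import random

SUPPLIERS = {
    # name: (annual disruption probability, loss range in $M if disrupted)
    "wafer_fab":    (0.05, (50, 400)),
    "advanced_pkg": (0.10, (10, 120)),
    "substrate":    (0.15, (5, 60)),
    "logistics":    (0.20, (1, 25)),
}

def simulate_annual_loss() -> float:
    """One simulated year: each supplier may be disrupted, adding a random loss."""
    loss = 0.0
    for prob, (low, high) in SUPPLIERS.values():
        if random.random() < prob:
            loss += random.uniform(low, high)
    return loss

def value_at_risk(confidence: float = 0.95, trials: int = 100_000) -> float:
    """Loss level that simulated years stay below with the given confidence."""
    losses = sorted(simulate_annual_loss() for _ in range(trials))
    return losses[int(confidence * trials)]

if __name__ == "__main__":
    print(f"95% supply chain VaR: ${value_at_risk():.0f}M")
```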


Charting the AI-fuelled evolution of embedded analytics

The idea behind embedded analytics is to negate a great deal of the friction around data insights. In theory, line-of-business users have been able to view relevant insights for a long time, by allowing them to import data into the self-service business intelligence (SSBI) tool of their choice. In practice, this disrupts their workflow and interrupts their chain of thought, so a lot of people choose not to make that switch. They’re even less likely to do so if they have to manually export and migrate the data to a different tool. That means they’re missing out on data insights, just when they could be the most valuable for their decisions. Embedded analytics delivers all the charts and insights alongside whatever the user is working on at the time – be it an accounting app, a CRM, a social media management platform or whatever else – which is far more useful. “It’s a lot more intuitive, a lot more functional if it’s in the same place,” says Perez. “Also, generally speaking, the people who use these types of business apps are non-technical, and so the more complicated you make it for them to get to the analysis, the less of it they’ll do.” ... So far, so impressive. But Perez emphasises that there are a number of barriers to embedded analytics utopia. Businesses need to bear these in mind as they seek to develop their own solutions or find providers who can deliver them.


Open source software vulnerabilities found in 86% of codebases

jQuery, a JavaScript library, was the most frequent source of vulnerabilities, as eight of the top 10 high-risk vulnerabilities were found there. Among scanned applications, 43% contained some version of jQuery — oftentimes, an outdated version. An XSS vulnerability affecting outdated versions of jQuery, called CVE-2020-11023, was the most frequently found high-risk vulnerability. McGuire remarks, “There’s also an interesting shift towards web-based and multi-tenant (SaaS) applications, meaning more high-severity vulnerabilities (81% of audited codebases). We also observed an overwhelming majority of high-severity vulnerabilities belonging to jQuery.” ... McGuire explains, “Embedded software providers are going to be increasingly focused on the quality, safety and reliability of the software they build. Looking at this year’s data, 79% of the codebases were using components whose latest versions had no development activity in the last two years. This means that these dependencies could become less reliable, so industries, like aerospace and medical devices, should look to identify these in their own codebases and start moving on from them.” ... “Enterprise regulated organizations are being forced to align with numerous requirements, including providing SBOMs with their applications. If an SBOM isn’t accurate, it’s useless,” McGuire states.
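As a toy illustration of the kind of check involved, the Python sketch below flags a project whose declared jQuery dependency predates 3.5.0, the release that addressed CVE-2020-11023. Real SCA and SBOM tools go much further, inspecting lockfiles, transitive dependencies, and bundled assets.

```python
# Toy check of the jQuery finding above: flag a project whose declared jQuery
# version predates 3.5.0 (the release that fixed CVE-2020-11023). Real SCA/SBOM
# tooling also inspects lockfiles, transitive dependencies, and bundled assets.
import json
import re

def jquery_is_outdated(package_json_path: str, fixed=(3, 5, 0)) -> bool:
    with open(package_json_path) as fh:
        manifest = json.load(fh)
    deps = {**manifest.get("dependencies", {}), **manifest.get("devDependencies", {})}
    spec = deps.get("jquery")
    if spec is None:
        return False                        # jQuery not declared in this manifest
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", spec)
    if not match:
        return True                         # unparsable pin: flag for manual review
    return tuple(int(g) for g in match.groups()) < fixed

if __name__ == "__main__":
    print(jquery_is_outdated("package.json"))
```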


A 5-step blueprint for cyber resilience

Many claim to practice developer security operations, or DevSecOps, by testing software for security flaws at every stage. At least that's the theory. In reality, developers are under constant pressure to get software into production, and DevSecOps can be an impediment to meeting deadlines. "You hear all these people saying, 'Yes, we're doing DevSecOps,' but the reality is, a lot of people aren't," says Lanowitz. "If you're really focused on being secure by design, you're going to want to do things right from the beginning, meaning you're going to want to have your network architecture correct, your software architecture correct." ... "We have to be able to speak the language of the business," says Lanowitz. "Break down the silos that exist in the organization, get the cyber team and the business team talking, [and] align cybersecurity initiatives with overarching business initiatives." Again, executive leadership needs to point the way, but it often needs convincing. Compliance is a great place to start, because most industries have rules, laws, or insurance providers that mandate a basic level of cybersecurity. ... The more eyes you have on a cybersecurity problem, the more quickly a solution can be found. Because of this, even large companies rely on external managed service providers (MSPs), managed security service providers (MSSPs), managed detection and response (MDR) providers, consultants and advisors.

Daily Tech Digest - February 07, 2025


Quote for the day:

"Doing what you love is the cornerstone of having abundance in your life." -- Wayne Dyer


Data, creativity, AI, and marketing: where do we go from here?

While the causes of inefficient data coordination vary, silos remain the most frequent offender. There is still a widespread tendency to collect and store data in isolated buckets, a problem compounded by a lingering reliance on manual processing – as underscored by the fact that four in ten cross-industry employees cite structuring, preparing and manipulating information among their top data difficulties. As a result, a sizable number of organizations are working with fragmented, inconsistent data that requires time-consuming wrangling and is prone to human error. The obvious problem this poses is a lack of the comprehensive data needed to inform sound decisions. At the AI-assisted marketing level, faulty data can easily jeopardise creative efforts, resulting in irrelevant ads that miss the mark with target audiences and brand goals, and in misguided strategic moves based on skewed analysis. Of course, there are no quick fixes for these complications. But businesses can reach greater data maturity and efficacy by reconfiguring their orchestration methods. With a streamlined system that consistently delivers consolidated data, marketers will be equipped to extract the key performance and consumer insights that steer refined, precise AI-enhanced activity.


How AI is transforming strategy development

Beyond these well-understood risks, gen AI presents five additional considerations for strategists. First, it elevates the importance of access to proprietary data. Gen AI is accelerating a long-term trend: the democratization of insights. It has never been easier to leverage off-the-shelf tools to rapidly generate insights that are the building blocks of any strategy. As the adoption of AI models spreads, so do the consequences of relying on commoditized insights. After all, companies that use generic inputs will produce generic outputs, which lead to generic strategies that, almost by definition, lead to generic performance or worse. As a result, the importance of curating proprietary data ecosystems (more on these below) that incorporate quantitative and qualitative inputs will only increase. Second, the proliferation of data and insights elevates the importance of separating signal from noise. This has long been a challenge, but gen AI compounds it. We believe that as the technology matures, it will be able to effectively pull out the signals that matter, but it is not there yet. Third, as the ease of insight generation grows, so does the value of executive-level synthesis. Business leaders—particularly those charged with making strategic decisions—cannot operate effectively if they are buried in data, even if that data is nothing but signal. 


Why Cybersecurity Is Everyone’s Responsibility

Ultimately, cybersecurity is everyone’s responsibility because the fallout affects us all when something goes wrong. When a company goes through a data breach – say it’s ransomware – a number of people are taken to task, and even more are impacted. First, the CEO and CISO will rightly be held accountable. Next, security managers will bear their share of the blame and be scrutinized for how they handled the situation. Then, laws and lawmakers will be audited to see if the proper rules were in place. The organization will be investigated for compliance violations, and if found guilty, will pay regulatory fines, legal costs, and maybe lose professional licenses. If the company cannot recover from the reputational damage, revenue will be lost, and jobs will be cut. Lastly, and most importantly, the users who lost their data will likely be impacted for years, even a lifetime. Bank accounts and credit cards will need to be changed, identity theft will be a pressing risk, and in the case of healthcare data breaches, sensitive, unchangeable information could be leaked or used to blackmail the victims. ... The burden of cybersecurity rests with us all. There is an old saying attributed to Dale Carnegie: “Here lies the body of William Jay, who died maintaining his right of way— He was right, dead right, as he sped along, but he’s just as dead as if he were wrong.”


Spy vs spy: Security agencies help secure the network edge

“Products designed with Secure by Design principles prioritize the security of customers as a core business requirement, rather than merely treating it as a technical feature,” the introductory web page said. “During the design phase of a product’s development lifecycle, companies should implement Secure by Design principles to significantly decrease the number of exploitable flaws before introducing them to the market for widespread use or consumption. Out-of-the-box, products should be secure with additional security features such as multi-factor authentication (MFA), logging, and single sign-on (SSO) available at no extra cost.” ... However, she doesn’t feel that lumping together internet-connected firewalls, routers, IoT devices, and OT systems in an advisory is helpful to the community, and “neither is calling them ‘edge devices,’ because it assumes that enterprise IT is the center of the universe and the ‘edge’ is out there.” “That may be true for firewalls, routers, and VPN gateways, but not for OT systems,” she continued. ... Many are internet-connected to support remote operations and maintenance, she noted, so “the goal there should be to give advice on how to remote into those systems securely, and the tone of the advisories should be targeted to the production realities where IT security tools and processes are not always a good idea.”


Will the end of Windows 10 accelerate CIO interest in AI PCs?

“The vision around AI PCs is that, over time, more of the models, starting with small language models, and then quantized large language models … more of those workloads will happen locally, faster, with lower latency, and you won’t need to be connected to the internet and it should be less expensive,” the IDC analyst adds. “You’ll pay a bit more for an AI PC but [the AI workload is] not on the cloud and then arguably there’s more profit and it’s more secure.” ... “It’s smart for CIOs to consider some early deployments of these to bring the AI closer to the employees and processes,” Melby says. “A side benefit is that it keeps the compute local and reduces cyber risk to a degree. But it takes a strategic view and precision targeting. The costs of AI PCs/laptops are at a premium right now, so we really need a compelling business case, and the potential for reduced cloud costs could help break loose those kinds of justifications.” Not all IT leaders are on board with running AI on PCs and laptops. “Unfortunately, there are many downsides to this approach, including being locked into the solution, upgrades becoming more difficult, and not being able to benefit from any incremental improvements,” says Tony Marron, managing director of Liberty IT at Liberty Mutual.
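To make the local-workload idea concrete, here is a minimal sketch of routing an AI task to a model served on the device rather than a cloud API. It assumes a local runtime such as Ollama is already installed, listening on its default port, with a small quantized model pulled; the port, endpoint and model name are assumptions, not anything specific to AI PC hardware:

```typescript
// Sketch: send a prompt to a locally served model, so the data never leaves the
// machine and the call works offline. Assumes Ollama's default HTTP API.

interface GenerateResponse {
  response: string;
}

async function summariseLocally(text: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2:3b",   // a small model suited to on-device inference (assumption)
      prompt: `Summarise in two sentences:\n\n${text}`,
      stream: false,          // return one JSON object rather than a stream
    }),
  });
  const data = (await res.json()) as GenerateResponse;
  return data.response;       // no cloud round trip, no per-token API bill
}
```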


Self-sovereign identity could transform fraud prevention, but…

Despite these challenges, SSI has the potential to be a powerful tool in the fight against fraud. Consider the growing use of mobile driver’s licenses (mDLs). These digital credentials allow users to prove their identity quickly and securely without exposing unnecessary personal information. Unlike traditional forms of identification, which often reveal more data than needed, SSI-based credentials operate on the principle of minimal disclosure, only sharing the required details. This limits the amount of exploitable information in circulation and reduces identity theft risk. Another promising area is passwordless authentication. For years, we’ve talked about the death of the password, yet reliance on weak, easily compromised credentials persists. SSI could accelerate the transition to more secure authentication mechanisms, using biometrics and cryptographic certificates instead of passwords. By eliminating centralized repositories of login credentials, businesses can significantly reduce the risk of credential-stuffing attacks and phishing attempts. However, the likelihood of a fully realized SSI wallet that consolidates identity documents, payment credentials and other sensitive information remains low, at least in the near future. The convenience factor isn’t there yet, and without significant consumer demand, businesses have little motivation to push for mass adoption.
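One concrete example of the passwordless direction described above is the browser’s WebAuthn API, named here purely as an illustration rather than anything the article prescribes: the device generates a key pair, often unlocked by biometrics, and the server stores only the public key, so there is no shared secret to phish or stuff. The relying party, user details and locally generated challenge below are placeholders; in a real flow the challenge comes from the server:

```typescript
// Sketch of passwordless registration with WebAuthn. All identifying values are
// placeholders; the challenge must be issued and verified server-side in practice.

async function registerPasskey(): Promise<Credential | null> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // placeholder challenge

  return navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Bank" },                        // placeholder relying party
      user: {
        id: new TextEncoder().encode("user-123"),          // placeholder user handle
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { userVerification: "preferred" }, // e.g. biometric unlock
    },
  });
}
```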


The Staging Bottleneck: Microservices Testing in FinTech

Two common scaling strategies exist: mocking dependencies, which sacrifices fidelity and risks failures in critical integrations, or duplicating staging environments, which is costly and complex due to compliance needs. Teams often resort to shared environments, causing bottlenecks, interference and missed bugs — slowing development and increasing QA overhead. ... By multiplexing the baseline staging setup, sandboxes provide tailored environments for individual engineers or QA teams without adding compliance risks or increasing maintenance burdens, as they inherit the same compliance and configuration frameworks as production. These environments allow teams to work independently while maintaining fidelity to production conditions. Sandboxes integrate seamlessly with external APIs and dependencies, replicating real-world scenarios such as rate limits, timeouts and edge cases. This enables robust testing of workflows and edge cases while preserving isolation to avoid disruptions across teams or systems. ... By adopting sandboxes, FinTech organizations can enable high-quality, efficient development cycles, ensuring compliance while unlocking innovation at scale. This paradigm shift away from monolithic staging environments toward dynamic, scalable sandboxes gives FinTech companies a critical competitive advantage.
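A minimal sketch of the multiplexing idea, assuming header-based routing; the header name, service URLs and registry below are illustrative assumptions, not a specific product’s API:

```typescript
// Requests tagged with a sandbox identifier are routed to that engineer's version of
// a service; untagged traffic falls through to the shared, compliant staging baseline.

const SANDBOX_HEADER = "x-sandbox-id"; // hypothetical header propagated across service calls

// Hypothetical registry: sandbox id -> override endpoint for the service under test.
const sandboxRoutes: Record<string, string> = {
  "payments-pr-482": "http://payments-pr-482.sandbox.internal:8080",
};

const BASELINE_URL = "http://payments.staging.internal:8080";

// Decide where a request for the payments service should go, given incoming headers.
export function resolvePaymentsUpstream(headers: Record<string, string | undefined>): string {
  const sandboxId = headers[SANDBOX_HEADER];
  return (sandboxId && sandboxRoutes[sandboxId]) || BASELINE_URL;
}

// resolvePaymentsUpstream({ "x-sandbox-id": "payments-pr-482" }); // -> sandboxed build
// resolvePaymentsUpstream({});                                    // -> shared baseline
```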


From Code to Culture: Adapting Workplaces to the AI Era

As AI reshapes industries, it also exposes a critical gap in workforce readiness. The skills required to excel in an AI-driven world are evolving rapidly, and many employees find their current capabilities misaligned with these new demands. In this context, reskilling is not just a response to technological disruption; it is a strategic necessity for ensuring long-term organisational resilience. Today’s workforce is broadening its skillset at an unprecedented pace. Professionals are acquiring 40% more diverse skills than they were five years ago, reflecting the growing need to adapt to the complexities of AI-integrated workplaces. AI literacy has emerged as a crucial area of focus, encompassing abilities like prompt engineering and proficiency with tools. ... Beyond its operational benefits, AI is reimagining innovation and strategic decision-making in a volatile business environment characterised by economic uncertainty and rapid technological shifts. However, organisations must tread carefully. AI is not a panacea, and its effectiveness depends on thoughtful implementation. Ethical considerations like data privacy, algorithmic bias, and the potential for job displacement must be addressed to ensure that AI augments rather than undermines human potential. Transparent communication about AI’s role in the workplace can foster trust and help employees understand its benefits.


CIOs and CISOs grapple with DORA: Key challenges, compliance complexities

“As often happens with such ambitious regulations, the path to compliance is particularly complex,” says Giuseppe Ridulfo, deputy head of the organization department and head of IS at Banca Etica. “This is especially true for smaller entities, such as Banca Etica, which find themselves having to face significant structural challenges. DORA, although having shared objectives, lacks a principle of proportionality that takes into account the differences between large institutions and smaller banks.” This is compounded for smaller organizations due to the prevalence of outsourcing for these firms, Ridulfo explains. “This operating model, which allows access to advanced technologies and skills, clashes with the stringent requirements of the regulation, in particular those that impose rigorous control over third-party suppliers and complex management of contracts relating to essential or important functions,” he says. ... The complexity of DORA, therefore, lies not in the text itself, substantial though it is, but in the work compliance entails. As Davide Baldini, lawyer and partner of the ICT Legal Consulting firm, points out, “DORA is a very clear law, as it is a regulation, which is applied equally in all EU countries and contains very detailed provisions.


True Data Freedom Starts with Data Integrity

Data integrity is essential to ensuring business continuity, and the movement of data poses a significant risk. A lack of pre-migration testing is the main cause of issues such as data corruption and data loss during migration. These issues lead to unexpected downtime, reputational damage, and loss of essential information. As seen in this year’s global incident, one fault, no matter how small, can result in a significant negative impact on the business and its stakeholders. This incident sends a clear message – testing before implementation is essential. Without proper testing, organizations cannot identify potential issues and implement corrective measures. ... This includes testing for both functionality, or how well the system operates after migration, and economics, the cost-effectiveness of the system or application. Functionality testing ensures a system continues to meet expectations. Economics testing involves examining resource consumption, service costs and overall scalability to ascertain whether the solution is economically sustainable for the business. This is particularly important with cloud-based migrations. While organizations can conduct these audits manually, tools on the market can also help conduct regular automated data integrity audits.
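As a rough sketch of what one pre-migration integrity check can look like, the snippet below compares row counts and a content checksum between source and target before cutover; the row shape and the way rows are loaded are placeholder assumptions standing in for real database reads:

```typescript
// Sketch: verify that a migrated table matches its source by row count and checksum.

import { createHash } from "node:crypto";

interface Row {
  id: string;
  [column: string]: unknown;
}

// Deterministic fingerprint of a dataset: hash rows in a stable order with stable key order.
function checksum(rows: Row[]): string {
  const hash = createHash("sha256");
  for (const row of [...rows].sort((a, b) => a.id.localeCompare(b.id))) {
    hash.update(JSON.stringify(row, Object.keys(row).sort()));
  }
  return hash.digest("hex");
}

export function verifyMigration(sourceRows: Row[], targetRows: Row[]): void {
  if (sourceRows.length !== targetRows.length) {
    throw new Error(`Row count mismatch: ${sourceRows.length} vs ${targetRows.length}`);
  }
  if (checksum(sourceRows) !== checksum(targetRows)) {
    throw new Error("Checksum mismatch: data was altered or corrupted during migration");
  }
  // No mismatch found for this table: safe to proceed with cutover.
}
```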