
Daily Tech Digest - August 04, 2025


Quote for the day:

"You don’t have to be great to start, but you have to start to be great." — Zig Ziglar


Why tomorrow’s best devs won’t just code — they’ll curate, coordinate and command AI

It is not just about writing code anymore — it is about understanding systems, structuring problems and working alongside AI like a team member. That is a tall order. That said, I do believe that there is a way forward. It starts by changing the way we learn. If you are just starting out, avoid relying on AI to get things done. It is tempting, sure, but in the long run, it is also harmful. If you skip the manual practice, you are missing out on building a deeper understanding of how software really works. That understanding is critical if you want to grow into the kind of developer who can lead, architect and guide AI instead of being replaced by it. ... AI-augmented developers will replace large teams that used to be necessary to move a project forward. In terms of efficiency, there is a lot to celebrate about this change — reduced communication time, faster results and higher bars for what one person can realistically accomplish. But, of course, this does not mean teams will disappear altogether. It is just that the structure will change. ... Being technically fluent will still remain a crucial requirement — but it won’t be enough to simply know how to code. You will need to understand product thinking, user needs and how to manage AI’s output. It will be more about system design and strategic vision. For some, this may sound intimidating, but for others, it will also open many doors. People with creativity and a knack for problem-solving will have huge opportunities ahead of them.


The Wild West of Shadow IT

From copywriting tools to deck generators, code assistants, and data crunchers, most of them were never reviewed or approved. The productivity gains of AI are huge. Productivity has been catapulted forward in every department and across every vertical. So what could go wrong? Oh, just sensitive data leaks, uncontrolled API connections, persistent OAuth tokens, and no monitoring, audit logs, or privacy policies… and that's just to name a few of the very real and dangerous issues. ... Modern SaaS stacks form an interconnected ecosystem. Applications integrate with each other through OAuth tokens, API keys, and third-party plug-ins to automate workflows and enable productivity. But every integration is a potential entry point — and attackers know it. Compromising a lesser-known SaaS tool with broad integration permissions can serve as a stepping stone into more critical systems. Shadow integrations, unvetted AI tools, and abandoned apps connected via OAuth can create a fragmented, risky supply chain. ... Let's be honest: compliance has become a jungle due to IT democratization. From GDPR to SOC 2… your organization's compliance is hard to gauge when your employees use hundreds of SaaS tools and your data is scattered across more AI apps than you even know about. You have two compliance challenges on the table: you need to make sure the apps in your stack are compliant, and you also need to ensure that your environment is under control should an audit take place.
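The audit problem described above lends itself to automation: inventory every OAuth grant in the stack, then flag the unapproved, stale, or over-privileged ones. Below is a minimal sketch in Python; the `OAuthGrant` fields, scope names, and 90-day staleness threshold are illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record of an OAuth grant discovered in the SaaS stack;
# all field names here are illustrative, not from a specific vendor's API.
@dataclass
class OAuthGrant:
    app: str
    scopes: list
    approved: bool
    last_used: datetime

# Example scope strings standing in for "broad data access" permissions.
HIGH_RISK_SCOPES = {"mail.read", "files.readwrite.all", "directory.read.all"}
STALE_AFTER = timedelta(days=90)  # assumed threshold for an abandoned app

def flag_risky_grants(grants, now):
    """Return (app, reasons) pairs for grants that violate the policy."""
    flagged = []
    for g in grants:
        reasons = []
        if not g.approved:
            reasons.append("unapproved app")
        if now - g.last_used > STALE_AFTER:
            reasons.append("stale grant (possible abandoned app)")
        if HIGH_RISK_SCOPES & set(g.scopes):
            reasons.append("broad data access scopes")
        if reasons:
            flagged.append((g.app, reasons))
    return flagged
```

A real implementation would pull grants from each identity provider's audit API rather than a hand-built list, but the triage logic stays this simple.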


Edge Computing: Not Just for Tech Giants Anymore

A resilient local edge infrastructure significantly enhances the availability and reliability of enterprise digital shopfloor operations by providing powerful on-premises processing as close to the data source as possible—ensuring uninterrupted operations while avoiding external cloud dependency. For businesses, this translates to improved production floor performance and increased uptime—both critical in sectors such as manufacturing, healthcare, and energy. In today’s hyperconnected market, where customers expect seamless digital interactions around the clock, any delay or downtime can lead to lost revenue and reputational damage. Moreover, as AI, IoT, and real-time analytics continue to grow, on-premises OT edge infrastructure combined with industrial-grade connectivity such as private 4.9G/LTE or 5G provides the necessary low-latency platform to support these emerging technologies. Investing in resilient infrastructure is no longer optional; it’s a strategic imperative for organisations seeking to maintain operational continuity, foster innovation, and stay ahead of competitors in an increasingly digital and dynamic global economy. ... Once, infrastructure decisions were dominated by IT and boiled down to a simple choice between public and private infrastructure. Today, with IT/OT convergence, it’s all about fit-for-purpose architecture. On-premises edge computing doesn’t replace the cloud — it complements it in powerful ways.


A Reporting Breakthrough: Advanced Reporting Architecture

Advanced Reporting Architecture is based on a powerful and scalable SaaS architecture, which efficiently addresses user-specific reporting requirements by generating all possible reports upfront. Users simply select and analyze the views that matter most to them. The Advanced Reporting Architecture’s SaaS platform is built for global reach and enterprise reliability, with the following features: Modern User Interface: Delivered via AWS, optimized for mobile and desktop, with seamless language switching (English, French, German, Spanish, and more to come). Encrypted Cloud Storage: Ensuring uploaded files and reports are always secure. Serverless Data Processing: High-precision processing that analyzes user-uploaded data and applies data-driven relevance factors to maximize analytical efficiency and lower processing costs. Comprehensive Asset Management: Support for editable reports, dashboards, presentations, pivots, and custom outputs. Integrated Payments & Accounting: Powered by PayPal and Odoo. Simple Subscription Model: Pay only for what you use—no expensive licenses, hardware, or ongoing maintenance. Some leading-edge reporting platforms, such as PrestoCharts, are based on Advanced Reporting Architecture and have been successful in enabling business users to develop custom reports on the fly. Thus, Advanced Reporting Architecture puts reporting prowess in the hands of the user.


These jobs face the highest risk of AI takeover, according to Microsoft

According to the report -- which has yet to be peer-reviewed -- the most at-risk jobs are those that are based on the gathering, synthesis, and communication of information, at which modern generative AI systems excel: think translators, sales and customer service reps, writers and journalists, and political scientists. The most secure jobs, on the other hand, are supposedly those that depend more on physical labor and interpersonal skills. No AI is going to replace phlebotomists, embalmers, or massage therapists anytime soon. ... "It is tempting to conclude that occupations that have high overlap with activities AI performs will be automated and thus experience job or wage loss, and that occupations with activities AI assists with will be augmented and raise wages," the Microsoft researchers note in their report. "This would be a mistake, as our data do not include the downstream business impacts of new technology, which are very hard to predict and often counterintuitive." The report also echoes what's become something of a mantra among the biggest tech companies as they ramp up their AI efforts: that even though AI will replace or radically transform many jobs, it will also create new ones. ... It's possible that AI could play a role in helping people practice that skill. About one in three Americans are already using the technology to help them navigate a shift in their career, a recent study found.


AIBOMs are the new SBOMs: The missing link in AI risk management

AIBOMs follow the same formats as traditional SBOMs, but contain AI-specific content and metadata, like model family, acceptable usage, AI-specific licenses, etc. If you are a security leader at a large defense contractor, you’d need the ability to identify model developers and their country of origin. This would ensure you are not utilizing models originating from near-peer adversary countries, such as China. ... The first step is inventorying their AI. Utilize AIBOMs to inventory your AI dependencies, monitor what is approved vs. requested vs. denied, and ensure you have an understanding of what is deployed where. The second is to actively seek out AI, rather than waiting for employees to discover it. Organizations need capabilities to identify AI in code and automatically generate resulting AIBOMs. This should be integrated as part of the MLOps pipeline to generate AIBOMs and automatically surface new AI usage as it occurs. The third is to develop and adopt responsible AI policies. Some of them are fairly common-sense: no contributors from OFAC countries, no copylefted licenses, no usage of models without a three-month track record on HuggingFace, and no usage of models over a year old without updates. Then, enforce those policies in an automated and scalable system. The key is moving from reactive discovery to proactive monitoring.
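The responsible-AI policies listed above can be enforced mechanically once AI dependencies are inventoried. Here is a minimal sketch; the entry format, model names, and the abbreviated OFAC country list are hypothetical stand-ins, loosely inspired by the model metadata (origin, license, release dates) an AIBOM can carry.

```python
from datetime import date

# Illustrative AIBOM entries. All names and values are hypothetical; a real
# AIBOM would follow an SBOM format such as CycloneDX with AI-specific fields.
AIBOM = [
    {"model": "acme-ner-v2", "origin": "US", "license": "apache-2.0",
     "published": date(2024, 9, 1), "last_update": date(2025, 6, 1)},
    {"model": "fastgen-7b", "origin": "IR", "license": "gpl-3.0",
     "published": date(2025, 7, 20), "last_update": date(2025, 7, 20)},
]

OFAC_COUNTRIES = {"IR", "KP", "CU", "SY"}      # simplified stand-in list
COPYLEFT = {"gpl-2.0", "gpl-3.0", "agpl-3.0"}  # licenses the policy bans

def policy_violations(entry, today):
    """Apply the article's example policies to one AIBOM entry."""
    v = []
    if entry["origin"] in OFAC_COUNTRIES:
        v.append("contributor from OFAC country")
    if entry["license"] in COPYLEFT:
        v.append("copyleft license")
    if (today - entry["published"]).days < 90:
        v.append("under three-month track record")
    if (today - entry["last_update"]).days > 365:
        v.append("no updates in over a year")
    return v
```

Run against every entry in CI, this is the "automated and scalable system" the article calls for: new AI usage surfaces as an AIBOM diff, and violations block the pipeline instead of waiting for a manual review.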


2026 Budgets: What’s on Top of CIOs’ Lists (and What Should Be)

CIO shops are becoming outcome-based, which makes them accountable for what they’re delivering against the value potential, not how many hours were burned. “The biggest challenge seems to be changing every day, but I think it’s going to be all about balancing long-term vision with near-term execution,” says Sudeep George, CTO at software-delivered AI data company iMerit. “Frankly, nobody has a very good idea of what's going to happen in 2026, so everyone's placing bets,” he continues. “This unpredictability is going to be the nature of the beast, and we have to be ready for that.” ... “Reducing the amount of tech debt will always continue to be a focus for my organization,” says Calleja-Matsko. “We’re constantly looking at re-evaluating contracts, terms, [and] whether we have overlapping business capabilities that are being addressed by multiple tools that we have.” It’s rationalizing, she adds, and what that does is free up investment. How is this vendor pricing its offering? How do we make sure we include enough in our budget based on that pricing model? “That’s my challenge,” Calleja-Matsko emphasizes. Talent is top of mind for 2026, both in terms of attracting it and retaining it. Ultimately though, AI investments are enabling the company to spend more time with customers.


Digital Twin: Revolutionizing the Future of Technology and Industry

The rise of the Internet of Things (IoT) has made digital twin technology more relevant and accessible. IoT devices continuously gather data from their surroundings and send it to the cloud. This data is used to create and update digital twins of those devices or systems. In smart homes, digital twins help monitor and control lighting, heating, and appliances. In industrial settings, IoT sensors track machine health and performance. Moreover, these smart systems can detect minor issues early, before they lead to failures. As more devices come online, digital twins offer greater visibility and control. ... Despite its benefits, digital twin technology comes with challenges. One major issue is the high cost of implementation. Setting up sensors, software systems, and data processing can be expensive, particularly for small businesses. There are also concerns about data security and privacy. Since digital twins rely on continuous data flows, any breach can be risky. Integrating digital twins into existing systems can be complex. Moreover, it requires skilled professionals who understand both the physical systems and the intricate digital technologies. Another challenge is ensuring the quality and accuracy of the data. If the input data is flawed, the digital twin’s results will also be unreliable. Companies must also cope with large amounts of data, which requires a robust IT infrastructure.
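The monitoring loop described above, where sensor readings flow into a twin that mirrors device state and flags early warning signs, can be sketched in a few lines of Python. The thresholds, field names, and machine ID are illustrative assumptions, not values from any real deployment.

```python
class MachineTwin:
    """Minimal digital-twin sketch: mirrors one machine's latest sensor
    state and flags warning signs before they become failures."""

    VIBRATION_LIMIT = 7.0   # mm/s, hypothetical warning threshold
    TEMP_LIMIT = 85.0       # degrees C, hypothetical

    def __init__(self, machine_id):
        self.machine_id = machine_id
        self.state = {}     # last known value of every sensor
        self.alerts = []

    def ingest(self, reading):
        # Merge the latest IoT reading into the twin's mirrored state,
        # then check the updated state against the warning thresholds.
        self.state.update(reading)
        if self.state.get("vibration_mm_s", 0) > self.VIBRATION_LIMIT:
            self.alerts.append("vibration above limit: possible bearing wear")
        if self.state.get("temp_c", 0) > self.TEMP_LIMIT:
            self.alerts.append("temperature above limit")

# Readings may be partial; the twin keeps the last known value per sensor.
twin = MachineTwin("press-01")
twin.ingest({"vibration_mm_s": 3.2, "temp_c": 60.0})
twin.ingest({"vibration_mm_s": 8.5})   # temp_c carries over from before
```

The data-quality caveat in the text applies directly: if `vibration_mm_s` arrives miscalibrated, the twin raises false alarms or misses real wear, which is why input validation matters as much as the model itself.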


Why Banks Must Stop Pretending They’re Not Tech Companies

The most successful "banks" of the future may not even call themselves banks at all. While traditional institutions cling to century-old identities rooted in vaults and branches, their most formidable competitors are building financial ecosystems from the ground up with APIs, cloud infrastructure, and data-driven decision engines. ... The question isn’t whether banks will become technology companies. It’s whether they’ll make that transition fast enough to remain relevant. And to do this, they must rethink their identity by operating as technology platforms that enable fast, connected, and customer-first experiences. ... This isn’t about layering digital tools on top of legacy infrastructure or launching a chatbot and calling it innovation. It’s about adopting a platform mindset — one that treats technology not as a cost center but as the foundation of growth. A true platform bank is modular, API-first, and cloud-native. It uses real-time data to personalize every interaction. It delivers experiences that are intuitive, fast, and seamless — meeting customers wherever they are and embedding financial services into their everyday lives. ... To keep up with the pace of innovation, banks must adopt skills-based models that prioritize adaptability and continuous learning. Upskilling isn’t optional. It’s how institutions stay responsive to market shifts and build lasting capabilities. And it starts at the top.


Colo space crunch could cripple IT expansion projects

For enterprise IT execs who already have a lot on their plates, the lack of available colocation space represents yet another headache to deal with, and one with major implications. Nobody wants to have to explain to the CIO or the board of directors that the company can’t proceed with digitization efforts or AI projects because there’s no space to put the servers. IT execs need to start the planning process now to get ahead of the problem. ... Demand has outstripped supply due to multiple factors, according to Pat Lynch, executive managing director at CBRE Data Center Solutions. “AI is definitely part of the demand scenario that we see in the market, but we also see growing demand from enterprise clients for raw compute power that companies are using in all aspects of their business.” ... It’s not GPU chip shortages that are slowing down new construction of data centers; it’s power. When a hyperscaler, colo operator or enterprise starts looking for a location to build a data center, the first thing they need is a commitment from the utility company for the required megawattage. According to a McKinsey study, data centers are consuming more power due to the proliferation of the power-hungry GPUs required for AI. Ten years ago, a 30 MW data center was considered large. Today, a 200 MW facility is considered normal.

Daily Tech Digest - July 16, 2025


Quote for the day:

"Whatever the mind of man can conceive and believe, it can achieve." -- Napoleon Hill


The Seventh Wave: How AI Will Change the Technology Industry

AI presents three threats to the software industry: Cheap code: TuringBots, using generative AI to create software, threaten the low-code/no-code players. Cheap replacement: Software systems, be they CRM or ERP, are structured databases – repositories for client records or financial records. Generative AI, coupled with agentic AI, holds out the promise of a new way to manage this data, opening the door to an enterprising generation of tech companies that will offer AI CRM, AI financials, AI database, AI logistics, etc. ... Better functionality: AI-native systems will continually learn and flex and adapt without millions of dollars of consulting and customization. They hold the promise of being up to date and always ready to take on new business problems and challenges without rebuilds. When the business and process changes, the tech will learn and change. ... On one hand, the legacy software systems that PwC, Deloitte, and others have implemented for decades and that comprise much of their expertise will be challenged in the short term and shrink in the long term. Simultaneously, there will be a massive demand for expertise in AI. Cognizant, Capgemini, and others will be called on to help companies implement AI computing systems and migrate away from legacy vendors. Forrester believes that the tech services sector will grow by 3.6% in 2025.


Software Security Imperative: Forging a Unified Standard of Care

The debate surrounding liability in the open source ecosystem requires careful consideration. Imposing direct liability on individual open source maintainers could stifle the very innovation that drives the industry forward. It risks dismantling the vast ecosystem that countless developers rely upon. ... The software bill of materials (SBOM) is rapidly transitioning from a nascent concept to an undeniable business necessity. As regulatory pressures intensify, driven by a growing awareness of software supply chain risks, a robust SBOM strategy is becoming critical for organizational survival in the tech landscape. But the value of SBOMs extends far beyond a single software development project. While often considered for open source software, an SBOM provides visibility across the entire software ecosystem. It illuminates components from third-party commercial software, helps manage data across merged projects and validates code from external contributors or subcontractors — any code integrated into a larger system. ... The path to a secure digital future requires commitment from all stakeholders. Technology companies must adopt comprehensive security practices, regulators must craft thoughtful policies that encourage innovation while holding organizations accountable and the broader ecosystem must support the collaborative development of practical and effective standards.


The 4 Types of Project Managers

The prophet type is all about taking risks and pushing boundaries. They don’t play by the rules; they make their own. And they’re not just thinking outside the box, they’re throwing the box away altogether. It’s like a rebel without a cause, except this rebel has a cause – growth. These visionaries thrive in ambiguity and uncertainty, seeing potential where others see only chaos or impossibility. They often face resistance from more conservative team members who prefer predictable outcomes and established processes. ... The gambler type is all about taking chances and making big bets. They’re not afraid to roll the dice and see what happens. And while they play by the rules of the game, they don’t have a good business case to back up their bets. It’s like convincing your boss to let you play video games all day because you just have a hunch it will improve your productivity. But don’t worry, the gambler type isn’t just blindly throwing money around. They seek to engage other members of the organization who are also up for a little risk-taking. ... The expert type is all about challenging the existing strategy by pursuing growth opportunities that lie outside the current strategy, but are backed up by solid quantitative evidence. They’re like the detectives of the business world, following the clues and gathering the evidence to make their case. And while the growth opportunities are well-supported and should be feasible, the challenge is getting other organizational members to listen to their advice.


OpenAI, Google DeepMind and Anthropic sound alarm: ‘We may be losing the ability to understand AI’

The unusual cooperation comes as AI systems develop new abilities to “think out loud” in human language before answering questions. This creates an opportunity to peek inside their decision-making processes and catch harmful intentions before they turn into actions. But the researchers warn this transparency is fragile and could vanish as AI technology advances. ... “AI systems that ‘think’ in human language offer a unique opportunity for AI safety: we can monitor their chains of thought for the intent to misbehave,” the researchers explain. But they emphasize that this monitoring capability “may be fragile” and could disappear through various technological developments. ... When AI models misbehave — exploiting training flaws, manipulating data, or falling victim to attacks — they often confess in their reasoning traces. The researchers found examples where models wrote phrases like “Let’s hack,” “Let’s sabotage,” or “I’m transferring money because the website instructed me to” in their internal thoughts. Jakub Pachocki, OpenAI’s chief scientist and co-author of the paper, described the importance of this capability in a social media post. “I am extremely excited about the potential of chain-of-thought faithfulness & interpretability. It has significantly influenced the design of our reasoning models, starting with o1-preview,” he wrote.
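The monitoring idea is straightforward to prototype, even if production systems need far more than keyword matching. A naive sketch, using only the phrases quoted in the article; paraphrase, obfuscation, or non-English reasoning traces would evade it, which is part of why the researchers call the capability fragile.

```python
import re

# Naive chain-of-thought monitor: scan a model's reasoning trace for
# phrases signalling intent to misbehave. The phrase list comes from the
# examples quoted in the article; a real monitor would need to handle
# paraphrase, obfuscation, and multilingual traces.
SUSPECT_PATTERNS = [
    r"\blet'?s hack\b",
    r"\blet'?s sabotage\b",
    r"\btransferring money because the website instructed\b",
]

def flag_trace(trace: str) -> list:
    """Return the suspicious patterns found in a reasoning trace."""
    lowered = trace.lower()
    return [p for p in SUSPECT_PATTERNS if re.search(p, lowered)]
```

In practice such a monitor would run on every trace before the model's action is executed, routing flagged episodes to human review rather than blocking outright.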


Unmasking AsyncRAT: Navigating the labyrinth of forks

We believe that the groundwork for AsyncRAT was laid earlier by the Quasar RAT, which has been available on GitHub since 2015 and features a similar approach. Both are written in C#; however, their codebases differ fundamentally, suggesting that AsyncRAT was not just a mere fork of Quasar, but a complete rewrite. A fork, in this context, is a personal copy of someone else’s repository that one can freely modify without affecting the original project. The main link that ties them together lies in the custom cryptography classes used to decrypt the malware configuration settings. ... Ever since it was released to the public, AsyncRAT has spawned a multitude of new forks that have built upon its foundation. ... It’s also worth noting that DcRat’s plugin base builds upon AsyncRAT and further extends its functionality. Among the added plugins are capabilities such as webcam access, microphone recording, Discord token theft, and “fun stuff”, a collection of plugins used for joke purposes like opening and closing the CD tray, blocking keyboard and mouse input, moving the mouse, turning off the monitor, etc. Notably, DcRat also introduces a simple ransomware plugin that uses the AES-256 cipher to encrypt files, with the decryption key distributed only once the plugin has been requested.


Repatriating AI workloads? A hefty data center retrofit awaits

CIOs with in-house AI ambitions need to consider compute and networking, in addition to power and cooling, Thompson says. “As artificial intelligence moves from the lab to production, many organizations are discovering that their legacy data centers simply aren’t built to support the intensity of modern AI workloads,” he says. “Upgrading these facilities requires far more than installing a few GPUs.” Rack density is a major consideration, Thompson adds. Traditional data centers were designed around racks consuming 5 to 10 kilowatts, but AI workloads, particularly model training, push this to 50 to 100 kilowatts per rack. “Legacy facilities often lack the electrical backbone, cooling systems, and structural readiness to accommodate this jump,” he says. “As a result, many CIOs are facing a fork in the road: retrofit, rebuild, or rent.” Cooling is also an important piece of the puzzle because not only does it enable AI, but upgrades there can help pay for other upgrades, Thompson says. “By replacing inefficient air-based systems with modern liquid-cooled infrastructure, operators can reduce parasitic energy loads and improve power usage effectiveness,” he says. “This frees up electrical capacity for productive compute use — effectively allowing more business value to be generated per watt. For facilities nearing capacity, this can delay or eliminate the need for expensive utility upgrades or even new construction.”
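The rack-density jump translates directly into capacity math, and it shows why the cooling upgrade "pays for" compute. A back-of-the-envelope sketch using the article's figures; the 2 MW facility budget and the PUE values are assumptions chosen for illustration.

```python
# Legacy racks draw roughly 5-10 kW, AI training racks 50-100 kW (per the
# article), and better cooling lowers PUE, leaving more of the utility
# commitment for IT load instead of overhead.

def racks_supported(facility_kw, pue, kw_per_rack):
    """How many racks fit once PUE overhead is taken off the top."""
    it_kw = facility_kw / pue          # power left for IT equipment
    return int(it_kw // kw_per_rack)

FACILITY_KW = 2000                     # assumed 2 MW utility commitment

legacy = racks_supported(FACILITY_KW, pue=1.6, kw_per_rack=8)      # 156 racks
ai_air = racks_supported(FACILITY_KW, pue=1.6, kw_per_rack=80)     # 15 racks
ai_liquid = racks_supported(FACILITY_KW, pue=1.2, kw_per_rack=80)  # 20 racks
```

Under these assumptions, the same building that held ~156 legacy racks holds only 15 air-cooled AI racks, and moving to liquid cooling (PUE 1.6 to 1.2) buys back a third more AI capacity without touching the utility feed.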


Burnout, budgets and breaches – how can CISOs keep up?

As ever, collaboration in a crisis is critical. Security teams working closely with backup, resilience and recovery functions are better able to absorb shocks. When the business is confident in its ability to restore operations, security professionals face less pressure and uncertainty. This is also true for communication, especially post-breach. Organisations need to be transparent about how they’re containing the incident and what’s being done to prevent recurrence. ... There is also an element of the blame game going on, with everyone keen to avoid responsibility for an inevitable cyber breach. It’s much easier to point fingers at the IT team than to look at the wider implications or causes of a cyber-attack. Even something as simple as a phishing email can cause widespread problems and is something that individual employees must be aware of. ... To build and retain a capable cybersecurity team amid the widening skills gap, CISOs must lead a shift in both mindset and strategy. By embedding resilience into the core of cyber strategy, CISOs can reduce the relentless pressure to be perfect and create a healthier, more sustainable working environment. But resilience isn’t built in isolation. To truly address burnout and retention, CISOs need C-suite support and cultural change. Cybersecurity must be treated as a shared business-critical priority, not just an IT function. 


We Spend a Lot of Time Thinking Through the Worst - The Christ Hospital Health Network CISO

“We’ve spent a lot of time meeting with our business partners and talking through, ‘Hey, how would this specific part of the organization be able to run if this scenario happened?’” On top of internal preparations, Kobren shares that his team monitors incidents across the industry to draw lessons from real-world events. Given the unique threat landscape, he states, “We do spend a lot of time thinking through those scenarios because we know it’s one of the most attacked industries.” Moving forward, Kobren says that healthcare consistently ranks at the top when it comes to industries frequently targeted by cyberattacks. He elaborates that attackers have recognized the high impact of disrupting hospital services, making ransom demands more effective because organizations are desperate to restore operations. ... To strengthen identity security, Kobren follows a strong, centralized approach to access control. He mentions that the organization aims to manage “all access to all systems,” including remote and cloud-based applications. By integrating services with single sign-on (SSO), the team ensures control over user credentials: “We know that we are in control of your username and password.” This allows them to enforce password complexity, reset credentials when needed, and block accounts if security is compromised. Ultimately, Kobren states, “We want to be in control of as much of that process as possible” when it comes to identity management.


AI requires mature choices from companies

According to Felipe Chies of AWS, elasticity is the key to a successful AI infrastructure. “If you look at how organizations set up their systems, you see that the computing time when using an LLM can vary greatly. This is because the model has to break down the task and reason logically before it can provide an answer. It’s almost impossible to predict this computing time in advance,” says Chies. This requires an infrastructure that can handle this unpredictability: one that is quickly scalable, flexible, and doesn’t involve long waits for new hardware. Nowadays, you can’t afford to wait months for new GPUs, says Chies. The reverse is also important: being able to scale back. ... Ruud Zwakenberg of Red Hat also emphasizes that flexibility is essential in a world that is constantly changing. “We cannot predict the future,” he says. “What we do know for sure is that the world will be completely different in ten years. At the same time, nothing fundamental will change; it’s a paradox we’ve been seeing for a hundred years.” For Zwakenberg, it’s therefore all about keeping options open and being able to anticipate and respond to unexpected developments. According to Zwakenberg, this requires an infrastructural basis that is not rigid, but offers room for curiosity and innovation. You shouldn’t be afraid of surprises. Embrace surprises, Zwakenberg explains. 


Prompt-Based DevOps and the Reimagined Terminal

New AI-driven CLI tools prove there's demand for something more intelligent in the command line, but most are limited — they're single-purpose apps tied to individual model providers instead of full environments. They are geared towards code generation, not infrastructure and production work. They hint at what's possible, but don't deliver the deeper integration AI-assisted development needs. That's not a flaw, it's an opportunity to rethink the terminal entirely. The terminal's core strengths — its imperative input and time-based log of actions — make it the perfect place to run not just commands, but launch agents. By evolving the terminal to accept natural language input, be more system-aware, and provide interactive feedback, we can boost productivity without sacrificing the control engineers rely on. ... With prompt-driven workflows, they don't have to switch between dashboards or copy-paste scripts from wikis because they simply describe what they want done, and an agent takes care of the rest. And because this is taking place in the terminal, the agent can use any CLI to gather and analyze information from across data sources. The result? Faster execution, more consistent results, and fewer mistakes. That doesn't mean engineers are sidelined. Instead, they're overseeing more projects at once. Their role shifts from doing every step to supervising workflows — monitoring agents, reviewing outputs, and stepping in when human judgment is needed.
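The supervise-rather-than-type workflow can be sketched as a small loop: the natural-language request is translated into a candidate command, shown to the engineer, and executed only on approval. The intent table below is a hypothetical stand-in for an LLM-backed planner, and the command strings are examples only.

```python
import shlex
import subprocess

# Hypothetical mapping from natural-language intent to a shell command.
# In a real prompt-driven terminal this lookup would be an LLM agent that
# can also chain CLIs and gather context; the approval gate stays the same.
INTENTS = {
    "show disk usage": "df -h",
    "list running containers": "docker ps",
    "tail web server logs": "tail -n 50 /var/log/nginx/access.log",
}

def plan(request):
    """Map a natural-language request to a candidate command, or None."""
    return INTENTS.get(request.strip().lower())

def run_with_approval(request, approve):
    """Propose a command; execute it only if the reviewer approves."""
    cmd = plan(request)
    if cmd is None or not approve(cmd):
        return None                    # nothing planned, or engineer said no
    result = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    return result.stdout
```

The `approve` callback is where the article's role shift lives: the engineer no longer types every step, but nothing runs without their sign-off, preserving the control the terminal has always provided.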

Daily Tech Digest - July 10, 2025


Quote for the day:

"Strive not to be a success, but rather to be of value." -- Albert Einstein


Domain-specific AI beats general models in business applications

Like many AI teams in the mid-2010s, Visma’s group initially relied on traditional deep learning methods such as recurrent neural networks (RNNs), similar to the systems that powered Google Translate back in 2015. But around 2020, the Visma team made a change. “We scrapped all of our development plans and have been transformer-only since then,” says Claus Dahl, Director ML Assets at Visma. “We realized transformers were the future of language and document processing, and decided to rebuild our stack from the ground up.” ... The team’s flagship product is a robust document extraction engine that processes documents in the countries where Visma companies are active. It supports a variety of languages. The AI could be used for documents such as invoices and receipts. The engine identifies key fields, such as dates, totals, and customer references, and feeds them directly into accounting workflows. ... “High-quality data is more valuable than high volumes. We’ve invested in a dedicated team that curates these datasets to ensure accuracy, which means our models can be fine-tuned very efficiently,” Dahl explains. This strategy mirrors the scaling laws used by large language models but tailors them for targeted enterprise applications. It allows the team to iterate quickly and deliver high performance in niche use cases without excessive compute costs.


The case for physical isolation in data centre security

Hardware-enforced physical isolation is fast becoming a cornerstone of modern cybersecurity strategy. These physical-layer security solutions allow your critical infrastructure – servers, storage and network segments – to be instantly disconnected on demand, using secure, out-of-band commands. This creates a last line of defence that holds even when everything else fails. After all, if malware can’t reach your system, it can’t compromise it. If a breach does occur, physical segmentation contains it in milliseconds, stopping lateral movement and keeping operations running without disruption. In stark contrast to software-only isolation, which relies on the very systems it seeks to protect, hardware isolation remains immune to tampering. ... When ransomware strikes, every second counts. In a colocation facility, traditional defences might flag the breach, but not before it worms its way across tenants. By the time alerts go out, the damage is done. With hardware isolation, there’s no waiting: the compromised tenant can be physically disconnected in milliseconds, before the threat spreads, before systems lock up, before wallets and reputations take a hit. What makes this model so effective is its simplicity. In an industry where complexity is the norm, physical isolation offers a simple, fundamental truth: you’re either connected or you’re not. No grey areas. No software dependency. Just total certainty.


Scaling without outside funding: Intuitive's unique approach to technology consulting

We think that for any complex problem, a good 60–70% of it can be solved through innovation. That's always our first principle. Then, where we see any inefficiencies, be it in workflows or processes, automation works for the other 20% of the friction. The remaining 10–20% is where engineering plays its important role, allowing us to address the scale, security and governance aspects. In data specifically, we are referencing the last 5–6 years of massive investments. We partner with platforms like Databricks and DataMiner and we've invested in companies like TESL and Strike AI for securing their AI models. ... In the cloud space, we see a shift from migration to modernisation (and platform engineering). Enterprises are focussing on the modernisation of both applications and databases because those are critical levers of agility, security, and business value. In AI it is about data readiness; the majority of enterprise data is very fragmented or of very poor quality, which makes any AI effort difficult. Next is understanding existing processes—the way work is done at scale—which is critical for enabling GenAI. But the true ROI is in Agentic AI—autonomous systems which don’t just tell you what to do, but just do it. We’ve been investing heavily in this space since 2018.


The Future of Professional Ethics in Computing

Recent work on ethics in computing has focused on artificial intelligence (AI) with its success in solving problems, processing large amounts of data, and with the award of Nobel Prizes to AI researchers. Large language models and chatbots such as ChatGPT suggest that AI will continue to develop rapidly, acquire new capabilities, and affect many aspects of human existence. Many of the issues raised in the ethics of AI overlap previous discussions. The discussion of ethical questions surrounding AI is reaching a much broader audience, has more societal impact, and is rapidly transitioning to action through guidelines and the development of organizational structure, regulation, and legislation. ... Ethics of digital technologies in modern societies raises questions that traditional ethical theories find difficult to answer. Current socio-technical arrangements are complex ecosystems with a multitude of human and non-human stakeholders, influences, and relationships. The questions of ethics in ecosystems include: Who are members? On what grounds are decisions made and how are they implemented and enforced? Which normative foundations are acceptable? These questions are not easily answered. Computing professionals have important contributions to make to these discussions and should use their privileges and insights to help societies navigate them.


AI Agents Vs RPA: What Every Business Leader Needs To Know

Technically speaking, RPA isn’t intelligent in the same way that we might consider an AI system like ChatGPT to mimic some functions of human intelligence. It simply follows the same rules over and over again in order to spare us the effort of doing it. RPA works best with structured data because, unlike AI, it doesn't have the ability to analyze and understand unstructured data, like pictures, videos, or human language. ... AI agents, on the other hand, use language models and other AI technologies like computer vision to understand and interpret the world around them. As well as simply analyzing and answering questions about data, they are capable of taking action by planning how to achieve the results they want and interacting with third-party services to get it done. ... Using RPA, it would be possible to extract details about who sent the mail, the subject line, and the time and date it was sent. This can be used to build email databases and broadly categorize emails according to keywords. An agent, on the other hand, could analyze the sentiment of the email using language processing, prioritize it according to urgency, and even draft and send a tailored response. Over time, it learns how to improve its actions in order to achieve better resolutions.
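The contrast can be sketched in a few lines: the RPA step is fixed rules over structured headers and nothing more, while the agent step layers interpretation on top. A toy keyword heuristic stands in for the language model here, and the email format and field names are invented for the example.

```python
import re

def rpa_extract(email: str) -> dict:
    """RPA-style step: the same fixed rules over structured headers,
    run the same way every time."""
    return dict(re.findall(r"^(From|Subject|Date): (.+)$", email, re.M))

def agent_triage(email: str) -> dict:
    """Agent-style step, sketched with a toy sentiment/urgency heuristic
    standing in for a language model; a real agent would also draft and
    send a tailored reply, and improve its choices over time."""
    fields = rpa_extract(email)
    body = email.split("\n\n", 1)[-1].lower()
    fields["urgent"] = any(w in body for w in ("asap", "immediately", "refund"))
    fields["sentiment"] = (
        "negative" if any(w in body for w in ("angry", "broken", "refund"))
        else "neutral"
    )
    return fields

mail = "From: pat@example.com\nSubject: Order 1123\nDate: Mon\n\nItem arrived broken, refund ASAP."
print(agent_triage(mail))
```

The point of the sketch is the boundary: everything in `rpa_extract` is deterministic rule-following, while everything added in `agent_triage` requires interpreting unstructured content.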


How To Keep AI From Making Your Employees Stupid

Treat AI-generated content like a highly caffeinated first draft – full of energy, but possibly a little messy and prone to making things up. Your job isn’t to just hit “generate” and walk away unless you enjoy explaining AI hallucinations or factual inaccuracies to your boss (or worse, your audience). Always, always edit aggressively, proofread and, most critically, fact-check every single output. This process isn’t just about catching AI’s mistakes; it actively engages your critical thinking skills, forcing you to verify information and refine expression. Think of it as intellectual calisthenics. ... Don’t settle for the first answer AI gives you. Engage in a dialogue. Refine your prompts, ask follow-up questions, request different perspectives and challenge its assumptions. This iterative process of refinement forces you to think more clearly about your own needs, to be precise in your instructions, and to critically evaluate the nuances of the AI’s response. ... The MIT study serves as a crucial wake-up call: over-reliance on AI can indeed make us “stupid” by atrophying our critical thinking skills. However, the solution isn’t to shun AI, but to engage with it intelligently and responsibly. By aggressively editing, proofreading and fact-checking AI outputs, by iteratively refining prompts and by strategically choosing the right AI tool for each task, we can ensure AI serves as a powerful enhancer, not a detrimental crutch.


What EU’s PQC roadmap means on the ground

The EU’s PQC roadmap is broadly aligned with that from NIST; both advise a phased migration to PQC with hybrid-PQC ciphers and hybrid digital certificates. These hybrid solutions provide the security promises of brand-new PQC algorithms, whilst allowing legacy devices that do not support them to continue using what’s now being called ‘classical cryptography’. In the first instance, both the EU and NIST are recommending that non-PQC encryption is removed by 2030 for critical systems, with all others following suit by 2035. While both acknowledge the ‘harvest now, decrypt later’ threat, neither emphasises the importance of understanding the cover time of data, nor references the very recent advancements in quantum computing. With many now predicting the arrival of cryptographically relevant quantum computers (CRQC) by 2030, if organizations or governments have information with a cover time of five years or more, it is already too late for many to move to PQC in time. Perhaps the most significant difference that EU organizations will face compared to their American counterparts is that the European roadmap is more than just advice; in time it will be enforced through various directives and regulations. PQC is not explicitly stated in EU regulations, although that is not surprising.
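The cover-time arithmetic behind that warning is simple enough to write down. A minimal sketch, assuming the CRQC-by-2030 prediction cited above:

```python
def pqc_migration_deadline(cover_time_years: int, crqc_year: int = 2030) -> int:
    """Latest year data can still be encrypted with classical cryptography
    and have its cover time expire before a cryptographically relevant
    quantum computer (CRQC) arrives.

    Under 'harvest now, decrypt later', anything encrypted after this
    year could be recorded today and decrypted once a CRQC exists.
    """
    return crqc_year - cover_time_years

# Data with a five-year cover time and a CRQC in 2030: classical
# encryption used after 2025 is already exposed under this assumption.
print(pqc_migration_deadline(5))   # 2025
```

This is why the article argues that for long-cover-time data it is "already too late": the deadline implied by the arithmetic falls before the 2030/2035 removal dates in the roadmaps.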


The trillion-dollar question: Who pays when the industry’s AI bill comes due?

“The CIO is going to be very, very busy for the next three, four years, and that’s going to be the biggest impact,” he says. “All of a sudden, businesspeople are starting to figure out that they can save a ton of money with AI, or they can enable their best performers to do the actual job.” Davidov doesn’t see workforce cuts matching AI productivity increases, even though some job cuts may be coming. ... “The costs of building out AI infrastructure will ultimately fall to enterprise users, and for CIOs, it’s only a question of when,” he says. “While hyperscalers and AI vendors are currently shouldering much of the expense to drive adoption, we expect to see pricing models evolve.” Bhathena advises CIOs to look beyond headline pricing because hidden costs, particularly around integrating AI with existing legacy systems, can quickly escalate. Organizations using AI will also need to invest in upskilling employees and be ready to navigate increasingly complex vendor ecosystems. “Now is the time for organizations to audit their vendor agreements, ensure contract flexibility, and prepare for potential cost increases as the full financial impact of AI adoption becomes clearer,” he says. ... Baker advises CIOs to be careful about their purchases of AI products and services and tie new deployments to business needs.


Multi-Cloud Adoption Rises to Boost Control, Cut Cost

Instead of building everything on one platform, IT leaders are spreading out their workloads, said Joe Warnimont, senior analyst at HostingAdvice. "It's no longer about chasing the latest innovation from a single provider. It's about building a resilient architecture that gives you control and flexibility for each workload." Cost is another major factor. Even though hyperscalers promote their pay-as-you-go pricing, many enterprises find it difficult to predict and manage costs at scale. This is true for companies running hundreds or thousands of workloads across different regions and teams. "You'd think that pay-as-you-go would fit any business model, but that's far from the case. Cost predictability is huge, especially for businesses managing complex budgets," Warnimont said. To gain more control over pricing and features, companies are turning to alternative cloud providers, such as DigitalOcean, Vultr and Backblaze. These platforms may not have the same global footprint as AWS or Azure, but they offer specialized services, better pricing and flexibility for certain use cases. An organization needing specific development environments may go to DigitalOcean. Another may choose Vultr for edge computing. Sometimes the big players just don't offer what a specific workload requires.


How CISOs are training the next generation of cyber leaders

While Abousselham champions a personalized, hands-on approach to developing talent, other CISOs are building more formal pathways to support emerging leaders at scale. For PayPal CISO Shaun Khalfan, structured development was always part of his career: he participated in formal leadership training programs offered by the Department of Defense and those run by the American Council for Technology. ... Structured development is also happening inside companies like the insurance brokerage firm Brown & Brown. CISO Barry Hensley supports an internal cohort program designed to identify and grow emerging leaders early in their careers. “We look at our – I’m going to call it newer or younger – employees,” he explains. “And if you become recognized in your first, second, or third year as having the potential to [become a leader], you get put in a program.” ... Khalfan believes good CISOs should be able to dive deep with engineers while also leading boardroom conversations. “It’s been a long time since I’ve written code,” he says, “but I at least understand how to have a deep conversation and also be able to have a board discussion with someone.” Abousselham agrees that technical experience is only one part of the puzzle.

Daily Tech Digest - April 07, 2025


Quote for the day:

"Failure isn't fatal, but failure to change might be" -- John Wooden



How enterprise IT can protect itself from genAI unreliability

The AI-watching-AI approach is scarier, although a lot of enterprises are giving it a go. Some are looking to push any liability down the road by partnering with others to do their genAI calculations for them. Still others are looking to pay third parties to come in and try to improve their genAI accuracy. The phrase “throwing good money after bad” immediately comes to mind. The lack of effective ways to improve genAI reliability internally is a key factor in why so many proof-of-concept trials got approved quickly, but never moved into production. Some version of throwing more humans into the mix to keep an eye on genAI outputs seems to be winning the argument, for now. “You have to have a human babysitter on it. AI watching AI is guaranteed to fail,” said Missy Cummings, a George Mason University professor and director of Mason’s Autonomy and Robotics Center (MARC). “People are going to do it because they want to believe in the (technology’s) promises. People can be taken in by the self-confidence of a genAI system,” she said, comparing it to the experience of driving autonomous vehicles (AVs). When driving an AV, “the AI is pretty good and it can work. But if you quit paying attention for a quick second,” disaster can strike, Cummings said. “The bigger problem is that people develop an unhealthy complacency.”


Why neglecting AI ethics is such risky business - and how to do AI right

The struggle often comes from the lack of a common vocabulary around AI. This is why the first step is to set up a cross-organizational strategy that brings together technical teams as well as legal and HR teams. AI is transformational and requires a corporate approach. Second, organizations need to understand what the key tenets of their AI approach are. This goes beyond the law and encompasses the values they want to uphold. Third, they can develop a risk taxonomy based on the risks they foresee. Risks are based on legal alignment, security, and the impact on the workforce. ... As a starting point, enterprises will need to establish clear policies, principles, and guidelines on the sustainable use of AI. This creates a baseline for decisions around AI innovation and enables teams to make the right choices around the type of AI infrastructure, models, and algorithms they will adopt. Additionally, enterprises need to establish systems to effectively track, measure, and monitor environmental impact from AI usage and demand this from their service providers. We have worked with clients to evaluate current AI policies, engage internal and external stakeholders, and develop new principles around AI and the environment before training and educating employees across several functions to embed thinking in everyday processes.


The risks of entry-level developers over relying on AI

Some CISOs are concerned about the growing reliance on AI code generators — especially among junior developers — while others take a more relaxed, wait-and-see approach, saying that this might be an issue in the future rather than an immediate threat. Karl Mattson, CISO at Endor Labs, argues that the adoption of AI is still in its early stages in most large enterprises and that the benefits of experimentation still outweigh the risks. ... Tuskira’s CISO lists two major issues: first, that AI-generated security code may not be hardened against evolving attack techniques; and second, that it may fail to reflect the specific security landscape and needs of the organization. Additionally, AI-generated code might give a false sense of security, as developers, particularly inexperienced ones, often assume it is secure by default. Furthermore, there are risks associated with compliance and violations of licensing terms or regulatory standards, which can lead to legal issues down the line. “Many AI tools, especially those generating code based on open-source codebases, can inadvertently introduce unvetted, improperly licensed, or even malicious code into your system,” O’Brien says. Open-source licenses, for example, often have specific requirements regarding attribution, redistribution, and modifications, and relying on AI-generated code could mean accidentally violating these licenses.


Language models in generative AI – does size matter?

Firstly, using SLMs rather than full-blown LLMs can bring the cost of that multi-agent system down considerably. Employing smaller and more lightweight language models to fulfill specific requirements will be more cost-effective than using LLMs for every step in an agentic AI system. This approach involves looking at what would be the right component for each element of a multi-agent system, rather than automatically thinking that a “best of breed” approach is the best approach. Secondly, using agentic AI for generative AI use cases should be adopted where multi-agent processes can provide more value per transaction than simpler single-agent models. The choice here affects how you think about pricing your service, what customers expect from AI and how you will deliver your service overall. Alongside looking at the technical and architecture elements for AI, you will also have to consider what your line of business team wants to achieve. While simple AI agents can carry out specific tasks or automate repetitive tasks, they generally require human input to complete those requests. Where agentic AI takes things further is through delivering greater autonomy within business processes through employing that multi-agent approach to constantly adapt to dynamic environments. With agentic AI, companies can use AI to independently create, execute and optimize results around that business process workflow. 
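A minimal sketch of that routing idea, with invented model names and a keyword heuristic standing in for a real complexity classifier:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelRoute:
    name: str
    cost_per_call: float  # illustrative relative cost, not real pricing
    handler: Callable[[str], str]

def make_router(slm: ModelRoute, llm: ModelRoute,
                hard_keywords=("plan", "reason", "multi-step")):
    """Send each agent step to the cheapest model that can handle it.

    The keyword check is a placeholder; a production system would use
    a learned classifier or the orchestrating agent itself to decide.
    """
    def route(task: str) -> ModelRoute:
        needs_llm = any(k in task.lower() for k in hard_keywords)
        return llm if needs_llm else slm
    return route

slm = ModelRoute("small-extractor", 0.1, lambda t: f"[SLM] {t}")
llm = ModelRoute("frontier-planner", 1.0, lambda t: f"[LLM] {t}")
route = make_router(slm, llm)

print(route("extract the invoice total").name)          # small-extractor
print(route("plan a multi-step refund workflow").name)  # frontier-planner
```

The design point matches the article: the router picks the right component per step rather than defaulting every call to the most capable (and most expensive) model.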


Lessons from a Decade of Complexity: Microservices to Simplicity

This shift made us stop and think: if fast growth isn’t the priority anymore, is microservices still the right choice? ... After going through years of building and maintaining systems with microservices, we’ve learned a lot, especially about what really matters in choosing an architecture. Here are some key takeaways that guide how we think about system design today:

Be pragmatic, not idealistic: Don’t get caught up in trendy architecture patterns just because they sound impressive. Focus on what makes sense for your team and your situation. Not every new system needs to start with microservices, especially if the problems they solve aren’t even there yet.

Start simple: The simplest solution is often the best one. It’s easier to build, easier to understand, and easier to change. Keeping things simple takes discipline, but it saves time and pain in the long run.

Split only when it really makes sense: Don’t break things apart just because “that’s what we do”. Split services when there’s a clear technical reason, like performance, resource needs, or special hardware.

Microservices are just a tool: They’re not good or bad by themselves. What matters is whether they help your team move faster, stay flexible, and solve real problems.

Every choice comes with tradeoffs: No architecture is perfect. Every decision has upsides and downsides. What’s important is to be aware of those tradeoffs and make the best call for your team.


Massive modernization: Tips for overhauling IT at scale

A core part of digital transformation is decommissioning legacy apps, upgrading aging systems, and modernizing the tech stack. Yet, as appealing as it is for employees to be able to use modern technologies, decommissioning and replacing systems is arduous for IT. ... “You almost do what I call putting lipstick on a pig, which is modernizing your legacy ecosystem with wrappers, whether it be web wrappers, front end and other technologies that allow customers to be able to interact with more modern interfaces,” he says. ... When an organization is truly legacy, most will likely have very little documentation of how those systems can be supported, Mehta says. That was the case for National Life, and it became the first roadblock. “You don’t know what you don’t know until we begin,” he says. This is where the archaeological dig metaphor comes in. “You’re building a new city over the top of the old city, but you’ve got to be able to dig it only enough so you don’t collapse the foundation.” IT has to figure out everything a system touches, “because over time, people have done all kinds of things to it that are not clearly documented,” Mehta says. ... “You have to have a plan to get rid of” legacy systems. He also discovered that “decommissioning is not free. Everybody thinks you just shut a switch off and legacy systems are gone. Legacy decommissioning comes at a cost. You have to be willing to absorb that cost as part of your new system. That was a lesson learned; you cannot ignore that,” he says.


Culture is not static: Prasad Menon on building a thriving workplace at Unplugged 3

To cultivate a thriving workplace, organisations must engage in active listening. Employees should have structured platforms to voice their concerns, aspirations, and feedback without hesitation. At Amagi, this commitment to deep listening is reinforced by technology. The company has implemented an AI-powered chatbot named Samb, which acts as a "listening manager," facilitating real-time employee feedback collection. This tool ensures that concerns and suggestions are acknowledged and addressed within 15 days, allowing for a more responsive and agile work environment. "Culture is not just a feel-good factor—it must be measured and linked to results," Menon emphasised. To track and optimise cultural impact, Amagi has developed a "happiness index" that measures employee well-being across financial, mental, and physical dimensions. By using data to evaluate cultural effectiveness, the organisation ensures that workplace culture is not just an abstract ideal but a tangible force driving business success. ... At the core of Amagi’s culture is a commitment to becoming "the happiest workplace in the world." This vision is driven by a leadership model that prioritises genuine care, consistency, and empowerment. Leaders at Amagi undergo a six-month cultural immersion programme designed to equip them with the skills needed to foster a safe, inclusive, and high-performing work environment.


Speaking the Board’s Language: A CISO’s Guide to Securing Cybersecurity Budget

A major challenge for CISOs in budget discussions is making cybersecurity risk feel tangible. Cyber risks often remain invisible – that is, until a breach happens. Traditional tools like heat maps, which visually represent risk by color-coding potential threats, can be misleading or oversimplified. While they offer a high-level view of risk areas, heat maps fail to provide a concrete understanding of the actual financial impact of those risks. This makes it essential to shift from qualitative risk assessments like heat maps to cyber risk quantification (CRQ), which assigns a measurable dollar value to potential threats and mitigation efforts. ... The biggest challenge CISOs face isn’t just securing budget – it’s making sure decision-makers understand why they need it. Boards and executives don’t think in terms of firewalls and threat detection; they care about business continuity, revenue protection and return on investment (ROI). For cyber investments, though, ROI is not typically the figure security experts use to validate these investments, largely because of the difficulties in estimating the value of risk reduction. However, new approaches to cyber risk quantification have made this a reality. With models validated by real-world loss data, it is now possible to produce an ROI figure.
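One common way to put a dollar value on cyber risk is the classic annualized loss expectancy (ALE) formula, paired with a return-on-security-investment (ROSI) ratio. The article does not specify which CRQ model it has in mind, and the dollar figures below are invented for the example.

```python
def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """ALE = expected loss per incident x expected incidents per year."""
    return single_loss * annual_rate

def rosi(ale_before: float, ale_after: float, control_cost: float) -> float:
    """Return on security investment: annual risk reduction net of the
    control's annual cost, expressed as a fraction of that cost."""
    return (ale_before - ale_after - control_cost) / control_cost

# Illustrative numbers only: a $2M-per-breach exposure expected once
# every 4 years, cut by 75% by a control costing $150k per year.
before = annualized_loss_expectancy(2_000_000, 0.25)  # $500,000/yr
after = before * 0.25                                 # $125,000/yr
print(f"ROSI: {rosi(before, after, 150_000):.0%}")    # ROSI: 150%
```

Expressed this way, the budget ask becomes the board-friendly sentence the article is after: each dollar spent on the control is expected to avoid $2.50 of annual loss.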


Can AI predict who will commit crime?

Simulating the conditions for individual offending is not the same as calculating the likelihood of storms or energy outages. Offending is often situational and is heavily influenced by emotional, psychological and environmental elements (a bit like sport – ever wondered why Predictive AI hasn’t put bookmakers out of business yet?). Sociological factors also play a big part in rehabilitation which, in turn, affects future offending. Predictive profiling relies on past behaviour being a good indicator of future conduct. Is this a fair assumption? Occupational psychologists say past behaviour is a reliable predictor of future performance – which is why they design job selection around it. Unlike financial instruments which warn against assuming future returns from past rewards, human behaviour does have a perennial quality. Leopards and spots come to mind. ... Even if the data could reliably tell us who will be charged with, prosecuted for and convicted of which specific offence in the future, what should the police do about it now? Implant a biometric chip and have them under perpetual surveillance to stop them doing what they probably didn’t know they were going to do? Fine or imprison them? (how much, for how long?). What standard of proof will the AI apply to its predictions? Beyond a reasonable doubt? How will we measure the accuracy of the process? 


CISOs battle security platform fatigue

“Adopting more security tools doesn’t guarantee better cybersecurity,” says Jonathan Gill, CEO at Panaseer. “These tools can only report on what they can see – but they don’t know what they’re missing.” This fragmented visibility leaves security leaders making high-stakes decisions based on partial information. Without a verified, comprehensive system of record for all assets and security controls, many organizations are operating under what Gill calls an “illusion of visibility.” “Without a true denominator,” he explains, “CISOs are unable to confidently assess coverage gaps or prove compliance with evolving regulatory demands.” And those blind spots aren’t just theoretical. Every overlooked asset or misconfigured control becomes an open door for attackers — and they’re getting better at finding them. “Each of these coverage gaps represents risk,” Gill warns, “and they are increasingly easy for attackers to find and exploit.” The lack of clear visibility also muddies accountability. “This creates dark corners that go overlooked – servers and applications are left without owners, making it hard to assign responsibility for fixing issues,” Gill says. Even when gaps are known, security teams often find themselves drowning in data from too many tools, struggling to separate signal from noise. 

Daily Tech Digest - March 11, 2025


Quote for the day:

“What seems to us as bitter trials are often blessings in disguise.” -- Oscar Wilde


This new AI benchmark measures how much models lie

Scheming, deception, and alignment faking, when an AI model knowingly pretends to change its values when under duress, are ways AI models undermine their creators and can pose serious safety and security threats. Research shows OpenAI's o1 is especially good at scheming to maintain control of itself, and Claude 3 Opus has demonstrated that it can fake alignment. To clarify, the researchers defined lying as, "(1) making a statement known (or believed) to be false, and (2) intending the receiver to accept the statement as true," as opposed to other false responses, such as hallucinations. The researchers said the industry hasn't had a sufficient method of evaluating honesty in AI models until now. ... "Many benchmarks claiming to measure honesty in fact simply measure accuracy -- the correctness of a model's beliefs -- in disguise," the report said. Benchmarks like TruthfulQA, for example, measure whether a model can generate "plausible-sounding misinformation" but not whether the model intends to deceive, the paper explained. ... "As a result, more capable models can perform better on these benchmarks through broader factual coverage, not necessarily because they refrain from knowingly making false statements," the researchers said. In this way, MASK is the first test to differentiate accuracy and honesty. 
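The accuracy/honesty distinction comes down to two different comparisons, which a few lines make concrete. This is a toy sketch of the definitions, not MASK's actual scoring code.

```python
def accuracy(belief: str, truth: str) -> bool:
    """Accuracy: does the model's belief match ground truth?"""
    return belief == truth

def honesty(statement: str, belief: str) -> bool:
    """Honesty (in the paper's framing): does the model's statement match
    its own elicited belief, regardless of whether that belief is true?"""
    return statement == belief

# A model with a wrong belief that says what it believes is honest but
# inaccurate; one that believes the truth yet states otherwise under
# pressure is accurate in belief but lying.
truth = "B"
print(honesty(statement="A", belief="A"), accuracy(belief="A", truth=truth))  # True False
print(honesty(statement="A", belief="B"), accuracy(belief="B", truth=truth))  # False True
```

A benchmark that only checks statements against ground truth collapses these two cases, which is the gap the MASK work is described as closing.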


EU looks to tech sovereignty with EuroStack amid trade war

“Software forms the operational core of digital infrastructure, encompassing operating systems, application platforms, and algorithmic frameworks,” the report notes. “It powers critical functions such as identity management, electronic payments, transactions, and document delivery, forming the foundation of digital public infrastructures.” EuroStack could also help empower citizens and businesses through digital identity systems, secure payments and data platforms. It envisions digital IDs as the gateway to Europe’s digital infrastructure and a way to enable seamless access while safeguarding privacy and sovereignty according to EU regulations. “By overcoming the limitations seen in models like India Stack, which rely on centralized biometric IDs and foreign cloud infrastructure, the EuroStack offers a federated, privacy-preserving platform,” the study explains. EuroStack’s ambitious goals to support indigenous technology will require plenty of funds: as much as 300 billion euros (US$324.9 billion) over the next 10 years, according to the study. Chamber of Progress, a tech industry trade group that includes U.S. tech companies, puts the price tag even higher, at 5 trillion euros ($5.4 trillion). But according to EuroStack’s proponents, the results are worth it.


Companies are drowning in high-risk software security debt — and the breach outlook is getting worse

Organizations are taking longer to fix security flaws in their software, and the security debt involved is becoming increasingly critical as a result. According to application security vendor Veracode’s latest State of Software Security report, the average fix time for security flaws has increased from 171 days to 252 days over the past five years. ... Chris Wysopal, co-founder and chief security evangelist at Veracode, told CSO that one aspect of application security that has gotten progressively worse over the years is the time it takes to fix flaws. “There are many reasons for this, but the ever-growing scope and complexity of the software ecosystem is a core issue,” Wysopal said. “Organizations have more applications and vastly more code to keep on top of, and this will only increase as more teams adopt AI for code generation” — an issue compounded by the potential security implications of AI-generated code across in-house software and third-party dependencies alike. ... “Most organizations suffer from fragmented visibility over the software flaws and risks within their applications, with sprawling toolsets that create ‘alert fatigue’ at the same time as silos of data to interpret and make decisions about,” Wysopal said. “The key factors that help them address the security backlog are the ability to prioritize remediation of flaws based on risk.”


AI Coding Assistants Are Reshaping Engineering — Not Replacing Engineers

The next big leap in AI coding assistants will be when they start learning from how developers work in real time. Right now, AI doesn’t recognize coding patterns within a session. If I perform the same action 10 times in a row, none of the current tools ask, “Do you want me to do this for the next 100 lines?” But Vi and Emacs solved this problem decades ago with macros and automated keystroke reduction. AI coding assistants haven’t even caught up to that efficiency level yet. Eventually, AI assistants might become plugin-based so developers can choose the best AI-powered features for their preferred editor. Deeply integrated IDE experiences will probably offer more functionality, but many developers won’t want to switch IDEs. ... Software engineering is a fast-paced career. Languages, frameworks, and technologies come and go, and the ability to learn and adapt separates those who thrive from those who fall behind. AI coding assistants are another evolution in this cycle. They won’t replace engineers but will change how engineering is done. The key isn’t resisting these tools; it’s learning how to use them properly and staying curious about their capabilities and limitations. Until these tools improve, the best engineers will be the ones who know when to trust AI, when to double-check its output, and how to integrate it into their workflow without becoming dependent on it.


Building generative AI? Get ready for generative UI

Generative UI takes the concept of generative AI and applies it to how we interact with data or systems. Just as generative AI makes data interactive and available in natural language, or creates new images or sound in response to a prompt, so generative UI builds interactive context into how data is displayed, depending on what you are asking for. The goal is to deliver the content that the user wants but also in a format that makes the most of that data for the user too. ... To deliver generative UI, you will have to link up your application with your generative AI components, like your large language model (LLM) and sources of data, and with the tools you use to build the site like Vercel and Next.js. For generative UI, by using React Server Components, you can change the way that you display the output from your LLM service. These components can deliver information that is updated in real time, or is delivered in different ways depending on what formats are best suited to the responses. As you create your application, you will have to think about some of the options that you might want to deliver. As a user asks a question, the generative AI system must understand the request, determine the appropriate function to use, then choose the appropriate React Server Component to display the response back.


Four essential strategies to bolster cyber resilience in critical infrastructure

Cyber resilience isn’t possible when teams operate in silos. In fact, 59% of government leaders report that their inability to synthesize data across people, operations, and finances weakens organizational agility. To bolster cyber resilience, organizations must break down these silos by fostering cross-departmental collaboration and making it as seamless as possible. Achieving this requires strategic investment in a triad of technologies: a customized, secure collaboration platform; a project management tool like Asana, Trello, or Jira; and a knowledge-sharing solution like Confluence or Notion. Once these three foundational tools are in place, organizations should deploy the final piece of the puzzle: a dashboarding or reporting tool. These technologies can help IT leaders pinpoint any remaining silos and start figuring out how to break them down. ... Most organizations understand security’s importance but often treat it as an afterthought. To strengthen cyber resilience, organizations must adopt a security-first mindset, baking security into everything they do. Too often, security teams are siloed from the rest of the organization; they’re roped in at the end when they should be fully integrated from the start. Truly resilient organizations treat security as a shared responsibility, ensuring it’s part of every decision, project, and process.


Did we all just forget diverse tech teams are successful ones?

The reality is that diverse teams are more productive and report better financial performance. This has been a key advantage of diversity in tech for many years, and it’s continued to this day. Research from McKinsey’s Diversity Matters report showed that those committed to DEI and multi-ethnic representation exhibit a “39% increased likelihood of outperformance” compared to those that aren’t. These same companies also showed an average 27% financial advantage over others. The same performance boosts can be found in executive teams that focus heavily on improving gender diversity, McKinsey found. Companies with representation of women exceeding 30% are “significantly more likely to financially outperform those with 30% or fewer,” the study noted. ... Are you willing to alienate huge talent pools because you want to foster a more ‘masculine’ culture in your company? If you are, then you’re fighting a losing battle and in my opinion deserve to fail. Tech bro culture counts for nothing when that runway comes to an end and you’ve no MVP. Yet again, what this entire debacle comes down to is a highly vocal minority seeking to hamper progress. Big tech might just be going with the flow and pandering to the current prevailing ideological sentiment. In time they might come back around, but that’s what makes it worse.


With critical thinking in decline, IT must rethink application usability

The more IT’s business analysts and developers learn the end business, the better prepared they will be to deliver applications that fit the forms and functions of business processes, and integrate seamlessly into these processes. Part of IT engagement with the business involves understanding business goals and how the business operates, but it’s equally important to understand the skill levels of the employees who will be using the apps. ... The 80/20 rule — i.e., 80% of applications developed are seldom or never used, and 20% are useful — still applies. And it often also applies within that 20% of useful apps, in terms of useful features and functionality. IT must work to ensure what it develops hits a higher target of utility. Users are under constant pressure to do work fast. They meet the challenges by finding ways to do the least possible work per app and may never look at some of the more embedded, complicated, and advanced functionality an app offers. ... Especially in user areas with high turnover, or in other domains that require a moderate to high level of skill, user training and mentoring should be major milestone tasks in every application project, and an ongoing routine after a new application is installed. Business analysts from IT can help with some of this, but the ultimate responsibility falls on non-IT functions, which should have subject matter experts available to mentor and train employees when questions arise.


How digital academies can boost business-ready tech skills for the future

Niche tech skills are becoming essential for complex software projects. With requirements evolving for highly technical roles, there’s a greater need for competency in using digital tools. Technology professionals need to know how to use these tools effectively so they can make meaningful decisions about adoption and implementation. ... By creating links between educational institutions and a hub of tech and digital sector businesses, digital academies can vastly improve how training opportunities are constructed. Whether an organisation is looking to make digital transformation real and upskill on the tools and technology available, or a person wants to switch careers into software development, digital academies can support these skilling or upskilling programmes through training on a range of digital tools. An effective digital academy is one where technical experts in software delivery design, deliver, and assess the courses. An academy such as Headforwards Digital Academy can intensively train a person in deep software engineering, taking them from no coding knowledge to becoming a junior software developer in as little as 16 weeks. These industry-led tech training programmes are a more agile and nimble response to education needs, as they are validated by employers and backed by strong industry support.


Smart cybersecurity spending and how CISOs can invest where it matters

“The most pervasive waste in cybersecurity isn’t from insufficient tools – it’s from investments that aren’t tied to validated risk models. When security spending isn’t part of a closed-loop system that connects real-world threats to measurable outcomes, you’re essentially paying for digital theater rather than actual protection,” Alex Rice, CTO at HackerOne, told Help Net Security. “Many CISOs operate with fragmented security architectures where tools work in isolation, creating dangerous blind spots. As attack surfaces expand across code, AI systems, cloud infrastructure, and traditional IT, this siloed approach isn’t just inefficient – it’s dangerous. Defense in depth requires coordinated visibility across all domains,” Rice added. ... “A HackerOne survey revealed most CISOs don’t find traditional ROI measures useful for security investments. This isn’t surprising – cybersecurity is notoriously difficult to quantify with conventional metrics. More meaningful approaches like Return on Mitigation, which accounts for potential losses prevented, offer a more accurate picture of security’s true business value,” Rice explained. “The uncomfortable truth? We’ve created a tangled ecosystem of point solutions that often disguise rather than address fundamental security gaps. Before purchasing the next shiny tool, ask: Does this solution provide meaningful transparency into your actual security posture?”
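A "Return on Mitigation" style metric can be sketched in a few lines. Note this is an assumption modeled on conventional ROI (losses prevented relative to control cost); the article does not give a precise formula, and real programs would estimate the loss figure from risk models rather than take it as a given.

```typescript
// Hedged sketch of a Return-on-Mitigation calculation: the value of
// expected losses a control prevents, net of what the control costs,
// expressed as a multiple of that cost. Field names are illustrative.

interface MitigationInput {
  expectedLossPrevented: number; // e.g. annualized loss expectancy avoided
  controlCost: number;           // annual cost of the tool or program
}

function returnOnMitigation({ expectedLossPrevented, controlCost }: MitigationInput): number {
  return (expectedLossPrevented - controlCost) / controlCost;
}

// A control costing $200k that prevents an expected $1M in losses
// returns 4x its cost in avoided damage.
console.log(returnOnMitigation({ expectedLossPrevented: 1_000_000, controlCost: 200_000 }));
```

The useful property of framing spend this way is that a tool which prevents little expected loss scores poorly no matter how capable it looks on paper.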