
Daily Tech Digest - June 21, 2025


Quote for the day:

“People are not lazy. They simply have important goals – that is, goals that do not inspire them.” -- Tony Robbins


AI in Disaster Recovery: Mapping Technical Capabilities to Real Business Value

Despite its promise, AI introduces new challenges, including security risks and trust deficits. Threat actors leverage the same AI advancements, targeting systems with more precision and, in some cases, undermining AI-driven defenses. In the Zerto–IDC survey mentioned earlier, for instance, only 41% of respondents felt that AI is “very” or “somewhat” trustworthy; 59% felt that it is “not very” or “not at all” trustworthy. To mitigate these risks, organizations must adopt AI responsibly. For example, combining AI-driven monitoring with robust encryption and frequent model validation ensures that AI systems deliver consistent and secure performance. Furthermore, organizations should emphasize transparency in AI operations to maintain trust among stakeholders. Successful AI deployment in DR/CR requires cross-functional alignment between ITOps and management. Misaligned priorities can delay response times during crises, exacerbating data loss and downtime. Additionally, the IT skills shortage is still very much underway, with a different recent IDC study predicting that 9 out of 10 organizations will feel an impact by 2026, at a cost of $5.5 trillion in potential delays, quality issues, and revenue loss across the economy. Integrating AI-driven automation can partially mitigate these impacts by optimizing resource allocation and reducing dependency on manual intervention.


The Quantum Supply Chain Risk: How Quantum Computing Will Disrupt Global Commerce

Whether it's APIs, middleware, firmware, embedded devices, or operational technology, they're all built on the same outdated encryption and systems of trust. One of the biggest threats from quantum computing will fall on all this unseen machinery that powers global digital trade. These systems handle the backend of everything from routing cargo to scheduling deliveries and clearing large shipments, but they were never designed to withstand the threat of quantum. Attackers will be able to break in quietly — injecting malicious code into control software or ERP systems, or impersonating suppliers to communicate malicious information and hijack digital workflows. Quantum computing won't necessarily disrupt these industries on its own, but it will corrupt the systems that power the global economy. ... Some of the most dangerous attacks are being staged today, with many nation-states and bad actors storing encrypted data, from procurement orders to shipping records. When quantum computers are finally able to break those encryption schemes, attackers will be able to decrypt them in what's coined a Harvest Now, Decrypt Later (HNDL) attack. These attacks, although retroactive in nature, represent one of the biggest threats to the integrity of cross-border commerce. Global trade depends on digital provenance for handling goods and proving where they came from.


Securing OT Systems: The Limits of the Air Gap Approach

Aside from susceptibility to advanced tactics, techniques, and procedures (TTPs) such as thermal manipulation and magnetic fields, more common vulnerabilities associated with air-gapped environments include factors such as unpatched systems going unnoticed, lack of visibility into network traffic, potentially malicious devices coming on the network undetected, and removable media being physically connected within the network. Once an attack is inside OT systems, the consequences can be disastrous regardless of whether there is an air gap or not. However, it is worth considering how the existence of the air gap can affect the time-to-triage and remediation in the case of an incident. ... This incident reveals that even if a sensitive OT system has complete digital isolation, this robust air gap still cannot fully eliminate one of the greatest vulnerabilities of any system—human error. Human error would still be a factor even if an organization went to the extreme of building a Faraday cage to eliminate electromagnetic radiation. Air-gapped systems are still vulnerable to social engineering, which exploits human vulnerabilities, as seen in the tactics that Dragonfly and Energetic Bear used to trick suppliers, who then walked the infection right through the front door. Ideally, a technology would be able to identify an attack regardless of whether it is caused by a compromised supplier, radio signal, or electromagnetic emission.


How to Lock Down the No-Code Supply Chain Attack Surface

A core feature of no-code development, third-party connectors allow applications to interact with cloud services, databases, and enterprise software. While these integrations boost efficiency, they also create new entry points for adversaries. ... Another emerging threat involves dependency confusion attacks, where adversaries exploit naming collisions between internal and public software packages. By publishing malicious packages to public repositories with the same names as internally used components, attackers could trick the platform into downloading and executing unauthorized code during automated workflow executions. This technique allows adversaries to silently insert malicious payloads into enterprise automation pipelines, often bypassing traditional security reviews. ... One of the most challenging elements of securing no-code environments is visibility. Security teams struggle with asset discovery and dependency tracking, particularly in environments where business users can create applications independently without IT oversight. Applications and automations built outside of IT governance may use unapproved connectors and expose sensitive data, since they often integrate with critical business workflows. 
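
As an illustration of how dependency confusion works, the sketch below (in Python, with all package names, versions, and registry contents invented) contrasts a naive resolver that always prefers the highest version available on any index with one that restricts internal names to the internal registry:

INTERNAL_INDEX = {"internal-widgets": ["1.2.0"]}                         # private registry
PUBLIC_INDEX = {"internal-widgets": ["99.0.0"], "requests": ["2.32.0"]}  # attacker squats the internal name
INTERNAL_ONLY = {"internal-widgets"}                                     # allow-list of internal package names

def naive_resolve(name):
    """Vulnerable behaviour: pick the highest version seen on any index."""
    candidates = []
    for source, index in (("internal", INTERNAL_INDEX), ("public", PUBLIC_INDEX)):
        for version in index.get(name, []):
            candidates.append((tuple(int(p) for p in version.split(".")), source, version))
    _, source, version = max(candidates)
    return source, version

def scoped_resolve(name):
    """Safer behaviour: internal names may only come from the internal index."""
    if name in INTERNAL_ONLY:
        return "internal", max(INTERNAL_INDEX[name])
    return "public", max(PUBLIC_INDEX[name])

print(naive_resolve("internal-widgets"))   # ('public', '99.0.0')  -> dependency confusion
print(scoped_resolve("internal-widgets"))  # ('internal', '1.2.0') -> pinned to the private registry

Real package managers offer equivalent protections, such as restricting which index a given package name may be resolved from and pinning exact versions or hashes.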


Securing Your AI Model Supply Chain

Supply chain Levels for Software Artifacts (SLSA) is a comprehensive framework designed to protect the integrity of software artifacts, including AI models. SLSA provides a set of standards and practices to secure the software supply chain from source to deployment. By implementing SLSA, organizations can ensure that their AI models are built and maintained with the highest levels of security, reducing the risk of tampering and ensuring the authenticity of their outputs. ... Sigstore is an open-source project that aims to improve the security and integrity of software supply chains by providing a transparent and secure way to sign and verify software artifacts. Using cryptographic signatures, Sigstore ensures that AI models and other software components are authentic and have not been tampered with. This system allows developers and organizations to trace the provenance of their AI models, ensuring that they originate from trusted sources. ... The most valuable takeaway for ensuring model authenticity is the implementation of robust verification mechanisms. By utilizing frameworks like SLSA and tools like Sigstore, organizations can create a transparent and secure supply chain that guarantees the integrity of their AI models. This approach helps build trust with stakeholders and ensures that the models deployed in production are reliable and free from malicious alterations.
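
As a simplified illustration of the sign-and-verify pattern that Sigstore-style tooling automates (this is not the actual Sigstore API), the Python sketch below signs a model artifact's SHA-256 digest with a locally generated Ed25519 key from the third-party cryptography package and verifies it before "deployment"; the artifact file is a placeholder created by the script:

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def digest(path):
    """SHA-256 digest of an artifact, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

# Stand-in artifact so the example runs end to end.
with open("model.bin", "wb") as f:
    f.write(b"placeholder model weights")

# Producer side: sign the artifact digest at build time.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(digest("model.bin"))

# Consumer side: recompute the digest and verify before deployment.
try:
    public_key.verify(signature, digest("model.bin"))
    print("artifact verified: it matches what the trusted publisher signed")
except InvalidSignature:
    print("verification failed: the artifact may have been tampered with")

Sigstore's tooling adds keyless signing tied to OIDC identities and an append-only transparency log on top of this basic pattern.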


Data center retrofit strategies for AI workloads

AI accelerators are highly sensitive to power quality. Sub-cycle power fluctuations can cause bit errors, data corruption, or system instability. Older uninterruptible power supply (UPS) systems may struggle to handle the dynamic loads AI can produce, often involving three MW sub-cycle swings or more. Updating the electrical distribution system (EDS) is an opportunity that includes replacing dated UPS technology, which often cannot handle the dynamic AI load profile, redesigning power distribution for redundancy, and ensuring that power supply configurations meet the demands of high-density computing. ... With the high cost of AI downtime, risk mitigation becomes paramount. Energy and power management systems (EPMS) are capable of high-resolution waveform capture, which allows operators to trace and address electrical anomalies quickly. These systems are essential for identifying the root cause of power quality issues and coordinating fast response mechanisms. ... No two mission-critical facilities are the same regarding space, power, and cooling. Add the variables of each AI deployment, and what works for one facility may not be the best fit for another. That said, there are some universal truths about retrofitting for AI. You will need engineers who are well-versed in various equipment configurations, including cooling and electrical systems connected to the network. 


Is it time for a 'cloud reset'? New study claims public and private cloud balance is now a major consideration for companies across the world

Enterprises often still have some kind of a cloud-first policy, he outlined, but they have realized they need some form of private cloud too, typically because the needs of some workloads are not met, mainly around cost, complexity and compliance. However, the problem is that because public cloud has taken priority, infrastructure has not grown in the right way - so increasingly, Broadcom’s conversations are now with customers realizing they need to focus on both public and private cloud, and some on-prem, Baguley says, as they're realizing, “we need to make sure we do it right, we're doing it in a cost-effective way, and we do it in a way that's actually going to be strategically sensible for us going forward.” “In essence - they've realised they need to build something on-prem that can not only compete with public cloud, but actually be better in various categories, including cost, compliance and complexity.” ... In order to help with these concerns, Broadcom has released VMware Cloud Foundation (VCF) 9.0, the latest edition of its platform to help customers get the most out of private cloud. Described by Baguley as “the culmination of 25 years’ work at VMware”, VCF 9.0 offers users a single platform with one SKU - giving them improved visibility while supporting all applications with a consistent experience across the private cloud environment.


Cloud in the age of AI: Six things to consider

This is an issue impacting many multinational organizations, driving the growth of regional and even industry-specific clouds. These offer specific tailored compliance, security, and performance options. As organizations try to architect infrastructure that supports their future states, with a blend of cloud and on-prem, data sovereignty is an increasingly large issue. I hear a lot from IT leaders about how they must consider local and regional regulations, which adds a consideration to the simple concept of migration to the cloud. ... Sustainability was always the hidden cost of connected computing. Hosting data in the cloud consumes a lot of energy. Financial cost is most top of mind when IT leaders talk about driving efficiency through the cloud right now. It’s also at the root of a lot of talk about moving to the edge and using AI-infused end user devices. But expect sustainability to become an increasingly important factor in cloud: geopolitical instability, the cost of energy, and the increasing demands of AI will see to that. ... The AI PC pitch from hardware vendors is that organizations will be able to build small ‘clouds’ of end user devices. Specific functions and roles will work on AI PCs and do their computing at the edge. The argument is compelling: better security and efficient modular scalability. Not every user or function needs all capabilities and access to all data.


Creating a Communications Framework for Platform Engineering

When platform teams focus exclusively on technical excellence while neglecting a communication strategy, they create an invisible barrier between the platform’s capability and its business impact. Users can’t adopt what they don’t understand, and leadership won’t invest in what they can’t measure. ... To overcome engineers’ skepticism of new tools that may introduce complexity, your communication should clearly articulate how the platform simplifies their work. Highlight its ability to reduce cognitive load, minimize context switching, enhance access to documentation and accelerate development cycles. Present these advantages as concrete improvements to daily workflows, rather than abstract concepts. ... Tap into the influence of respected technical colleagues who have contributed to the platform’s development or were early adopters. Their endorsements are more impactful than any official messaging. Facilitate opportunities for these champions to demonstrate the platform’s capabilities through lightning talks, recorded demos or pair programming sessions. These peer-to-peer interactions allow potential users to observe practical applications firsthand and ask candid questions in a low-pressure environment.


Why data sovereignty is an enabler of Europe’s digital future

Data sovereignty has broad-reaching implications, with potential impact on many areas of a business extending beyond the IT department. One of the most obvious examples is for the legal and finance departments, where GDPR and similar legislation require granular control over how data is stored and handled. The harsh reality is that any gaps in compliance could result in legal action, substantial fines and subsequent damage to longer term reputation. Alongside this, providing clarity on data governance increasingly factors into trust and competitive advantage, with customers and partners keen to eliminate grey areas around data sovereignty. ... One way that many companies are seeking to gain more control and visibility of their data is by repatriating specific data sets from public cloud environments over to on-premise storage or private clouds. This is not about reversing cloud technology; instead, repatriation is a sound way of achieving compliance with local legislation and ensuring there is no scope for questions over exactly where data resides. In some instances, repatriating data can improve performance, reduce cloud costs, and provide assurance that data is protected from foreign government access. Additionally, on-premise or private cloud setups can offer the highest levels of security from third-party risks for the most sensitive or proprietary data.

Daily Tech Digest - March 23, 2025


Quote for the day:

"Law of Leadership: A successful team with 100 members has 100 leaders." -- Lance Secretan


Citizen Development: The Wrong Strategy for the Right Problem

The latest generation of citizen development offenders are the low-code and no-code platforms that promise to democratize software development by enabling those without formal programming education to build applications. These platforms fueled enthusiasm around speedy app development — especially among business users — but their limitations are similar to the generations of platforms that came before. ... Don't get me wrong — the intentions behind citizen development come from a legitimate place. More often than not, IT needs to deliver faster to keep up with the business. But these tools promise more than they can deliver and, worse, usually result in negative unintended consequences. Think of it as a digital house of cards, where disparate apps combine to create unscalable systems that can take years and/or millions of dollars to fix. ... Struggling to keep up with business demands is a common refrain for IT teams. Citizen development has attempted to bridge the gap, but it typically creates more problems than solutions. Rather than relying on workarounds and quick fixes that potentially introduce security risks and inefficiency — and certainly rather than disintermediating IT — businesses should embrace the power of GenAI to support their developers and ultimately to make IT more responsive and capable.


Researchers Test a Blockchain That Only Quantum Computers Can Mine

The quantum blockchain presents a path forward for reducing the environmental cost of digital currencies. It also provides a practical incentive for deploying early quantum computers, even before they become fully fault-tolerant or scalable. In this architecture, the cost of quantum computing — not electricity — becomes the bottleneck. That could shift mining centers away from regions with cheap energy and toward countries or institutions with advanced quantum computing infrastructure. The researchers also argue that this architecture offers broader lessons. ... “Beyond serving as a proof of concept for a meaningful application of quantum computing, this work highlights the potential for other near-term quantum computing applications using existing technology,” the researchers write. ... One of the major limitations, as mentioned, is cost. Quantum computing time remains expensive and limited in availability, even as energy use is reduced. At present, quantum PoQ may not be economically viable for large-scale deployment. As progress continues in quantum computing, those costs may be mitigated, the researchers suggest. D-Wave machines also use quantum annealing — a different model from the quantum computing platforms pursued by companies like IBM and Google. 


Enterprise Risk Management: How to Build a Comprehensive Framework

Risk objects are the human capital, physical assets, documents and concepts (e.g., “outsourcing”) that pose risk to an organization. Stephen Hilgartner, a Cornell University professor, once described risk objects as “sources of danger” or “things that pose hazards.” The basic idea is that any simple action, like driving a car, has associated risk objects – such as the driver, the car and the roads. ... After the risk objects have been defined, the risk management processes of identification, assessment and treatment can begin. The goal of ERM is to develop a standardized system that not only acknowledges the risks and opportunities in every risk object but also assesses how the risks can impact decision-making. For every risk object, hazards and opportunities must be acknowledged by the risk owner. Risk owners are the individuals managerially accountable for the risk objects. These leaders and their risk objects establish a scope for the risk management process. Moreover, they ensure that all risks are properly managed based on approved risk management policies. To complete all aspects of the risk management process, risk owners must guarantee that risks are accurately tied to the budget and organizational strategy.
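
A minimal sketch of how risk objects, their owners, and their associated risks might be represented, purely as an illustration of the structure described above; the fields, names, and figures are hypothetical:

from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    likelihood: float          # 0..1 probability over the planning horizon
    impact: float              # expected monetary impact if it materializes
    treatment: str = "accept"  # accept / mitigate / transfer / avoid

    def expected_loss(self):
        return self.likelihood * self.impact

@dataclass
class RiskObject:
    name: str                  # e.g. a driver, a vehicle, "outsourcing"
    owner: str                 # the individual managerially accountable
    risks: list = field(default_factory=list)

    def total_exposure(self):
        return sum(r.expected_loss() for r in self.risks)

fleet = RiskObject("delivery vehicles", owner="Head of Operations")
fleet.risks.append(Risk("collision during delivery", likelihood=0.05, impact=80_000, treatment="transfer"))
fleet.risks.append(Risk("vehicle downtime", likelihood=0.20, impact=10_000, treatment="mitigate"))
print(f"{fleet.name} ({fleet.owner}): expected exposure ${fleet.total_exposure():,.0f}")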


Choosing consequence-based cyber risk management to prioritize impact over probability, redefine industrial security

Nonetheless, the biggest challenge for applying consequence-based cyber risk management is the availability of holistic information regarding cyber events and their outcomes. Most companies struggle to gauge the probable damage of attacks based on inadequate historical data or broken-down information systems. This has led to increased adoption of analytics and threat intelligence technologies to enable organizations to simulate the ‘most likely’ outcome of cyber-attacks and predict probable situations. ... “A winning strategy incorporates prevention and recovery. Proactive steps like vulnerability assessments, threat hunting, and continuous monitoring reduce the likelihood and impact of incidents,” according to Morris. “Organizations can quickly restore operations when incidents occur with robust incident response plans, disaster recovery strategies, and regular simulation exercises. This dual approach is essential, especially amid rising state-sponsored cyberattacks.” ... “To overcome data limitations, organizations can combine diverse data sources, historical incident records, threat intelligence feeds, industry benchmarks, and expert insights, to build a well-rounded picture,” Morris detailed. “Scenario analysis and qualitative assessments help fill in gaps when quantitative data is sparse. Engaging cross-functional teams for continuous feedback ensures these models evolve with real-world insights.”


The CTO vs. CMO AI power struggle - who should really be in charge?

An argument can be made that the CTO should oversee everything technical, including AI. Your CTO is already responsible for your company's technology infrastructure, data security, and system reliability, and AI directly impacts all these areas. But does that mean the CTO should dictate what AI tools your creative team uses? Does the CTO understand the fundamentals of what makes good content or the company's marketing objectives? That sounds more like a job for your creative team or your CMO. On the other hand, your CMO handles everything from brand positioning and revenue growth to customer experiences. But does that mean they should decide what AI tools are used for coding or managing company-wide processes or even integrating company data? You see the problem, right? ... Once a tool is chosen, our CTO will step in. They perform their due diligence to ensure our data stays secure, confidential information isn't leaked, and none of our secrets end up on the dark web. That said, if your organization is large enough to need a dedicated Chief AI Officer (CAIO), their role shouldn't be deciding AI tools for everyone. Instead, they're a mediator who connects the dots between teams. 


Why Cyber Quality Is the Key to Security

To improve security, organizations must adopt foundational principles and assemble teams accountable for monitoring safety concerns. Cyber resilience and cyber quality are two pillars that every institution — especially at-risk ones — must embrace. ... Do we have a clear and tested cyber resilience plan to reduce the risk and impact of cyber threats to our business-critical operations? Is there a designated team or individual focused on cyber resilience and cyber quality? Are we focusing on long-term strategies, targeted at sustainable and proactive solutions? If the answer to any of these questions is no, something needs to change. This is where cyber quality comes in. Cyber quality is about prioritization and sustainable long-term strategy for cyber resilience, and is focused on proactive/preventative measures to ensure risk mitigation. This principle is not a marked checkbox on controls that show very little value in the long run. ... Technology alone doesn't solve cybersecurity problems — people are the root of both the challenges and the solutions. By embedding cyber quality into the core of your operations, you transform cybersecurity from a reactive cost center into a proactive enabler of business success. Organizations that prioritize resilience and proactive governance will not only mitigate risks but thrive in the digital age. 


ISO 27001: Achieving data security standards for data centers

Achieving ISO 27001 certification is not an overnight process. It’s a journey that requires commitment, resources, and a structured approach in order to align the organization’s information security practices with the standard’s requirements. The first step in the process is conducting a comprehensive risk assessment. This assessment involves identifying potential security risks and vulnerabilities in the data center’s infrastructure and understanding the impact these risks might have on business operations. This forms the foundation for the ISMS and determines which security controls are necessary. ... A crucial, yet often overlooked, aspect of ISO 27001 compliance is the proper destruction of data. Data centers are responsible for managing vast amounts of sensitive information and ensuring that data is securely sanitized when it is no longer needed is a critical component of maintaining information security. Improper data disposal can lead to serious security risks, including unauthorized access to confidential information and data breaches. ... Whether it's personal information, financial records, intellectual property, or any other type of sensitive data, the potential risks of improper disposal are too great to ignore. Data breaches and unauthorized access can result in significant financial loss, legal liabilities, and reputational damage.


Understanding code smells and how refactoring can help

Typically, code smells stem from a failure to write source code in accordance with necessary standards. In other cases, it means that the documentation required to clearly define the project's development standards and expectations was incomplete, inaccurate or nonexistent. There are many situations that can cause code smells, such as improper dependencies between modules, an incorrect assignment of methods to classes or needless duplication of code segments. Code that is particularly smelly can eventually cause profound performance problems and make business-critical applications difficult to maintain. It's possible that the source of a code smell may cause cascading issues and failures over time. ... The best time to refactor code is before adding updates or new features to an application. It is good practice to clean up existing code before programmers add any new code. Another good time to refactor code is after a team has deployed code into production. After all, developers have more time than usual to clean up code before they're assigned a new task or a project. One caveat to refactoring is that teams must make sure there is complete test coverage before refactoring an application's code. Otherwise, the refactoring process could simply restructure broken pieces of the application for no gain. 
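
As a small, hypothetical illustration of the "needless duplication of code segments" smell and the kind of refactoring that removes it (names and business rules are invented):

# Before: the same discount logic is duplicated in two places (a classic smell).
# A change to the rounding rule would have to be made twice.
def invoice_total(items):
    total = sum(i["price"] * i["qty"] for i in items)
    if total > 1000:
        total = round(total * 0.9, 2)   # 10% bulk discount
    return total

def quote_total(items):
    total = sum(i["price"] * i["qty"] for i in items)
    if total > 1000:
        total = round(total * 0.9, 2)   # duplicated discount rule
    return total

# After: the shared rule is extracted into one well-named helper, so behaviour
# stays identical but there is a single place to maintain it.
def _discounted_subtotal(items, threshold=1000, discount=0.10):
    subtotal = sum(i["price"] * i["qty"] for i in items)
    if subtotal > threshold:
        subtotal = round(subtotal * (1 - discount), 2)
    return subtotal

def invoice_total_refactored(items):
    return _discounted_subtotal(items)

def quote_total_refactored(items):
    return _discounted_subtotal(items)

# A quick regression check, standing in for the full test coverage the article
# recommends having in place before any refactoring.
sample = [{"price": 300.0, "qty": 2}, {"price": 250.0, "qty": 3}]
assert invoice_total(sample) == invoice_total_refactored(sample)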


Handling Crisis: Failure, Resilience And Customer Communication

Failure is something leaders want to reduce as much as they can, and it’s possible to design products with graceful failure in mind. It’s also called graceful degradation and can be thought of as a tolerance to faults or faulting. It can mean that core functions remain usable as parts or connectivity fails. You want any failure to cause as little damage or lack of service as possible. Think of it as a stopover on the way to failing safely: When our plane engines fail, we want them to glide, not plummet. ... Resilience requires being on top of it all: monitoring, visibility, analysis and meeting and exceeding the SLAs your customers demand. For service providers, particularly in tech, you can focus on a full suite of telemetry from the operational side of the business and decide your KPIs and OKRs. You can also look at your customers’ perceptions via churn rate, customer lifetime value, Net Promoter Score and so on. ... If you are to cope with the speed and scale of potential technical outages, this is essential. Accuracy, then speed, should be your priorities when it comes to communicating about outages. The more of both, the better, but accuracy is the most important, as it allows customers to make informed choices as they manage the impact on their own businesses.


Approaches to Reducing Technical Debt in Growing Projects

Technical debt, also known as “tech debt,” refers to the extra work developers incur by taking shortcuts or delaying necessary code improvements during software development. Though sometimes these shortcuts serve a short-term goal — like meeting a tight release deadline — accumulating too many compromises often results in buggy code, fragile systems, and rising maintenance costs. ... Massive rewrites can be risky and time-consuming, potentially halting your roadmap. Incremental refactoring offers an alternative: focus on high-priority areas first, systematically refining the codebase without interrupting ongoing user access or new feature development. ... Not all parts of your application contribute to technical debt equally. Concentrate on elements tied directly to core functionality or user satisfaction, such as payment gateways or account management modules. Use metrics like defect density or customer support logs to identify “hotspots” that accumulate excessive technical debt. ... Technical debt often creeps in when teams skip documentation, unit tests, or code reviews to meet deadlines. A clear “definition of done” helps ensure every feature meets quality standards before it’s marked complete.
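
One way to make the "hotspot" idea concrete is a small scoring script; the sketch below ranks modules by defects per KLOC, amplified by recent churn, with module names and figures invented for illustration:

modules = [
    # (name, lines_of_code, open_defects, commits_last_quarter)
    ("payments/gateway", 12_000, 48, 210),
    ("accounts/profile",  8_500, 10,  35),
    ("reports/exports",  20_000, 22,  15),
]

def hotspot_score(loc, defects, churn):
    density = defects / (loc / 1000)      # defects per KLOC
    return density * (1 + churn / 100)    # frequent change amplifies the risk

ranked = sorted(modules, key=lambda m: hotspot_score(m[1], m[2], m[3]), reverse=True)
for name, loc, defects, churn in ranked:
    print(f"{name:20s} score={hotspot_score(loc, defects, churn):6.2f}")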

Daily Tech Digest - January 27, 2025


Quote for the day:

"Your problem isn't the problem. Your reaction is the problem." -- Anonymous


Revolutionizing Investigations: The Impact of AI in Digital Forensics

One of the most significant challenges in modern digital forensics, both in the corporate sector and law enforcement, is the abundance of data. Due to increasing digital storage capacities, even mobile devices today can accumulate up to 1TB of information. ... Digital forensics started benefiting from AI features a few years ago. The first major development in this regard was the implementation of neural networks for picture recognition and categorization. This powerful tool has been instrumental for forensic examiners in law enforcement, enabling them to analyze pictures from CCTV and seized devices more efficiently. It significantly accelerated the identification of persons of interest and child abuse victims as well as the detection of case-related content, such as firearms or pornography. ... No matter how advanced, AI operates within the boundaries of its training, which can sometimes be incomplete or imperfect. Large language models, in particular, may produce inaccurate information if their training data lacks sufficient detail on a given topic. As a result, investigations involving AI technologies require human oversight. In DFIR, validating discovered evidence is standard practice. It is common to use multiple digital forensics tools to verify extracted data and manually check critical details in source files. 


Is banning ransomware payments key to fighting cybercrime?

Implementing a payment ban is not without challenges. In the short term, retaliatory attacks are a real possibility as cybercriminals attempt to undermine the policy. However, given the prevalence of targets worldwide, I believe most criminal gangs will simply focus their efforts elsewhere. The government’s resolve would certainly be tested if payment of a ransom was seen as the only way to avoid public health data being leaked, energy networks being crippled, or preventing a CNI organization from going out of business. In such cases, clear guidelines as well as technical and financial support mechanisms for affected organizations are essential. Policy makers must develop playbooks for such scenarios and run education campaigns that raise awareness about the policy’s goals, emphasizing the long-term benefits of standing firm against ransom demands. That said, increased resilience—both technological and organizational—are integral to any strategy. Enhanced cybersecurity measures are critical, in particular a zero trust strategy that reduces an organization’s attack surface and stops hackers from being able to move laterally in the network. The U.S. federal government has already committed to move to zero trust architectures.


Building a Data-Driven Culture: Four Key Elements

Why is building a data-driven culture incredibly hard? Because it calls for a behavioral change across the organization. This work is neither easy nor quick. To better appreciate the scope of this challenge, let’s do a brief thought exercise. Take a moment to reflect on these questions: How involved are your leaders in championing and directly following through on data-driven initiatives? Do you know whether your internal stakeholders are all equipped and empowered to use data for all kinds of decisions, strategic or tactical? Does your work environment make it easy for people to come together, collaborate with data, and support one another when they’re making decisions based on the insights? Does everyone in the organization truly understand the benefits of using data, and are success stories regularly shared internally to inspire people to action? If your answers to these questions are “I’m not sure” or “maybe,” you’re not alone. Most leaders assume in good faith that their organizations are on the right path. But they struggle when asked for concrete examples or data-backed evidence to support these gut-feeling assumptions. The leaders’ dilemma becomes even more clear when you consider that the elements at the core of the four questions above — leadership intervention, data empowerment, collaboration, and value realization — are inherently qualitative. Most organizational metrics or operational KPIs don’t capture them today. 


How CIOs Should Prepare for Product-Led Paradigm Shift

Scaling product centricity in an organization is like walking a tightrope. Leaders must drive change while maintaining smooth operations. This requires forming cross-functional teams, outcome-based evaluation and navigating multiple operating models. As a CIO, balancing change while facing the internal resistance of a risk-averse, siloed business culture can feel like facing a strong wind on a high wire. ...The key to overcoming this is to demonstrate the benefits of a product-centric approach incrementally, proving its value until it becomes the norm. To prevent cultural resistance from derailing your vision for a more agile enterprise, leverage multiple IT operating models with a service or value orientation to meet the ambitious expectations of CEOs and boards. Engage the C-suite by taking a holistic view of how democratized IT can be used to meet stakeholder expectations. Every organization has a business and enterprise operating model to create and deliver value. A business model might focus on manufacturing products that delight customers, requiring the IT operating model to align with enterprise expectations. This alignment involves deciding whether IT will merely provide enabling services or actively partner in delivering external products and services.


CISOs gain greater influence in corporate boardrooms

"As the role of the CISO grows more complex and critical to organisations, CISOs must be able to balance security needs with business goals, culture, and articulate the value of security investments." She highlights the importance of strong relationships across departments and stakeholders in bolstering cybersecurity and privacy programmes. The study further discusses the positive impact of having board members with a cybersecurity background. These members foster stronger relationships with security teams and have more confidence in their organisation's security stance. For instance, boards with a CISO member report higher effectiveness in setting strategic cybersecurity goals and communicating progress, compared to boards without such expertise. CISOs with robust board relationships report improved collaboration with IT operations and engineering, allowing them to explore advanced technologies like generative AI for enhanced threat detection and response. However, gaps persist in priority alignment between CISOs and boards, particularly around emerging technologies, upskilling, and revenue growth. Expectations for CISOs to develop leadership skills add complexity to their role, with many recognising a gap in business acumen, emotional intelligence, and communication. 


Researchers claim Linux kernel tweak could reduce data center energy use by 30%

Researchers at the University of Waterloo's Cheriton School of Computer Science, led by Professor Martin Karsten and including Peter Cai, identified inefficiencies in network traffic processing for communications-heavy server applications. Their solution, which involves rearranging operations within the Linux networking stack, has shown improvements in both performance and energy efficiency. The modification, presented at an industry conference, increases throughput by up to 45 percent in certain situations without compromising tail latency. Professor Karsten likened the improvement to optimizing a manufacturing plant's pipeline, resulting in more efficient use of data center CPU caches. Professor Karsten collaborated with Joe Damato, a distinguished engineer at Fastly, to develop a non-intrusive kernel change consisting of just 30 lines of code. This small but impactful modification has the potential to reduce energy consumption in critical data center operations by as much as 30 percent. Central to this innovation is a feature called IRQ (interrupt request) suspension, which balances CPU power usage with efficient data processing. By reducing unnecessary CPU interruptions during high-traffic periods, the feature enhances network performance while maintaining low latency during quieter times.


GitHub Desktop Vulnerability Risks Credential Leaks via Malicious Remote URLs

While the credential helper is designed to return a message containing the credentials that are separated by the newline control character ("\n"), the research found that GitHub Desktop is susceptible to a case of carriage return ("\r") smuggling whereby injecting the character into a crafted URL can leak the credentials to an attacker-controlled host. "Using a maliciously crafted URL it's possible to cause the credential request coming from Git to be misinterpreted by Github Desktop such that it will send credentials for a different host than the host that Git is currently communicating with thereby allowing for secret exfiltration," GitHub said in an advisory. A similar weakness has also been identified in the Git Credential Manager NuGet package, allowing for credentials to be exposed to an unrelated host. ... "While both enterprise-related variables are not common, the CODESPACES environment variable is always set to true when running on GitHub Codespaces," Ry0taK said. "So, cloning a malicious repository on GitHub Codespaces using GitHub CLI will always leak the access token to the attacker's hosts." ... In response to the disclosures, the credential leakage stemming from carriage return smuggling has been treated by the Git project as a standalone vulnerability (CVE-2024-52006, CVSS score: 2.1) and addressed in version v2.48.1.
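
To illustrate the class of bug (this is not GitHub Desktop's actual parser), the sketch below shows how a crafted URL containing an encoded carriage return can cause a lenient, line-based credential parser to attribute credentials to an attacker-controlled host; all hosts and URLs are hypothetical:

from urllib.parse import unquote

def parse_credential_request(raw, newline="\n"):
    """Parse key=value lines; the last occurrence of a key wins."""
    fields = {}
    for line in raw.split(newline):
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key.strip()] = value.strip()
    return fields

# Git asks the helper for credentials for github.com, but the URL path carries
# an encoded carriage return followed by a fake "host=" line.
malicious_path = unquote("innocent%0dhost=evil.example.com")
request = f"protocol=https\nhost=github.com\npath={malicious_path}\n"

strict = parse_credential_request(request, newline="\n")
lenient = parse_credential_request(request.replace("\r", "\n"))

print(strict["host"])    # github.com        (the \r stays inside the path value)
print(lenient["host"])   # evil.example.com  (the \r is treated as a new line)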


The No-Code Dream: How to Build Solutions Your Customer Truly Needs

What's excellent about no-code is that you can build a platform that won't require your customers to be development professionals — but will allow customization. That's the best approach: create a blank canvas for people, and they will take it from there. Whether it's surveys, invoices, employee records, or something completely different, developers have the tools to make it visually appealing to your customers, making it more intuitive for them. I also want to break the myth that no code doesn't allow effective data management. It is possible to create a no-code platform that will empower users to perform complex mathematical operations seamlessly and to support managing interrelated data. This means users' applications will be more robust than their competitors and produce more meaningful insights. ... As a developer, I am passionate about evolving tech and our industry's challenges. I am also highly aware of people's concerns over the security of many no-code solutions. Security is a critical component of any software; no-code solutions are no exception. One-off custom software builds do not typically undergo the same rigorous security testing as widely used commercial software due to the high cost and time involved. This leaves them vulnerable to security breaches.


Digital Operations at Turning Point as Security and Skills Concerns Mount

The development of appropriate skills and capabilities has emerged as a critical challenge, ranking as a pressing concern in advancing digital operations. The talent shortage is most acute in North America and the media industry, where fierce competition for skilled professionals coincides with accelerating digital transformation initiatives. Organizations face a dual challenge: upskilling existing staff while competing for scarce talent in an increasingly competitive market. The report suggests this skills gap could potentially slow the adoption of new technologies and hamper operational advancement if not adequately addressed. "The rapid evolution of how AI is being applied to many parts of jobs to be done is unmatched," Armandpour said. "Raising awareness, educating, and fostering a rich learning environment for all employees is essential." ... "Service outages today can have a much greater impact due to the interdependencies of modern IT architectures, so security is especially critical," Armandpour said. "Organizations need to recognize security as a critical business imperative that helps power operational resilience, customer trust, and competitive advantage." What sets successful organizations apart is the prioritization of defining robust security requirements upfront and incorporating security-by-design into product development cycles. 


Is ChatGPT making us stupid?

In fact, one big risk right now is how dependent developers are becoming on LLMs to do their thinking for them. I’ve argued that LLMs help senior developers more than junior developers, precisely because more experienced developers know when an LLM-driven coding assistant is getting things wrong. They use the LLM to speed up development without abdicating responsibility for that development. Junior developers can be more prone to trusting LLM output too much and don’t know when they’re being given good code or bad. Even for experienced engineers, however, there’s a risk of entrusting the LLM to do too much. For example, Mike Loukides of O’Reilly Media went through their learning platform data and found developers show “less interest in learning about programming languages,” perhaps because developers may be too “willing to let AI ‘learn’ the details of languages and libraries for them.” He continues, “If someone is using AI to avoid learning the hard concepts—like solving a problem by dividing it into smaller pieces (like quicksort)—they are shortchanging themselves.” Short-term thinking can yield long-term problems. As noted above, more experienced developers can use LLMs more effectively because of experience. If a developer offloads learning for quick-fix code completion at the long-term cost of understanding their code, that’s a gift that will keep on taking.
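
For reference, the divide-and-conquer idea Loukides alludes to, quicksort splitting a problem around a pivot and solving the smaller pieces recursively, looks like this as a textbook sketch (not optimized, not in-place):

def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x <= pivot]  # solve the smaller pieces...
    larger = [x for x in rest if x > pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)  # ...then recombine

print(quicksort([7, 2, 9, 4, 4, 1]))  # [1, 2, 4, 4, 7, 9]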

Daily Tech Digest - December 11, 2024

Low-tech solutions to high-tech cybercrimes

The growing quality of deepfakes, including real-time deepfakes during live video calls, invites scammers, criminals, and even state-sponsored attackers to convincingly bypass security measures and steal identities for all kinds of nefarious purposes. AI-enabled voice cloning has already proved to be a massive boon for phone-related identity theft. AI enables malicious actors to bypass face recognition protection. And AI-powered bots are being deployed to intercept and use one-time passwords in real time. More broadly, AI can accelerate and automate just about any cyberattack. ... Once established (not in writing… ), the secret word can serve as a fast, powerful way to instantly identify someone. And because it’s not digital or stored anywhere on the Internet, it can’t be stolen. So if your “boss” or your spouse calls you to ask you for data or to transfer funds, you can ask for the secret word to verify it’s really them. ... Farrow emphasizes a simple way to foil spyware: reboot your phone every day. He points out that most spyware is purged with a reboot. So rebooting every day makes sure that no spyware remains on your phone. He also stresses the importance of keeping your OS and apps updated to the latest version.


7 Essential Trends IT Departments Must Tackle In 2025

Taking responsibility for cybersecurity will remain a key function of IT departments in 2025 as organizations face off against increasingly sophisticated and frequent attacks. Even as businesses come to understand that everyone from the boardroom to the shop floor has a part to play in preventing attacks, IT teams will inevitably be on the front line, with the job of securing networks, managing update and installation schedules, administering access protocols and implementing zero-trust measures. ... In 2025, AIOps are critical to enabling businesses to benefit from real-time resource optimization, automated decision-making and predictive incident resolution. This should empower the entire workforce, from marketing to manufacturing, to focus on innovation and high-value tasks rather than repetitive technical work best left to machines. ... With technology functions playing an increasingly integral role in business growth, other C-level roles have emerged to take on some of the responsibilities. As well as Chief Data Officers (CDOs) and Chief Information Security Officers (CISOs), it’s increasingly common for organizations to appoint Chief AI Officers (CAIOs), and as the role of technology in organizations continues to evolve, more C-level positions are likely to become critical.


Passkey adoption by Australian govt, banks drives wider passwordless authentication

“A key change has been to the operation of the security protocols that underpin passkeys and passwordless authentication. As this has improved over time, it has engendered more trust in the technology among technology teams and organisations, leading to increased adoption and use.” “At the same time, users have become more comfortable with biometrics to authenticate to digital services.” Implementation and enablement have also improved, leveraging templates and no-code, drag-and-drop orchestration to “allow administrators to swiftly design, test and deploy various out-of-the-box passwordless registration and authentication experiences for diverse customer identity types, all at scale, with minimal manual setup.” ... Banks are among the major drivers of passkey adoption in Australia. According to an article in the Sydney Morning Herald, National Australia Bank (NAB) chief security officer Sandro Bucchianeri says passwords are “terrible” – and on the way out. ... Specific questions pertaining to passkeys include, “Do you agree or disagree with including use of a passkey as an alternative first-factor identity authentication process?” and “Does it pose any security or fraud risks? If so, please describe these in detail.”


Why crisis simulations fail and how to fix them

Communication gaps are particularly common between technical leadership and business executives. These teams work in silos, which often causes misalignment and miscommunication. Technical staff use jargon that executives don’t fully understand, while business priorities may be unclear to the technical team. As a result, it becomes difficult to discern what requires immediate attention and communication versus what constitutes noise. This slows down critical decisions. Now throw in third-party vendors or MSPs, and this just amplifies the confusion and adds to the chaos. Role confusion is an interesting challenge. Crisis management playbooks typically have roles assigned to tasks, but no detail on what these roles mean. I have seen teams come into an exercise confident about the name of their role, but no idea what the role means in terms of actual execution. Many times, teams don’t even know that a role exists within the team or who owns it. A fitting example is a “crisis simulation secretary” — someone tasked with recording the notes for the meetings, scheduling the calls, making sure everyone has the correct numbers to dial in, etc. This may seem trivial, but it is a critical role, as you do not want to waste precious minutes trying to dial into a call. 


What CIOs are in for with the EU’s Data Act

There are many things the CIO will have to perform in light of Data Act provisions. In the meantime, as explained by Perugini, CIOs must do due diligence on the data their companies collect from connected devices and understand where they are in the value chain — whether they are the owners, users, or recipients. “If the company produces a connected industrial machine and gives it to a customer and then maintains the machine, it finds itself collecting the data as the owner,” she says. “If the company is a customer of the machine, it’s a user and co-generates the data. But if it’s a company that acquires the data of the machine, it’s a recipient because the user or the manufacturer has allowed it to make them available or participates in a data marketplace. CIOs can also see if there’s data generated by others on the market that can be used for internal analysis, and procure it. Any use or exchange of data must be regulated by an agreement between the interested parties with contracts.” The CIO will also have to evaluate contracts with suppliers, ensuring terms are compliant, and negotiate with suppliers to access data in a direct and interoperable way. Plus, the CIO has to evaluate whether the company’s IT infrastructure is suitable to guarantee interoperability and security of data as per GDPR. 


How slowing down can accelerate your startup’s growth

In startup culture, there’s a pervasive pressure to say “yes” to every opportunity, to grow at all costs. But I’ve learned that restraint is an underrated virtue in business. At Aloha, we had to make tough choices to stay on the path of sustainable growth. We focused on our core mission and turned down attractive but potentially distracting opportunities that would have taken resources away from what mattered most. ... One of the most persistent traps for startups is the “growth at all costs” mindset. Top-line growth can be impressive, but if it’s achieved without a path to profitability, it’s a house of cards. When I joined Aloha, we refocused our efforts on creating a financially sustainable business. This meant dialing back on some of our expansion plans to ensure we were growing within our means. ... In a world that worships speed, it takes courage to slow down. It’s not easy to resist the siren call of hypergrowth. But when you do, you create the conditions for a business that can weather storms, adapt to change, and keep thriving. Building a company on these principles doesn’t mean abandoning growth—it means ensuring that growth is meaningful and sustainable. Slow and steady may not be glamorous, but it works.


Why business teams must stay out of application development

Citizen development is when non-tech users build business applications using no-code/low-code platforms, which automate code generation. Imagine that you need a simple leave application tool within the organization. Enterprises can’t afford to deploy their busy and expensive professional resources to build an internal tool. So, they go the citizen development way. ... Proponents of citizen development argue that the apps built with low-code platforms are highly customizable. What they mean is that they have the ability to mix and match elements and change colors. For enterprise apps, this is all in a day’s work. True customizability comes from real editable code that empowers developers to hand-code parts to handle complex and edge cases. Business users cannot build these types of features because low-code platforms themselves are not designed to handle this. ... Finally, the most important loophole that citizen development creates is security. A vast majority of security attacks happen due to human error, such as phishing scams, downloading ransomware, or improper credential management. In fact, IBM found that there has been a 71% increase this year in cyberattacks that used stolen or compromised credentials.


The rise of observability: A new era in IT Operations

Observability empowers organisations to not just detect that a problem exists, but to understand why it’s happening and how to resolve it. It’s the difference between knowing that a car has broken down and having a detailed diagnostic report that pinpoints the exact issue and suggests an effective repair. The transition from monitoring to observability is not without its challenges. Some organisations find themselves struggling with legacy systems and entrenched processes that resist change. Observability represents a shift from traditional IT operations, requiring a new mindset and skill set. However, the benefits of implementing observability practices far outweigh the initial challenges. While there may be concerns about skill gaps, modern observability platforms are designed to be user-friendly and accessible to team members at all levels. ... Implementing observability results in clear, measurable benefits, especially around improved service reliability. Because teams can identify and resolve issues quickly and proactively, downtime is minimised or eradicated. Enhanced reliability leads to better customer experiences, which is a crucial differentiator in a competitive market where user satisfaction is key.


5 Trends Reshaping the Data Landscape

With increased interest in generative AI and predictive AI, as well as supporting traditional analytical workloads, “we’re seeing a pretty massive increase of data sprawl across industries,” he observed. “They track with the realization among many of our customers that they’ve created a lot of different versions of the truth and silos of data which have different systems, both on-prem and in the cloud.” ... If a data team “can’t get the data where it needs to go, they’re not going to be able to analyze it in an efficient, secure way,” he said. “Leaders have to think about scale in new ways. There are so many systems downstream that consume data. Scaling these environments as the data is growing in many cases by almost double-digit percentages year over year is becoming unwieldy.” A proactive approach is to address these costs and silos through streamlining and simplification on a single common platform, Kethireddy urged, noting Ocient’s approach to “take the path to reducing the amount of hardware and cloud instances it takes to analyze compute-intensive workloads. We focus on minimizing costs associated with the system footprint and energy consumption.”


Serverless Computing: The Future of Programming and Application Deployment Innovations

Serverless computing improves automated scaling by adding and removing function instances as workload changes, letting developers focus on code rather than capacity planning. Cloud providers automatically distribute incoming traffic across the multiple interconnected instances of a serverless function. This elasticity means developers should design applications to handle large volumes of traffic on top of the provider's cloud infrastructure. At the same time, serverless functions run only for limited durations, typically from milliseconds to several minutes, so application code must be optimized with performance in mind. ... Cloud providers build security features such as encryption and access control into the cloud service infrastructure, along with automated security updates and patches, which supports rapid prototyping. However, serverless computing also has drawbacks. Cold start latency, the time taken to respond when a serverless function is first initiated, and the limited function lifecycle imposed by the architecture can drastically affect performance.
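
As a minimal sketch of the programming model (written in the AWS Lambda handler style; the initialization step is a placeholder, not a real model load), the example below shows the common way to soften cold-start latency: run expensive setup once per container instance, outside the per-request handler:

import json
import time

def load_model():
    time.sleep(0.5)                 # stand-in for loading weights, opening connections, etc.
    return {"version": "v1"}

MODEL = load_model()                # executed once at cold start, reused by warm invocations

def handler(event, context=None):
    """Per-request entry point: should stay small and fast."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}", "model": MODEL["version"]}),
    }

# Local smoke test; in a real deployment the platform invokes handler() for us.
print(handler({"name": "serverless"}))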
 


Quote for the day:

"If you want to be successful prepare to be doubted and tested." -- @PilotSpeaker

Daily Tech Digest - September 25, 2024

When technical debt strikes the security stack

“Security professionals are not immune from acquiring their own technical debt. It comes through a lack of attention to periodic reviews and maintenance of security controls,” says Howard Taylor, CISO of Radware. “The basic rule is that security rapidly decreases if it is not continuously improved. The time will come when a security incident or audit will require an emergency collection of the debt.” ... The paradox of security technical debt is that many departments concurrently suffer from both solution debt that causes gaps in coverage or capabilities, as well as rampant tool sprawl that eats up budget and makes it difficult to effectively use tools. ... “Detection engineering is often a large source of technical debt: over the years, a great detection engineering team can produce many great detections, but the reliability of those detections can start to fade as the rest of the infrastructure changes,” he says. “Great detections become less reliable over time, the authors leave the company, and the detection starts to be ignored. This leads to waste of energy and very often cost.” ... Role sprawl is another common scenario that contributes significantly to security debt, says Piyush Pandey, CEO at Pathlock.


Google Announces New Gmail Security Move For Millions

From the Gmail perspective, the security advisor will include a security sandbox where all email attachments will be scanned for malicious software, employing a virtual environment to safely analyze those files. Google said the tool can “delay message delivery, allow customization of scan rules, and automatically move suspicious messages to the spam folder.” Gmail also gets enhanced safe browsing, which gives additional protection by scanning incoming messages for malicious content before they are actually delivered. ... A Google spokesperson told me that the Gemini AI app is to get enterprise-grade security protections in core services now. With availability from October 15, for customers running on a Workspace Business, Enterprise, or Frontline plan, Google said that “with all of the core Workspace security and privacy controls in place, companies have the tools to deploy AI securely, privately and responsibly in their organizations in the specific way that they want it.” The critical components of this security move include ensuring Gemini is subject to the same privacy, security, and compliance policies as the rest of the Workspace core services, such as Gmail and Docs.


The Next Big Interconnect Technology Could Be Plastic

e-Tube technology is a new, scalable interconnect platform that uses radio wave transmission over a dielectric waveguide made of – drumroll – common plastic material such as low-density polyethylene (LDPE). While waveguide theory has been studied for many years, only a few organizations have applied the technology for mainstream data interconnect applications. Because copper and optical interconnects are historically entrenched technologies, most research has focused on extending copper life or improving energy and cost efficiency of optical solutions. But now there is a shift toward exploring the e-Tube option that delivers a combination of benefits that copper and optical cannot, including energy-efficiency, low latency, cost-efficiency and scalability to multi-terabit network speeds required in next-gen data centers. The key metrics for data center cabling are peak throughput, energy efficiency, low latency, long cable reach and cost that enables mass deployment. Across these metrics, e-Tube technology provides advantages compared to copper and optical technologies. Traditionally, copper-based interconnects have been considered an inexpensive and reliable choice for short-reach data center applications, such as top-of-rack switch connections. 


From Theory to Action: Building a Strong Cyber Risk Governance Framework

Setting your risk appetite is about more than just throwing a number out there. It’s about understanding the types of risks you face and translating them into specific, measurable risk tolerance statements. For example, “We’re willing to absorb up to $1 million in cyber losses annually but no more.” Once you have that in place, you’ll find decision-making becomes much more straightforward. ... If your current cybersecurity budget isn't sufficient to handle your stated risk appetite, you may need to adjust it. One of the best ways to determine if your budget aligns with your risk appetite is by using loss exceedance curves (LECs). These handy charts allow you to visualize the forecasted likelihood and impact of potential cyber events. They help you decide where to invest more in cybersecurity and perhaps where even to cut back. ... One thing that a lot of organizations miss in their cyber risk governance framework is the effective use of cyber insurance. Here's the trick: cyber insurance shouldn’t be used to cover routine losses. Doing so will only lead to increased premiums. Instead, it should be your safety net for the larger, more catastrophic incidents – the kinds that keep executives awake at night.
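
A loss exceedance curve can be approximated with a simple Monte Carlo simulation. The sketch below assumes illustrative Poisson event frequencies and lognormal severities rather than calibrated figures, and reports the probability that annual losses exceed a few thresholds, including the $1 million tolerance mentioned above.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_losses(n_years=100_000, freq=2.0, sev_mu=11.0, sev_sigma=1.5):
    """Simulate total cyber losses per year: event counts ~ Poisson(freq),
    per-event severity ~ lognormal (parameters are illustrative assumptions)."""
    counts = rng.poisson(freq, size=n_years)
    return np.array([rng.lognormal(sev_mu, sev_sigma, size=c).sum() for c in counts])

def exceedance_probability(losses, threshold):
    """P(annual loss > threshold), i.e. one point on the loss exceedance curve."""
    return float((losses > threshold).mean())

losses = simulate_annual_losses()
for threshold in (250_000, 1_000_000, 5_000_000):
    print(f"P(loss > ${threshold:,}) = {exceedance_probability(losses, threshold):.1%}")
```

Plotting exceedance probability across a range of thresholds gives the full curve, which can then be compared against the stated risk tolerance and the cost of additional controls.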


Is Prompt Engineering Dead? How To Scale Enterprise GenAI Adoption

If you pick a model that is a poor fit for your use case, it will not be good at determining the context of the question and will fail to retrieve a reference point for the response. In those situations, the lack of the reference data needed to provide an accurate response contributes to a hallucination. While there are many situations where you would prefer the model to give no response at all rather than fabricate one, when no exact answer is available the model will instead take whatever data points it judges contextually relevant to the query and return an inaccurate answer. ... To leverage LLMs effectively at an enterprise scale, businesses need to understand their limitations. Prompt engineering and RAG can improve accuracy, but LLMs must be tightly limited in domain knowledge and scope. Each LLM should be trained for a specific use case, using a specific dataset, with data owners providing feedback. This removes the chance of confusing the model with information from different domains. The training process for LLMs differs from traditional machine learning, requiring human oversight and quality assurance by data owners.
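
One common guardrail for the "no exact answer" case is to gate generation on retrieval quality: if nothing in the knowledge base is sufficiently similar to the query, abstain instead of letting the model improvise. The sketch below uses a toy bag-of-words similarity purely for illustration; a real system would use embeddings and an actual LLM call.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], threshold: float = 0.3):
    """Return the best-matching document, or None if nothing clears the threshold."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in documents]
    score, best = max(scored)
    return best if score >= threshold else None

docs = ["refund policy: refunds are issued within 14 days of purchase"]
context = retrieve("what is the refund policy", docs)
if context is None:
    print("No reference data found - abstaining rather than guessing.")
else:
    print(f"Answer grounded in: {context!r}")  # a real system would pass this to the LLM
```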


AI disruption in Fintech: The dawn of smarter financial solutions

Financial institutions face diverse fraud challenges, from identity theft to fund transfer scams. Manual analysis of countless daily transactions is impractical. AI-based systems are empowering Fintechs to analyze data, detect anomalies, and flag suspicious activities. AI is monitoring transactions, filtering spam, and identifying malware. It can recognise social engineering patterns and alert users to potential threats. While fraudsters also use AI for sophisticated scams, financial institutions can leverage AI to identify synthetic content and distinguish between trustworthy and untrustworthy information. ... AI is transforming fintech customer service, enhancing retention and loyalty. It provides personalised, consistent experiences across channels, anticipating needs and offering value-driven recommendations. AI-powered chatbots handle common queries efficiently, allowing human agents to focus on complex issues. This technology enables 24/7 support across various platforms, meeting customer expectations for instant access. AI analytics predict customer needs based on financial history, transaction patterns, and life events, enabling targeted, timely offers. 
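
A minimal sketch of the anomaly-detection piece, assuming scikit-learn is available and using made-up transaction features; production fraud systems combine far more signals, rules, and human feedback loops.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Illustrative features per transaction: [amount, hour_of_day, merchant_risk_score]
normal = np.column_stack([
    rng.normal(60, 20, 1000),     # typical purchase amounts
    rng.integers(8, 22, 1000),    # daytime activity
    rng.normal(0.2, 0.05, 1000),  # low-risk merchants
])
suspicious = np.array([[4800, 3, 0.9], [2500, 4, 0.85]])  # large, late-night, high-risk

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies, 1 for inliers
flags = model.predict(np.vstack([normal[:3], suspicious]))
print(flags)  # the last two transactions would typically be flagged as -1
```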


CIOs Go Bold

In business, someone who is bold is an individual who exudes confidence and assertiveness and is business savvy. However, there is a fine line between being assertive and confident in a way that is admired and being perceived as overbearing and hard to work with. ... If your personal CIO goals include being bolder, the first step is to self-assess. Then, look around. You probably already know individuals in the organization, or colleagues in the C-suite, who are perceived as bold movers and shakers. What did they do to acquire this reputation? ... To get results from the ideas you propose, the outcomes of your ideas must address strategic goals and/or pain points in the business. Consequently, the first rule of thumb for CIOs is to think beyond the IT box. Instead, ask how an IT solution can help solve a particular challenge for the business. Digitalization is a prime example. Early digitalization projects started out with missions such as eliminating paper by digitalizing information and making it more searchable and accessible. Unfortunately, being able to search and access data was hard to quantify in terms of business results.


What does the Cyber Security and Resilience Bill mean for businesses?

The Bill aims to strengthen the UK’s cyber defences by ensuring that critical infrastructure and digital services are secure by protecting those services and supply chains. It’s expected to share common ground with NIS2 but there are also some elements that are notably absent. These differences could mean the Bill is not quite as burdensome as its European counterpart but equally, it runs the risk of making it not as effective. ... The problem now is that many businesses will be looking at both sets of regulations and scratching their heads in confusion. Should they assume that the Bill will follow the trajectory of NIS2 and make preparations accordingly or should they assume it will continue to take a lighter touch and one that may not even apply to them? There’s no doubt that NIS2 will introduce a significant compliance burden with one report suggesting it will cost upwards of 31.2bn euros per year. Then there’s the issue of those that will need to comply with both sets of regulations i.e. those entities that either supply to customers or have offices on the continent. They will be looking for the types of commonalities we’ve explored here in order to harmonise their compliance efforts and achieve economies of scale. 


3 Key Practices for Perfecting Cloud Native Architecture

As microservices proliferate, managing their communication becomes increasingly complex. Service meshes like Istio or Linkerd offer a solution by handling service discovery, load balancing, and secure communication between services. This allows developers to focus on building features rather than getting bogged down by the intricacies of inter-service communication. ... Failures are inevitable in cloud native environments. Designing microservices with fault isolation in mind helps prevent a single service failure from cascading throughout the entire system. By implementing circuit breakers and retry mechanisms, organizations can enhance the resilience of their architecture, ensuring that their systems remain robust even in the face of unexpected challenges. ... Traditional CI/CD pipelines often become bottlenecks during the build and testing phases. To overcome this, modern CI/CD tools that support parallel execution should be leveraged. ... Not every code change necessitates a complete rebuild of the entire application. Organizations can significantly speed up the pipeline while conserving resources by implementing incremental builds and tests, which only recompile and retest the modified portions of the codebase.
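
A minimal sketch of the circuit-breaker idea in application code, with illustrative thresholds; in practice teams usually get this from a service mesh or a resilience library rather than hand-rolling it.

```python
import time

class CircuitBreaker:
    """Stop calling a failing dependency for a cool-down period so one
    unhealthy service does not cascade into the callers above it."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: skipping call to unhealthy service")
            self.opened_at = None  # half-open: allow a trial request
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

Wrapping an outbound call in breaker.call(...) means that after three consecutive failures the caller fails fast for 30 seconds instead of piling up timed-out requests, which is the fault-isolation behavior described above.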


Copilots and low-code apps are creating a new 'vast attack surface' - 4 ways to fix that

"In traditional application development, apps are carefully built throughout the software development lifecycle, where each app is continuously planned, designed, implemented, measured, and analyzed," they explain. "In modern business application development, however, no such checks and balances exists and a new form of shadow IT emerges." Within the range of copilot solutions, "anyone can build and access powerful business apps and copilots that access, transfer, and store sensitive data and contribute to critical business operations with just a couple clicks of the mouse or use of natural language text prompts," the study cautions. "The velocity and magnitude of this new wave of application development creates a new and vast attack surface." Many enterprises encouraging copilot and low-code development are "not fully embracing that they need to contextualize and understand not only how many apps and copilots are being built, but also the business context such as what data the app interacts with, who it is intended for, and what business function it is meant to accomplish."



Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki