
Daily Tech Digest - May 03, 2025


Quote for the day:

"It is during our darkest moments that we must focus to see the light." -- Aristotle Onassis



Why agentic AI is the next wave of innovation

AI agents have become integral to modern enterprises, not just enhancing productivity and efficiency, but unlocking new levels of value through intelligent decision-making and personalized experiences. The latest trends indicate a significant shift towards proactive AI agents that anticipate user needs and act autonomously. These agents are increasingly equipped with hyper-personalization capabilities, tailoring interactions based on individual preferences and behaviors. ... According to NVIDIA, when Azure AI Agent Service is paired with NVIDIA AgentIQ, an open-source toolkit, developers can now profile and optimize teams of AI agents in real time to reduce latency, improve accuracy, and drive down compute costs. ... “The launch of NVIDIA NIM microservices in Azure AI Foundry offers a secure and efficient way for Epic to deploy open-source generative AI models that improve patient care, boost clinician and operational efficiency, and uncover new insights to drive medical innovation,” says Drew McCombs, vice president, cloud and analytics at Epic. “In collaboration with UW Health and UC San Diego Health, we’re also researching methods to evaluate clinical summaries with these advanced models. Together, we’re using the latest AI technology in ways that truly improve the lives of clinicians and patients.”


Businesses intensify efforts to secure data in cloud computing

Building a robust security strategy begins with understanding the delineation between the customer's and the provider's responsibilities. Customers are typically charged with securing network controls, identity and access management, data, and applications within the cloud, while the CSP maintains the core infrastructure. The specifics of these responsibilities depend on the service model and provider in question. The importance of effective cloud security has grown as more organisations shift away from traditional on-premises infrastructure. This shift brings new regulatory expectations relating to data governance and compliance. Hybrid and multicloud environments offer businesses unprecedented flexibility, but also introduce complexity, increasing the challenge of preventing unauthorised access. ... Attackers are adjusting their tactics accordingly, viewing cloud environments as potentially vulnerable targets. A well-considered cloud security plan is regarded as essential for reducing breaches or damage, improving compliance, and enhancing customer trust, even if it cannot eliminate all risks. According to the statement, "A well-thought-out cloud security plan can significantly reduce the likelihood of breaches or damage, enhance compliance, and increase customer trust—even though it can never completely prevent attacks and vulnerabilities."


Safeguarding the Foundations of Enterprise GenAI

Implementing strong identity security measures is essential to mitigate risks and protect the integrity of GenAI applications. Many identities have high levels of access to critical infrastructure and, if compromised, could provide attackers with multiple entry points. It is important to emphasise that privileged users include not just IT and cloud teams but also business users, data scientists, developers and DevOps engineers. A compromised developer identity, for instance, could grant access to sensitive code, cloud functions, and enterprise data. Additionally, the GenAI backbone relies heavily on machine identities to manage resources and enforce security. As machine identities often outnumber human ones, securing them is crucial. Adopting a Zero Trust approach is vital, extending security controls beyond basic authentication and role-based access to minimise potential attack surfaces. To enhance identity security across all types of identities, several key controls should be implemented. Enforcing strong adaptive multi-factor authentication (MFA) for all user access is essential to prevent unauthorised entry. Securing access to credentials, keys, certificates, and secrets—whether used by humans, backend applications, or scripts—requires auditing their use, rotating them regularly, and ensuring that API keys or tokens that cannot be automatically rotated are not permanently assigned.
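To make one of these controls concrete, here is a minimal sketch of an audit check that flags secrets overdue for rotation, including non-rotatable keys that have been left in place too long. The `Secret` structure, the owners, and the age thresholds are illustrative assumptions for this example, not details from the article.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Secret:
    name: str
    owner: str            # human, application, or script identity
    created_at: datetime
    rotatable: bool       # can this credential be rotated automatically?
    max_age_days: int     # rotation policy for this class of secret

def flag_stale_secrets(secrets: list[Secret]) -> list[str]:
    """Return audit findings for secrets that violate the rotation policy."""
    now = datetime.now(timezone.utc)
    findings = []
    for s in secrets:
        age = now - s.created_at
        if age > timedelta(days=s.max_age_days):
            overdue = (age - timedelta(days=s.max_age_days)).days
            if s.rotatable:
                findings.append(f"{s.name} ({s.owner}): overdue for rotation by {overdue} days")
            else:
                # Non-rotatable keys/tokens should be short-lived, not permanently assigned
                findings.append(f"{s.name} ({s.owner}): non-rotatable credential has exceeded its allowed lifetime")
    return findings

if __name__ == "__main__":
    inventory = [
        Secret("ci-deploy-token", "pipeline", datetime(2024, 1, 1, tzinfo=timezone.utc), True, 90),
        Secret("legacy-api-key", "batch-script", datetime(2023, 6, 1, tzinfo=timezone.utc), False, 30),
    ]
    for finding in flag_stale_secrets(inventory):
        print(finding)
```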


The new frontier of API governance: Ensuring alignment, security, and efficiency through decentralization

To effectively govern APIs in a decentralized landscape, organizations must embrace new principles that foster collaboration, flexibility and shared responsibility. Optimized API governance is not about abandoning control but about distributing it strategically while still maintaining overarching standards and ensuring critical aspects such as security, compliance and quality. This includes granting development teams the autonomy to design, develop and manage their APIs within clearly defined boundaries and guidelines, which encourages innovation, fosters ownership and allows each team to optimize their APIs for their specific needs. This can be further established by a shared responsibility model in which teams are accountable for adhering to governance policies while a central governing body provides the overarching framework, guidelines and support. The operating model can be further supported by cultivating a culture of collaboration and communication between central governance teams and development teams: the central governance team can include a representative from each development team and maintain clear channels for feedback, shared documentation and joint problem-solving. Implementing governance policies as code, supported by tools and automation, makes it easier to enforce standards consistently and efficiently across the decentralized environment.
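As a sketch of the "governance as code" idea, the check below runs a centrally maintained baseline of rules against a parsed API definition while leaving design decisions to the owning team. The rule set, the `x-owner` field, and the spec format are assumptions made for the example, not a specific tool from the article.

```python
from typing import Callable, Optional

# Each rule returns a violation message or None; teams can layer their own rules
# on top of the centrally maintained baseline.
Rule = Callable[[dict], Optional[str]]

def require_auth(spec: dict) -> Optional[str]:
    if not spec.get("security"):
        return "API must declare a security scheme (e.g., OAuth2 or API key)"
    return None

def require_versioned_paths(spec: dict) -> Optional[str]:
    bad = [p for p in spec.get("paths", {}) if not p.startswith("/v")]
    return f"Unversioned paths found: {bad}" if bad else None

def require_owner_tag(spec: dict) -> Optional[str]:
    if "x-owner" not in spec.get("info", {}):
        return "info.x-owner is required so the central team can reach the owning team"
    return None

BASELINE_RULES: list[Rule] = [require_auth, require_versioned_paths, require_owner_tag]

def check(spec: dict, rules: list[Rule] = BASELINE_RULES) -> list[str]:
    """Run every governance rule against a parsed API spec and collect violations."""
    return [msg for rule in rules if (msg := rule(spec)) is not None]

if __name__ == "__main__":
    example_spec = {"info": {"title": "Orders API"}, "paths": {"/orders": {}}, "security": []}
    for violation in check(example_spec):
        print("VIOLATION:", violation)
```

A check like this can run in CI for every team's repository, so the central body enforces the baseline automatically instead of reviewing each API by hand.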


Banking on innovation: Engineering excellence in regulated financial services

While financial services regulations aren’t likely to get simpler, banks are finding ways to innovate without compromising security. "We’re seeing a culture change with our security office and regulators," explains Lanham. "As cloud tech, AI, and LLMs arrive, our engineers and security colleagues have to upskill." Gartner's 2025 predictions say GenAI is shifting data security to protect unstructured data. Rather than cybersecurity taking a gatekeeper role, security by design is built into development processes. "Instead of saying 'no', the culture is: how can we be more confident in saying 'yes'?" notes Lanham. "We're seeing a big change in our security posture, while keeping our customers' safety at the forefront." As financial organizations carefully tread a path through digital and AI transformation, the most successful will balance innovation with compliance, speed with security, and standardization with flexibility. Engineering excellence in financial services needs leaders who can set a clear vision while balancing tech potential with regulations. The path won’t be simple, but by investing in simplification, standardization and a shared knowledge and security culture, financial services engineering teams can drive positive change for millions of banking customers.


‘Data security has become a trust issue, not just a tech issue’

Data is very messy and data ecosystems are very complex. Every organisation we speak to has data across multiple different types of databases and data stores for different use cases. As an industry, we need to acknowledge the fact that no organisation has an entirely homogeneous data stack, so we need to support and plug into a wide variety of data ecosystems, like Databricks, Google and Amazon, regardless of the tooling used for data analytics, for integration, for quality, for observability, for lineage and the like. ... Cloud adoption is causing organisations to rethink their traditional approach to data. Most use cloud data services to provide a shortcut to seamless data integration, efficient orchestration, accelerated data quality and effective governance. In reality, most organisations will need to adopt a hybrid approach to address their entire data landscape, which typically spans a wide variety of sources that span both cloud and on premises. ... Data security has become a trust issue, not just a tech issue. With AI, hybrid cloud and complex supply chains, the attack surface is massive. We need to design with security in mind from day one – think secure coding, data-level controls and zero-trust principles. For AI, governance is critical, and it too needs to be designed in and not an afterthought. That means tracking where data comes from, how models are trained, and ensuring transparency and fairness.


Secure by Design vs. DevSecOps: Same Security Goal, Different Paths

Although the "secure by design" initiative offers limited guidance on how to make an application secure by default, it comes closer to being a distinct set of practices than DevSecOps. The latter is more of a high-level philosophy that organizations must interpret on their own; in contrast, secure by design advocates specific practices, such as selecting software architectures that mitigate the risk of data leakage and avoiding memory management practices that increase the chances of the execution of malicious code by attackers. ... Whereas DevSecOps focuses on all stages of the software development life cycle, the secure by design concept is geared mainly toward software design. It deals less with securing applications during and after deployment. Perhaps this makes sense because so long as you start with a secure design, you need to worry less about risks once your application is fully developed — although given that there's no way to guarantee an app can't be hacked, DevSecOps' holistic approach to security is arguably the more responsible one. ... Even if you conclude that secure by design and DevSecOps mean basically the same thing, one notable difference is that the government sector has largely driven the secure by design initiative, while DevSecOps is more popular within private industry.


Immutable by Design: Reinventing Business Continuity and Disaster Recovery

Immutable backups create tamper-proof copies of data, protecting it from cyber threats, accidental deletion, and corruption. This guarantees that critical data can be quickly restored, allowing businesses to recover swiftly from disruptions. Immutable storage provides data copies that cannot be manipulated or altered, ensuring data remains secure and can quickly be recovered from an attack. In addition to immutable backup storage, response plans must be continually tested and updated to combat the evolving threat landscape and adapt to growing business needs. The ultimate test of a response plan is whether data can be quickly and easily restored or failed over, depending on the event: activating a second site in the case of a natural disaster, or recovering systems without making any ransomware payments in the case of an attack. This testing involves validating the reliability of backup systems, recovery procedures, and the overall disaster recovery plan to minimize downtime and ensure business continuity. ... It can be challenging for IT teams trying to determine the perfect fit for their ecosystem, as many storage vendors claim to provide immutable storage but are missing key features. As a rule of thumb, if "immutable" data can be overwritten by a backup or storage admin, a vendor, or an attacker, then it is not a truly immutable storage solution.
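That rule of thumb suggests a simple test: attempt a destructive operation against a backup copy with ordinary admin credentials and confirm it is refused. Below is a minimal sketch of such a check against an S3-compatible object store with object lock enabled; the bucket, key, and version names are placeholders, and the exact error code your store returns may differ.

```python
import boto3
from botocore.exceptions import ClientError

def verify_immutability(bucket: str, key: str, version_id: str) -> bool:
    """Return True if the backup object resists deletion, False if it was deletable."""
    s3 = boto3.client("s3")
    try:
        # A destructive operation that should be refused while retention or legal hold applies.
        s3.delete_object(Bucket=bucket, Key=key, VersionId=version_id)
    except ClientError as err:
        code = err.response["Error"]["Code"]
        print(f"Delete refused ({code}) -- object appears immutable")
        return True
    print("Delete succeeded -- this copy is NOT immutable")
    return False

if __name__ == "__main__":
    # Placeholder names; point this at a non-production test object.
    verify_immutability("backup-bucket", "nightly/2025-05-02.bak", "example-version-id")
```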


Neurohacks to outsmart stress and make better cybersecurity decisions

In cybersecurity, where clarity and composure are essential, particularly during a data breach or threat response, these changes can have high-stakes consequences. “The longer your brain is stuck in this high-stress state, the more of those changes you will start to see, and burnout is just an extreme case of chronic stress on the brain,” Landowski says. According to her, the tipping point between healthy stress and damaging chronic stress usually comes after about eight to 12 weeks, but it varies between individuals. “If you know about some of the things you can do to reduce the impact of stress on your body, you can potentially last a lot longer before you see any effects, whereas if you’re less resilient, or if your genes are more susceptible to stress, then it could be less.” ... Working in cybersecurity, particularly as a hacker, is often about understanding how people think and then spotting the gaps. That same shift in understanding — tuning into how the brain works under different conditions — can help cybersecurity leaders make better decisions and build more resilient teams. As Cerf highlights, he works with organizations to identify these optimal operating states, testing how individuals and entire teams respond to stress and when their brains are most effective. “The brain is not just a solid thing,” Cerf says.


Beyond Safe Models: Why AI Governance Must Tackle Unsafe Ecosystems

Despite the evident risks of unsafe deployment ecosystems, the prevailing approach to AI governance still heavily emphasizes pre-deployment interventions—such as alignment research, interpretability tools, and red teaming—aimed at ensuring that the model itself is technically sound. Governance initiatives like the EU AI Act, while vital, primarily place obligations on providers and developers to ensure compliance through documentation, transparency, and risk management plans. However, the governance of what happens after deployment, when these models enter institutions with their own incentives, infrastructures, and oversight, receives comparatively less attention. For example, while the EU AI Act introduces post-market monitoring and deployer obligations for high-risk AI systems, these provisions remain limited in scope. Monitoring primarily focuses on technical compliance and performance, with little attention to broader institutional, social, or systemic impacts. Deployer responsibilities are only weakly integrated into ongoing risk governance and focus primarily on procedural requirements—such as record-keeping and ensuring human oversight—rather than assessing whether the deploying institution has the capacity, incentives, or safeguards to use the system responsibly.

Daily Tech Digest - January 07, 2025

With o3 having reached AGI, OpenAI turns its sights toward superintelligence

One of the challenges of achieving AGI is defining it. As of yet, researchers and the broader industry do not have a concrete description of what it will be and what it will be able to do. The general consensus, though, is that AGI will possess human-level intelligence, be autonomous, have self-understanding, and will be able to “reason” and perform tasks that it was not trained to do. ... Going beyond AGI, “superintelligence” is generally understood to be AI systems that far surpass human intelligence. “With superintelligence, we can do anything else,” Altman wrote. “Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own.” He added, “this sounds like science fiction right now, and somewhat crazy to even talk about it.” However, “we’re pretty confident that in the next few years, everyone will see what we see,” he said, emphasizing the need to act “with great care” while still maximizing benefit. ... OpenAI set out to build AGI from its founding in 2015, when the concept of AGI, as Altman put it to Bloomberg, was “nonmainstream.” “We wanted to figure out how to build it and make it broadly beneficial,” he wrote in his blog post. 


Bridging the execution gap – why AI is the new frontier for corporate strategy

Imagine a future where leadership teams are not constrained by outdated processes but empowered by intelligent systems. In this world, CEOs use AI to visualise their entire organisation’s alignment, ensuring every department contributes to strategic goals. Middle managers leverage real-time insights to adapt plans dynamically, while employees understand how their work drives the company’s mission forward. Such an environment fosters resilience, innovation, and engagement. By turning strategy into a living, breathing entity, organisations can adapt to challenges and seize opportunities faster than ever before. The road to this future is not without challenges. Leaders must embrace cultural change, invest in the right technologies, and commit to continuous learning. But the rewards – a thriving, agile organisation capable of navigating the complexities of the modern business landscape – are well worth the effort. The execution gap has plagued organisations for decades, but the tools to overcome it are now within reach. AI is more than a technological advancement; it is the key to unlocking the full potential of corporate strategy. By embracing adaptability and leveraging AI’s transformative capabilities, businesses can ensure their strategies do not just survive but thrive in the face of change.


Google maps the future of AI agents: Five lessons for businesses

Google argues that AI agents represent a fundamental departure from traditional language models. While models like GPT-4o or Google’s Gemini excel at generating single-turn responses, they are limited to what they’ve learned from their training data. AI agents, by contrast, are designed to interact with external systems, learn from real-time data and execute multi-step tasks. “Knowledge [in traditional models] is limited to what is available in their training data,” the paper notes. “Agents extend this knowledge through the connection with external systems via tools.” This difference is not just theoretical. Imagine a traditional language model tasked with recommending a travel itinerary. ... At the heart of an AI agent’s capabilities is its cognitive architecture, which Google describes as a framework for reasoning, planning and decision-making. This architecture, known as the orchestration layer, allows agents to process information in cycles, incorporating new data to refine their actions and decisions. Google compares this process to a chef preparing a meal in a busy kitchen. The chef gathers ingredients, considers the customer’s preferences and adapts the recipe as needed based on feedback or ingredient availability. Similarly, an AI agent gathers data, reasons about its next steps and adjusts its actions to achieve a specific goal.
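A stripped-down sketch of that orchestration loop: the agent cycles through reasoning about what it knows, calling an external tool, and feeding the new observation back in before deciding again. The `reason` function and the weather tool are stubs standing in for an LLM call and a real external system; this illustrates the loop described above, not Google's implementation.

```python
from typing import Callable

def reason(goal: str, observations: list[str]) -> dict:
    """Stand-in for an LLM call that picks the next action given what is known so far."""
    if any("forecast" in o for o in observations):
        return {"action": "finish", "answer": "Pack light layers and an umbrella."}
    return {"action": "call_tool", "tool": "weather", "args": {"city": "Lisbon"}}

def weather_tool(city: str) -> str:
    # Stand-in for a real API call to an external system.
    return f"forecast for {city}: 21C, light rain"

TOOLS: dict[str, Callable[..., str]] = {"weather": weather_tool}

def run_agent(goal: str, max_cycles: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_cycles):                        # the orchestration loop
        decision = reason(goal, observations)
        if decision["action"] == "finish":
            return decision["answer"]
        tool = TOOLS[decision["tool"]]
        observations.append(tool(**decision["args"]))  # incorporate new data, then reason again
    return "No answer within the cycle budget."

print(run_agent("What should I pack for a trip to Lisbon?"))
```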


AI agents will change work forever. Here's how to embrace that transformation

The business world is full of orthodoxies, beliefs that no one questions because they are thought to be "just the way things are". One such orthodoxy is the phrase: "Our people are the difference". A simple Google search can attest to its popularity. Some companies use this orthodoxy as their official or unofficial tagline, a tribute to their employees that they hope sends the right message internally and externally. They hope their employees feel special and customers take this orthodoxy as proof of their human goodness. Other firms use this orthodoxy as part of their explanation of what makes their company different. It's part of their corporate story. It sounds nice, caring, and positive. The only problem is that this orthodoxy is not true. ... Another way to put this is that individual employees are not fixed assets. They do not behave the same way in all conditions. In most cases, employees are adaptable and can absorb and respond to change. The environment, conditions, and potential for relationships cause this capacity to express itself. So, on the one hand, one company's employees are the same as any other company's employees in the same industry. They move from company to company, read the same magazines, attend similar conventions, and learn the same strategies and processes.


Gen AI is transforming the cyber threat landscape by democratizing vulnerability hunting

Identifying potential vulnerabilities is one thing, but writing exploit code that works against them requires a more advanced understanding of security flaws, programming, and the defense mechanisms that exist on the targeted platforms. ... This is one area where LLMs could make a significant impact: bridging the knowledge gap between junior bug hunters and experienced exploit writers. Even generating new variations of existing exploits to bypass detection signatures in firewalls and intrusion prevention systems is a notable development, as many organizations don’t deploy available security patches immediately, instead relying on their security vendors to add detection for known exploits until their patching cycle catches up. ... “AI tools can help less experienced individuals create more sophisticated exploits and obfuscations of their payloads, which aids in bypassing security mechanisms, or providing detailed guidance for exploiting specific vulnerabilities,” Nițescu said. “This, indeed, lowers the entry barrier within the cybersecurity field. At the same time, it can also assist experienced exploit developers by suggesting improvements to existing code, identifying novel attack vectors, or even automating parts of the exploit chain. This could lead to more efficient and effective zero-day exploits.”


GDD: Generative Driven Design

The independent and unidirectional relationship between agentic platform/tool and codebase that defines the Doctor-Patient strategy is also the greatest limiting factor of this strategy, and the severity of this limitation has begun to present itself as a dead end. Two years of agentic tool use in the software development space have surfaced antipatterns that are increasingly recognizable as “bot rot” — indications of poorly applied and problematic generated code. Bot rot stems from agentic tools’ inability to account for, and interact with, the macro architectural design of a project. These tools pepper prompts with lines of context from semantically similar code snippets, which are utterly useless in conveying architecture without a high-level abstraction. Just as a chatbot can manifest a sensible paragraph in a new mystery novel but is unable to thread accurate clues as to “who did it”, isolated code generations pepper the codebase with duplicated business logic and cluttered namespaces. With each generation, bot rot reduces RAG effectiveness and increases the need for human intervention. Because bot rotted code requires a greater cognitive load to modify, developers tend to double down on agentic assistance when working with it, and in turn rapidly accelerate additional bot rotting.


Someone needs to make AI easy

Few developers did a better job of figuring out how to effectively use AI than Simon Willison. In his article “Things we learned about LLMs in 2024,” he simultaneously susses out how much happened in 2024 and why it’s confusing. For example, we’re all told to aggressively use genAI or risk falling behind, but we’re awash in AI-generated “slop” that no one really wants to read. He also points out that LLMs, although marketed as the easy path to AI riches for all who master them, are actually “chainsaws disguised as kitchen knives.” He explains that “they look deceptively simple to use … but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.” If anything, this quagmire got worse in 2024. Incredibly smart people are building incredibly sophisticated systems that leave most developers incredibly frustrated by how to use them effectively.  ... Some of this stems from the inability to trust AI to deliver consistent results, but much of it derives from the fact that we keep loading developers up with AI primitives (similar to cloud primitives like storage, networking, and compute) that force them to do the heavy lifting of turning those foundational building blocks into applications.


Making the most of cryptography, now and in the future

The mathematicians and cryptographers who have worked on these NIST algorithms expect them to last a long time. Thousands of people have already tried to poke holes in them and haven’t yet made any meaningful progress toward defeating them. So, they are “probably” OK for the time being. But as much as we would like to, we cannot mathematically prove that they will never be broken. This means that commercial enterprises looking to migrate to new cryptography should be braced to change again and again — whether that is in five years, 10 years, or 50 years. ... Up until now, most cryptography has been implicit and not under the direct control of management. Putting more controls around cryptography would not only safeguard data today, but it would provide the foundation to make the next transition easier. ... Cryptography is full of single points of failure. Even if your algorithm is bulletproof, you might end up with a faulty implementation. Agility helps us move away from these single points of failure, allowing us to adapt quickly if an algorithm is compromised. It is therefore crucial for CISOs to start thinking about agility and redundancy.
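One concrete way to build in that agility is to route every cryptographic operation through a named, configurable algorithm registry, so a compromised primitive can be retired by changing configuration rather than hunting down hard-coded calls. A minimal sketch using Python's standard library; the algorithm names and the configuration mechanism are illustrative assumptions.

```python
import hashlib
import hmac

# Central registry: adding or retiring an algorithm is a one-line change here,
# and callers never hard-code a primitive.
MAC_ALGORITHMS = {
    "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).hexdigest(),
    "hmac-sha3-256": lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).hexdigest(),
}

# In practice this would come from configuration or a key-management service.
ACTIVE_MAC = "hmac-sha256"

def protect(key: bytes, message: bytes, algorithm: str = ACTIVE_MAC) -> tuple[str, str]:
    """Return (algorithm, tag) so stored data records which primitive produced it."""
    return algorithm, MAC_ALGORITHMS[algorithm](key, message)

def verify(key: bytes, message: bytes, algorithm: str, tag: str) -> bool:
    expected = MAC_ALGORITHMS[algorithm](key, message)
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    algo, tag = protect(b"secret-key", b"wire transfer: 100 EUR")
    print(algo, verify(b"secret-key", b"wire transfer: 100 EUR", algo, tag))
```

Because every stored tag carries the algorithm name that produced it, old data remains verifiable during a migration while new data is protected with the replacement primitive.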


Data 2025 outlook: AI drives a renaissance of data

Though not all the technology building blocks are in place, many already are. Using AI to crawl and enrich metadata? Automatically generating data pipelines? Using regression analysis to flag data and model drift? Using entity extraction to flag personally identifiable information or summarize the content of structured or unstructured data? Applying machine learning to automate data quality resolution and data classification? Applying knowledge graphs to RAG? You get the idea. There are a few technology gaps that we expect will be addressed in 2025, including automating the correlation between data and model lineage, assessing the utility and provenance of unstructured data, and simplifying generation of vector embeddings. We expect in the coming year that bridging data file and model lineage will become commonplace with AI governance tools and services. And we’ll likely look to emerging approaches such as data observability to transform data quality practices from reactive to proactive. Let’s start with governance. In the data world, this is hardly a new discipline. Though data governance over the years has drawn more lip service than practice, for structured data, the underlying technologies for managing data quality, privacy, security and compliance are arguably more established than for AI.
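As a small illustration of the kind of automation mentioned above, the sketch below flags likely personally identifiable information in free text. A production system would use trained entity-extraction models rather than regular expressions; the patterns shown here are illustrative examples only.

```python
import re

# Illustrative patterns only; real entity extraction would rely on an NER model.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{8,}\d\b"),
}

def flag_pii(text: str) -> dict[str, list[str]]:
    """Return matches per PII category so records can be routed for governance review."""
    findings = {}
    for name, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[name] = matches
    return findings

if __name__ == "__main__":
    sample = "Contact jane.doe@example.com or call +1 415 555 0199 about claim 123-45-6789."
    print(flag_pii(sample))
```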


Beware the Rise of the Autonomous Cyber Attacker

Research has already shown that teams of AIs working together can find and exploit zero-day vulnerabilities. A team at the University of Illinois Urbana-Champaign created a “task force” of AI agents that worked as a supervised unit and effectively exploited vulnerabilities they had no prior knowledge of. In a recent report, OpenAI also cited three threat actors that used ChatGPT to discover vulnerabilities, research targets, write and debug malware, and set up command-and-control infrastructure. The company said the activity offered these groups “limited, incremental (new) capabilities” to carry out malicious cyber tasks. ... “Darker” AI use has, in part, prompted many of today’s top thinkers to support regulations. This year, OpenAI CEO Sam Altman said: “I’m not interested in the killer robots walking on the street … things going wrong. I’m much more interested in the very subtle societal misalignments, where we just have these systems out in society and through no particular ill intention, things go horribly wrong.” ... Theoretically, regulation may reduce unintended or dangerous use among legitimate users, but I’m certain that the criminal economy will appropriate this technology. As CISOs deploy AI more broadly, attackers’ abilities will concurrently soar.



Quote for the day:

"Leadership is a dynamic process that expresses our skill, our aspirations, and our essence as human beings." -- Catherine Robinson-Walker

Daily Tech Digest - November 10, 2024

Technical Debt: An enterprise’s self-inflicted cyber risk

Technical debt issues vary in risk level depending on the scope and blast radius of the issue. Unaddressed high-risk technical debt issues create inefficiency and security exposure while diminishing network reliability and performance. There’s the obvious financial risk that comes from wasted time, inefficiencies, and maintenance costs. Adding tools potentially introduces new vulnerabilities, increasing the attack surface for cyber threats. A lot of the literature around technical debt focuses on obsolete technology on desktops. While this does present some risk, desktops have a limited blast radius when compromised. Outdated hardware and unattended software vulnerabilities within network infrastructure pose a much more imminent and severe risk as they serve as a convenient entry point for malicious actors with a much wider potential reach. An unpatched or end-of-life router, switch, or firewall, riddled with documented vulnerabilities, creates a clear path to infiltrating the network. By methodically addressing technical debt, enterprises can significantly mitigate cyber risks, enhance operational preparedness, and minimize unforeseen infrastructure disruptions. 


Why Your AI Will Never Take Off Without Better Data Accessibility

Data management and security challenges cast a long shadow over efforts to modernize infrastructures in support of AI and cloud strategies. The survey results reveal that while CIOs prioritize streamlining business processes through cloud infrastructures, improving data security and business resilience is a close second. Security is a persistent challenge for companies managing large volumes of file data and it continues to complicate efforts to enhance data accessibility. Nasuni’s research highlights that 49% of firms (rising to 54% in the UK) view security as their biggest problem when managing file data infrastructures. This issue ranks ahead of concerns such as rapid recovery from cyberattacks and ensuring data compliance. As companies attempt to move their file data to the cloud, security is again the primary obstacle, with 45% of all respondents—and 55% in the DACH region—citing it as the leading barrier, far outstripping concerns over cost control, upskilling employees and data migration challenges. These security concerns are not just theoretical. Over half of the companies surveyed admitted that they had experienced a cyber incident from which they struggled to recover. Alarmingly, only one in five said they managed to recover from such incidents easily. 


Exploring DORA: How to manage ICT incidents and minimize cyber threat risks

The SOC must be able to quickly detect and manage ICT incidents. This involves proactive, around-the-clock monitoring of IT infrastructure to identify anomalies and potential threats early on. Security teams can employ advanced tools such as security automation, orchestration and response (SOAR), extended detection and response (XDR), and security information and event management (SIEM) systems, as well as threat analysis platforms, to accomplish this. Through this monitoring, incidents can be identified before they escalate and cause greater damage. ... DORA introduces a harmonized reporting system for serious ICT incidents and significant cyber threats. The aim of this reporting system is to ensure that relevant information is quickly communicated to all responsible authorities, enabling them to assess the impact of an incident on the company and the financial market in a timely manner and respond accordingly. ... One of the tasks of SOC analysts is to ensure effective communication with relevant stakeholders, such as senior management, specialized departments and responsible authorities. This also includes the creation and submission of the necessary DORA reports.


What is Cyber Resilience? Insurance, Recovery, and Layered Defenses

While cyber insurance can provide financial protection against the fallout of ransomware, it’s important to understand that it’s not a silver bullet. Insurance alone won’t save your business from downtime, data loss, or reputation damage. As we’ve seen with other types of insurance, such as property or health insurance, simply holding a policy doesn’t mean you’re immune to risks. While cyber insurance is designed to mitigate financial risks, insurers are becoming increasingly discerning, often requiring businesses to demonstrate adequate cybersecurity controls before providing coverage. Gone are the days when businesses could simply “purchase” cyber insurance without robust cyber hygiene in place. Today’s insurers require businesses to have key controls such as multi-factor authentication (MFA), incident response plans, and regular vulnerability assessments. Moreover, insurance alone doesn’t address the critical issue of data recovery. While an insurance payout can help with financial recovery, it can’t restore lost data or rebuild your reputation. This is where a comprehensive cybersecurity strategy comes in — one that encompasses both proactive and reactive measures, involving components like third-party data recovery software.


Integrating Legacy Systems with Modern Data Solutions

Many legacy systems were not designed to share data across platforms or departments, leading to the creation of data silos. Critical information gets trapped in isolated systems, preventing a holistic view of the organization’s data and hindering comprehensive analysis and decision-making. ... Modern solutions are designed to scale dynamically, whether it’s accommodating more users, handling larger datasets, or managing more complex computations. In contrast, legacy systems are often constrained by outdated infrastructure, making it difficult to scale operations efficiently. Addressing this requires refactoring old code and updating the system architecture to manage accumulated technical debt. ... Older systems typically lack the robust security features of modern solutions, making them more vulnerable to cyber-attacks. Integrating these systems without upgrading security protocols can expose sensitive data to threats. Ensuring robust security measures during integration is critical to protect data integrity and privacy. ... Maintaining legacy systems can be costly due to outdated hardware, limited vendor support, and the need for specialized expertise. Integrating them with modern solutions can add to this complexity and expense. 


The challenges of hybrid IT in the age of cloud repatriation

The story of cloud repatriation is often one of regaining operational control. A recent report found that 25% of organizations surveyed are already moving some cloud workloads back on-premises. Repatriation offers an opportunity to address issues such as rising costs, data privacy concerns, and security. Depending on their circumstances, managing IT resources internally can allow some organizations to customize their infrastructure to meet these specific needs while providing direct oversight over performance and security. With rising regulations surrounding data privacy and protection, enhanced control over on-prem data storage and management provides significant advantages by simplifying compliance efforts. ... However, cloud repatriation can often create challenges of its own. The costs associated with moving services back on-prem can be significant: new hardware, increased maintenance, and energy expenses should all be factored in. Yet, for some, the financial trade-off for repatriation is worth it, especially if cloud expenses become unsustainable or if significant savings can be achieved by managing resources partially on-prem. Cloud repatriation is a calculated risk that, if done for the right reasons and executed successfully, can lead to efficiency and peace of mind for many companies.


IT Cost Reduction Strategies: 3 Unexpected Ways Enterprise Architecture Can Help

This is easier said than done with the traditional process of manual follow-ups, hampered by inconsistent documentation that is often scattered across many teams. The issue with documentation also often means that maintenance efforts are duplicated, wasting resources that could have been better deployed elsewhere. The result is the equivalent of around 3 hours of a dedicated employee’s focus per application per year spent on documentation, governance, and maintenance. Not so for the organization that has a digital-native EA platform that leverages your data to enable scalability and automation in workflows and messaging so you can reach out to the most relevant people in your organization when it's most needed. Features like these can save an immense amount of time otherwise spent identifying the right people to talk to and when to reach out to them, making a company's Enterprise Architecture the single source of truth and a solid foundation for effective governance. The result is a reduction of approximately a third of the time usually needed to achieve this. That valuable time can then be reallocated toward other, more strategic work within the organization. We have seen that a mid-sized company can save approximately $70,000 annually by reducing its documentation and governance time.


How Rules Can Foster Creativity: The Design System of Reykjavík

Design systems have already gained significant traction, but many are still in their early stages, lacking atomic design structures. While this approach may seem daunting at first, as more designers and developers grow accustomed to working systematically, I believe atomic design will become the norm. Today, most teams create their own design systems, but I foresee a shift toward subscription-based or open-source systems that can be customized at the atomic level. We already see this with systems like Google’s Material UI, IBM’s Carbon, Shopify’s Polaris, and Atlassian’s design system. Adopting a pre-built, well-supported design system makes sense for many organizations. Custom systems are expensive and time-consuming to build, and maintaining them requires ongoing resources, as we learned in Reykjavík. By leveraging a tried-and-tested design system, teams can focus on customization rather than starting from scratch. Contrary to popular belief, this shift won’t stifle creativity. For public services, there is little need for extreme creativity regarding core functionality - these products simply need to work as expected. AI will also play a significant role in evolving design systems.


Eyes on Data: A Data Governance Study Bridging Industry and Academia

The researcher, Tony Mazzarella, is a seasoned data management professional and has extensive experience in data governance within large organizations. His professional and research observations have identified key motivations for this work: Data Governance has a knowledge problem. Existing literature and publications are overly theoretical and lack empirical guidance on practical implementation. The conceptual and practical entanglement of governance and management concepts and activities exacerbates this issue, leading to divergent definitions and perceptions that data governance is overly theoretical. The “people” challenges in data management are often overlooked. Culture is core to data governance, but its institutionalization as a business function coincided first in the financial services industry with a shift towards regulatory compliance in response to the 2008 financial crisis. “Data culture” has re-emerged in all industries, but it implies the governance function is tasked with fostering culture change rather than emphasizing that data governance requires a culture change, which is a management challenge. Data Management’s industry-driven nature and reactive ethos result in unnecessary change as the macroenvironment changes, undermining process resilience and sustainability.


The future of data center maintenance

Condition-based maintenance and advanced monitoring services provide operators with more information about the condition and behavior of assets within the system, including insights into how environmental factors, controls, and usage drive service needs. The ability to recommend actions for preventing downtime and extending asset life allows a focus on high-impact items instead of tasks that don't immediately affect asset reliability or lifespan. These items include lifecycle parts replacement, optimizing preventive maintenance schedules, managing parts inventories, and optimizing control logic. The effectiveness of a service visit can subsequently be validated as the actions taken are reflected in asset health analyses. ... Condition-based maintenance and advanced monitoring services include a customer portal for efficient equipment health reporting. Detailed dashboards display site health scores, critical events, and degradation patterns. ... The future of data center maintenance is here – smarter, more efficient, and more reliable than ever. With condition-based maintenance and advanced monitoring services, data centers can anticipate risks and benchmark assets, leading to improved risk management and enhanced availability.



Quote for the day:

"It's not about how smart you are--it's about capturing minds." -- Richie Norton

Daily Tech Digest - November 08, 2024

Improve Microservices With These New Load Balancing Strategies

Load balancing in a microservices setup is tricky yet crucial because it directly influences system availability and performance. To ensure that no single instance gets overloaded with user requests and to maintain operation even when one instance experiences issues, it is vital to distribute end-user requests among various service instances. This involves utilizing service discovery to pinpoint available instances, dynamic load balancing to adjust to load changes, and fault-tolerant health checks for monitoring and redirecting traffic away from malfunctioning instances to maintain system stability. These tactics work together to guarantee a solid and efficient microservices setup. ... With distributed caching, intelligent load balancing, and event-driven system designs, microservices outperform today’s monolithic architectures in performance, scalability, and resilience. Microservices also use resources more efficiently and respond faster, since individual components can be scaled as needed. However, one must remember that these performance improvements come with higher complexity: implementation is an involved process that needs to be monitored and optimized repeatedly.
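To make those tactics concrete, here is a minimal sketch of round-robin load balancing with health checks over a set of discovered service instances. The instance list and the health probe are stand-ins for a real service registry and an HTTP health endpoint; the failing port is simulated for illustration.

```python
import itertools
from dataclasses import dataclass

@dataclass
class Instance:
    address: str
    healthy: bool = True   # updated by a background health-check probe

class LoadBalancer:
    def __init__(self, instances: list[Instance]):
        self.instances = instances
        self._cursor = itertools.cycle(instances)   # round-robin order

    def probe(self) -> None:
        """Stand-in for periodic health checks against each instance."""
        for inst in self.instances:
            inst.healthy = not inst.address.endswith(":9002")  # pretend 9002 failed its check

    def pick(self) -> Instance:
        """Round-robin over instances, skipping any that failed their health check."""
        for _ in range(len(self.instances)):
            candidate = next(self._cursor)
            if candidate.healthy:
                return candidate
        raise RuntimeError("No healthy instances available")

if __name__ == "__main__":
    lb = LoadBalancer([Instance("orders:9001"), Instance("orders:9002"), Instance("orders:9003")])
    lb.probe()
    for _ in range(4):
        print("routing request to", lb.pick().address)
```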


Achieving Net Zero: The Role Of Sustainable Design In Tech Sector

With an increasing focus on radical climate actions, environmentally responsible product design emerges as a vital tactic for achieving net zero. According to the latest research, more than two-thirds of organisations have reduced their carbon emissions as a result of implementing sustainable product design strategies. ... For businesses seeking to enhance sustainability, it is essential to adopt a holistic approach. This means not only focusing on specific products but also examining the entire life cycle from design and packaging to end of life. It is crucial for all tech businesses to consider how sustainability can be maintained even after products and services have been purchased. Thus, enhancing product repairability is another key tactic to boost sustainability. Given that electronic waste contributes to 70% of all toxic waste and only about 12% of all e-waste is recycled properly right now, any action individual consumers can take to repair or recycle their old tech responsibly is a step toward a cleaner future. By integrating design features such as keyboard-free battery connectors and providing instructional repair videos, companies can make it easier for customers to repair their products, extending their lifespan and ultimately reducing waste.


How to Maximize DevOps Efficiency with Platform Engineering

Platform engineering can also go awry when the solutions an organization offers are difficult to deploy. In theory, deploying a solution should be as simple as clicking a button or deploying a script. But buggy deployment tools, as well as issues related to inconsistent software environments, might mean that DevOps engineers have to spend time debugging and fixing flawed platform engineering offerings — or ask the IT team to do it. In that case, a solution that was supposed to save time and simplify collaboration ends up doing the opposite. Along similar lines, platform engineering delivers little value when the solutions don't consistently align with the organization's governance and security policies. This tends to be an issue in cases where different teams implement different solutions and each team follows its own policies, instead of adhering to organization-wide rules. (It can also happen because the organization simply lacks clear and consistent security policies.) If the environments and toolchains that DevOps teams launch through platform engineering are insecure or inconsistently configured, they hamper collaboration and fail to streamline software delivery processes.


How banks can supercharge technology speed and productivity

Banks that want to increase technology productivity typically must change how engineering and business teams work together. Getting from an idea for a new customer feature to the start of coding has historically taken three to six months. First, business and product teams write a business case, secure funding, get leadership buy-in, and write requirements. Most engineers are fast at producing code once the requirements are clear, but when they must wait six months before they even write the first line, productivity stalls. Taking a page from digital-native companies, a number of top-performing banks have created joint teams of product managers and engineers. Each integrated team operates as a mini-business, with product managers functioning as mini-CEOs who help their teams work together toward quarterly objectives and key results (OKRs). With everyone collaborating in this manner, there is less need for time-consuming handoff tasks such as creating formal requirements and change requests. This way of working also unlocks greater product development speed and enables much greater responsiveness to customer needs. While most financial institutions already manage their digital and mobile teams in this product-centric way, many still use a traditional project-centric approach for the majority of their teams.


Choosing AI: the 7 categories cybersecurity decision-makers need to understand

As cybersecurity professionals, we want to avoid the missteps of the last era of digital innovation, in which large companies developed web architecture and product stacks that dramatically centralized the apparatus of function across most sectors of the global economy. The era of online platforms underwritten by just a few interlinked developer and technology infrastructure firms showed us that centralized innovation often restricts the potential for personalization for end users, which limits the benefits. ... It’s true that a CISO might want AI systems that reduce options and make their practice easier, so long as the outputs being used are trustworthy. But if the current state of development is sufficient that we should be wary of analytic products, it’s also enough for us to be downright distrustful of products that generate, extrapolate preferences, or find consensus. At present, these product styles are promising but entirely insufficient to mitigate the risks involved in adopting such unproven technology. By contrast, CISOs should think seriously about adopting AI systems that facilitate information exchange and understanding, and even about those that play a direct role in executing decisions. 


How GraphRAG Enhances LLM Accuracy and Powers Better Decision-Making

GraphRAG’s key benefit is its remarkable ability to improve LLMs’ accuracy and long-term reasoning capabilities. This is crucial because more accurate LLMs can automate increasingly complex and nuanced tasks and provide insights that fuel better decision-making. Additionally, higher-performing LLMs can be applied to a broader range of use cases, including those within sensitive industries that require a very high level of accuracy, such as healthcare and finance. That being said, human oversight is necessary as GraphRAG progresses. It’s vital that each answer or piece of information the technology produces is verifiable, and its reasoning can be traced back manually through the graph if necessary. In today’s world, success hinges on an enterprise’s ability to understand and properly leverage its data. But most organizations are swimming in hundreds of thousands of tables of data with little insight into what’s actually going on. This can lead to poor decision-making and technical debt if not addressed. Knowledge graphs are critical for helping enterprises make sense of their data, and when combined with RAG, the possibilities are endless. GraphRAG is propelling the next wave of generative AI, and organizations that understand this will be at the forefront of innovation.
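A toy sketch of the GraphRAG idea: instead of retrieving isolated records, the system collects an entity's neighborhood in a knowledge graph, so the context handed to the LLM carries relationships that a human reviewer could also trace back through the graph. The in-memory graph and prompt assembly are illustrative; a real system would use a graph database and an actual LLM call.

```python
# Tiny in-memory knowledge graph: entity -> list of (relation, entity) edges.
GRAPH = {
    "Acme Corp": [("subsidiary_of", "Globex Holdings"), ("supplies", "Initech")],
    "Globex Holdings": [("headquartered_in", "Dublin")],
    "Initech": [("audited_by", "Example LLP")],
}

def neighborhood(entity: str, depth: int = 2) -> list[str]:
    """Collect facts reachable from an entity so the reasoning path stays traceable."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for relation, target in GRAPH.get(node, []):
                facts.append(f"{node} --{relation}--> {target}")
                next_frontier.append(target)
        frontier = next_frontier
    return facts

def build_prompt(question: str, entity: str) -> str:
    context = "\n".join(neighborhood(entity))
    return f"Answer using only these verifiable facts:\n{context}\n\nQuestion: {question}"

print(build_prompt("Who ultimately sits behind Acme Corp's supply chain?", "Acme Corp"))
```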


Why Banks Should Rethink ‘Every Company is a Software Company’

Refocusing on core strengths can yield substantial benefits. For example, by enhancing customer experience through personalized financial advice, banks can deepen customer loyalty and foster long-term relationships. Improving risk assessment processes can lead to more accurate lending decisions and better management of financial exposures. Ensuring rigorous regulatory compliance is not only crucial for avoiding costly penalties but also for preserving a strong reputation in the market. Outsourcing software and AI development to specialized providers is a strategic opportunity that can offer significant benefits. By partnering with technology firms, banks can tap into cutting-edge advancements without bearing the heavy burden of developing and maintaining them themselves. ... AI is a powerful ally, enabling financial institutions to streamline operations, innovate faster, and stay ahead in an ever-evolving market. To achieve sustainable success, however, these institutions need to rethink their approach to software and AI investments. By focusing on core competencies and leveraging specialized providers for technological needs, these institutions can optimize their operations and achieve the results they’re looking for.


Steps Organizations Can Take to Improve Cyber Resilience

Protecting endpoints will become increasingly important as more internet-enabled devices – like laptops, smartphones, IoT hardware, tablets, etc. – hit the market. Endpoint protection is also essential for companies that embrace remote or hybrid work. By securing every possible endpoint, organizations address a common attack plane for cyberattackers. One of the fastest paths to endpoint protection is to invest in purpose-built solutions that go beyond basic antivirus software. To get ahead of cybersecurity threats, teams need real-time monitoring and threat detection capabilities. ... Cybersecurity teams should implement DNS filtering to prevent users from accessing websites that are known for hosting malicious activity. Technology solutions specifically designed for DNS filtering can also evaluate requests in real time between devices and websites before determining whether to allow the connection. Additionally, they can evaluate overall traffic patterns and user behaviors, helping IT leaders make more informed decisions about how to boost web security practices across the organization. ... Achieving cyber resilience is an ongoing process. The digital landscape changes constantly, and the best way to keep up is to make cybersecurity a focal point of everyday operations. 
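A minimal sketch of the DNS filtering step described above: evaluate each lookup against a blocklist, plus a simple heuristic, before allowing the connection. The blocklist entries and the tunnelling heuristic are placeholders standing in for a real threat-intelligence feed and traffic-pattern analysis.

```python
# Placeholder blocklist; in practice this would be fed by a threat-intelligence service.
BLOCKED_DOMAINS = {"malware-delivery.example", "phishing-login.example"}

def evaluate_lookup(domain: str) -> str:
    """Decide whether a DNS request should be resolved, refused, or flagged for review."""
    domain = domain.lower().rstrip(".")
    # Block exact matches and subdomains of known-bad domains.
    if any(domain == bad or domain.endswith("." + bad) for bad in BLOCKED_DOMAINS):
        return "refuse"
    # Crude heuristic stand-in: very long labels can indicate DNS tunnelling.
    if any(len(label) > 40 for label in domain.split(".")):
        return "flag-for-review"
    return "resolve"

if __name__ == "__main__":
    for d in ["intranet.corp.example", "cdn.phishing-login.example", "a" * 50 + ".exfil.example"]:
        print(d, "->", evaluate_lookup(d))
```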


The future of super apps: Decentralisation and security in a new digital ecosystem

Decentralised super apps could redefine public utility by providing essential services without private platform fees, making them accessible and affordable. This approach would serve the public interest by enabling fairer, community-driven access to essential services. For example, a decentralised grocery delivery service might allow local vendors to reach consumers without relying on platforms like Blinkit or Zepto, potentially lowering costs and supporting local businesses. As blockchain technology progresses, decentralised finance (DeFi) can also be integrated into super apps, allowing users to manage transactions securely and privately. ... Despite the potential, the path to decentralised super apps comes with challenges. Building a secure, decentralised platform requires sophisticated blockchain infrastructure, a high level of trust, and user education. Blockchain technology is still evolving, and decentralised applications (dApps) often face issues with scalability, user adoption, and regulatory scrutiny. For instance, certain countries have strict data privacy laws that could either facilitate or hinder the adoption of decentralised super apps depending on the regulatory stance towards blockchain.


Digital Transformation in Banking: Don't Let Technology Steal Your Brand

A clear, purpose-driven brand that communicates empathy, reliability, and transparency is essential to winning and retaining customer trust. Banks that invest in branding as part of their digital transformation connect with customers on a deeper level, creating bonds that withstand market fluctuations and competitive pressures. ... The focus on digital transformation has intensified competition among banks to adopt the latest technologies. While technology is essential for operational efficiency and customer convenience, it’s not the core of a bank’s identity. A bank’s brand is built on values like trust, reliability, and customer service—values that technology should reinforce, not replace. Banks need to keep a clear sight of their purpose: to serve customers’ financial well-being, empower their dreams, and create trust in every interaction. ... It’s tempting to jump on the latest tech trends to stay competitive, but each technological investment should reflect the bank’s brand values and serve customer needs. For instance, mobile banking apps, digital wallets, and AI-based financial planning tools all present opportunities to deepen brand connections.



Quote for the day:

“The final test of a leader is that he leaves behind him in other men the conviction and the will to carry on.” -- Walter Lippmann

Daily Tech Digest - June 27, 2024

Is AI killing freelance jobs?

Work that has previously been done by humans, such as copywriting and developing code, is being replicated by AI-powered tools like ChatGPT and Copilot, leading many workers to anticipate that these tools may well swipe their jobs out from under them. And one population appears to be especially vulnerable: freelancers. ... While writing and coding roles were the most heavily affected freelance positions, they weren’t the only ones. For instance, the researchers found a 17% decrease in postings related to image creation following the release of DALL-E. Of course, the study is limited by its short-term outlook. Still, the researchers found that the trend of replacing freelancers has only increased over time. After splitting their nine months of analysis into three-month segments, each progressive segment saw further declines in the number of freelance job openings. Zhu fears that the number of freelance opportunities will not rebound. “We can’t say much about the long-term impact, but as far as what we examined, this short-term substitution effect was going deeper and deeper, and the demands didn’t come back,” Zhu says.


Can data centers keep up with AI demands?

As the cloud market has matured, leaders have started to view their IT infrastructure through the lens of ‘cloud economics.’ This means studying the cost, business impact, and resource usage of a cloud IT platform in order to collaborate across departments and determine the value of cloud investments. It can be a particularly valuable process for companies looking to introduce and optimize AI workloads, as well as reduce energy consumption. ... As the demand for these technologies continues to grow, businesses need to prioritize environmental responsibility when adopting and integrating AI into their organizations. It is essential that companies understand the impact of their technology choices and take steps to minimize their carbon footprint. Investing in knowledge around the benefits of the cloud is also crucial for companies looking to transition to sustainable technologies. Tech leaders should educate themselves and their teams about how the cloud can help them achieve their business goals while also reducing their environmental impact. As newer technologies like AI continue to grow, companies must prepare for the best ways to handle workloads. 


Building a Bulletproof Disaster Recovery Plan

A lot of companies can't effectively recover because they haven't planned their tech stack around the need for data recovery, which should be central to core technology choices. When building a plan, companies should understand the different ways that applications across an organization's infrastructure are going to fail and how to restore them. ... When developing the plan, prioritizing the key objectives and systems is crucial to ensure teams don't waste time on nonessential operations. Then, ensure that the right people understand these priorities by building out and training your incident response teams with clear roles and responsibilities. Determine who understands the infrastructure and what data needs to be prioritized. Finally, ensure they're available 24/7, complete with emergency contacts and after-hours contact information. Storage backups are a critical part of disaster recovery, but they should not be considered the entire plan. While essential for data restoration, they require meticulous planning regarding storage solutions, versioning, and the nuances of cold storage.
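
To make the prioritization step above concrete, here is a minimal Python sketch of a recovery priority map. The systems, tiers, RTO/RPO numbers and contacts are hypothetical placeholders, not figures from the article.

    # Hypothetical disaster-recovery priority map: tiers, objectives and owners.
    # All systems, numbers and contacts below are illustrative placeholders.
    from dataclasses import dataclass

    @dataclass
    class RecoveryTarget:
        system: str
        tier: int          # 1 = restore first
        rto_minutes: int   # recovery time objective
        rpo_minutes: int   # recovery point objective
        owner: str         # who is trained and reachable 24/7
        after_hours_contact: str

    TARGETS = [
        RecoveryTarget("payments-api", 1, 30, 5, "payments-oncall", "+1-555-0100"),
        RecoveryTarget("customer-db", 1, 60, 15, "dba-oncall", "+1-555-0101"),
        RecoveryTarget("internal-wiki", 3, 1440, 720, "it-helpdesk", "+1-555-0102"),
    ]

    def restoration_order(targets):
        """Restore lower tiers first, then the tightest RTO within a tier."""
        return sorted(targets, key=lambda t: (t.tier, t.rto_minutes))

    for t in restoration_order(TARGETS):
        print(f"tier {t.tier}: {t.system} (RTO {t.rto_minutes}m, owner {t.owner})")

Keeping the map in code (or in version-controlled configuration) also makes it easy to review during incident-response training, so the priorities are not rediscovered mid-outage.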


How are business leaders responding to the AI revolution?

While AI provides a potential treasure trove of possibilities, particularly when it comes to effectively using data, business leaders must tread carefully when it comes to risks around data privacy and ethical implications. While advances in generative AI have been consistently in the news, so too have the setbacks major tech companies are facing when it comes to data use. ... "Controls are critical," he said. "Data privileges may need to be extended or expanded to get the full value across ecosystems. However, this brings inherent risks of unintentional data transmission and data not being used for the purpose intended, so organisations must ensure strong controls and platforms that can highlight and visualise anomalies that may require attention." ... "Enterprises must be courageous around shutting down automation and AI models that, while showing some short-term gain, may cause commercial and reputational damage in the future if left unchecked." He warned that a current skills shortage in the area of AI might hold businesses back.


AI development on a Copilot+ PC? Not yet

Although the Copilot+ PC platform (and the associated Copilot Runtime) shows a lot of promise, the toolchain is still fragmented. As it stands, it’s hard to go from model to code to application without having to step out of your IDE. However, it’s possible to see how a future release of the AI Toolkit for Visual Studio Code can bundle the QNN ONNX runtimes, as well as make them available to use through DirectML for .NET application development. That future release needs to be sooner rather than later, as devices are already in developers’ hands. Getting AI inference onto local devices is an important step in reducing the load on Azure data centers. Yes, the current state of Arm64 AI development on Windows is disappointing, but that’s more because it’s possible to see what it could be, not because of a lack of tools. Many necessary elements are here; what’s needed is a way to bundle them to give us an end-to-end AI application development platform so we can get the most out of the hardware. For now, it might be best to stick with the Copilot Runtime and the built-in Phi-Silica model with its ready-to-use APIs.
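
For developers who want to experiment today, the fragmentation mostly shows up when targeting the NPU from ordinary code. Below is a rough sketch assuming an onnxruntime build that ships the QNN execution provider and a local ONNX model; the model path, provider options and input handling are illustrative assumptions, not a recommended toolchain.

    # Rough sketch: run a local ONNX model on the Qualcomm NPU via onnxruntime's
    # QNN execution provider, falling back to CPU if the provider is unavailable.
    # The model path and provider options are placeholders.
    import numpy as np
    import onnxruntime as ort

    providers = ort.get_available_providers()
    if "QNNExecutionProvider" in providers:
        session = ort.InferenceSession(
            "model.onnx",
            providers=[("QNNExecutionProvider", {"backend_path": "QnnHtp.dll"})],
        )
    else:
        session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

    input_meta = session.get_inputs()[0]
    # Replace dynamic dimensions with 1 so we can build a dummy input tensor.
    shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
    dummy = np.zeros(shape, dtype=np.float32)
    print(session.run(None, {input_meta.name: dummy})[0].shape)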


The Role of AI in Low- and No-Code Development

While AI is invaluable for generating code, it's also useful in your low- and no-code applications. Many low- and no-code platforms allow you to build and deploy AI-enabled applications. They hide the complexity of capabilities like natural language processing, computer vision, and AI API integrations from your app. Users expect applications to offer features like voice prompts, chatbots, and image recognition. Developing these capabilities "from scratch" takes time, even for experienced developers, so many platforms offer modules that make it easy to add them with little or no code. For example, Microsoft has low-code tools for building Power Virtual Agents (now part of its Copilot Studio) on Azure. These agents can plug into a wide variety of skills backed by Azure services and drive them using a chat interface. Low- and no-code platforms like Amazon SageMaker and Google's Teachable Machine manage tasks like preparing data, training custom machine learning (ML) models, and deploying AI applications. 


The 5 Worst Anti-Patterns in API Management

As a modern Head of Platform Engineering, you strongly believe in Infrastructure as Code (IaC). Managing and provisioning your resources in declarative configuration files is a modern and great design pattern for reducing costs and risks. Naturally, you will make this a strong foundation while designing your infrastructure. During your API journey, you will be tempted to take some shortcuts because it can be quicker in the short term to configure a component directly in the API management UI than to set up a clean IaC process. Or it might seem easier, at first, to change the production runtime configuration manually instead of deploying an updated configuration from a Git commit workflow. Of course, you can always fix it later, but deep inside, those kludges stay there forever. Or worse, your API management product fails to provide a consistent IaC user experience: some components must be configured in the UI, some parts use YAML, others use XML, and you even have proprietary configuration formats. 
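
One lightweight guard against those manual-UI kludges is a drift check in CI that compares the declared configuration in Git with what the gateway is actually running. A minimal sketch follows; the YAML layout, file names and the fetch_live_config() helper are hypothetical stand-ins for whatever your API management product actually exposes.

    # Minimal drift check: compare the declarative API config in Git with the
    # configuration the gateway is actually running. The file layout and the
    # fetch_live_config() helper are hypothetical placeholders.
    import json
    import sys
    import yaml  # pip install pyyaml

    def load_declared(path="apis/declared.yaml"):
        with open(path) as f:
            return yaml.safe_load(f)

    def fetch_live_config():
        # Placeholder: in practice this would call the gateway's management API
        # or CLI and normalize the response into the same shape as the YAML.
        with open("live_config_snapshot.json") as f:
            return json.load(f)

    declared, live = load_declared(), fetch_live_config()
    drift = {k: (declared.get(k), live.get(k))
             for k in set(declared) | set(live)
             if declared.get(k) != live.get(k)}

    if drift:
        print("Configuration drift detected:", json.dumps(drift, indent=2, default=str))
        sys.exit(1)  # fail the pipeline so the UI shortcut gets pulled back into Git
    print("Gateway matches the declared configuration.")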


Ownership and Human Involvement in Interface Design

When an interface needs to be built between two applications with different owners, without any human involvement, we have the Application Integration scenario. Application Integration is similar to IPC in some respects; for example, the asynchronous broker-based choice I would make in IPC, I would also make for Application Integration for more or less the same reasons. However, in this case, there is another reason to avoid synchronous technologies: ownership and separation of responsibilities. When you have to integrate your application with another one, there are two main facts you need to consider: a) Your knowledge of the other application and how it works is usually low or even nonexistent, and b) Your control of how the other application behaves is again low or nonexistent. The most robust approach to application integration (again, a personal opinion!) is the approach shown in Figure 3. Each of the two applications to be integrated provides a public interface. The public interface should be a contract. This contract can be a B2B agreement between the two application owners.
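
As a small illustration of treating the public interface as a contract, the sketch below validates an outbound event against an agreed JSON Schema before publishing it to a broker. The schema, queue name and choice of RabbitMQ (via pika) are illustrative assumptions, not details from the article.

    # Minimal sketch: validate an outbound integration event against the agreed
    # contract (a JSON Schema) and publish it to a message broker asynchronously.
    # The schema, queue name and broker choice are illustrative.
    import json
    import pika                      # pip install pika
    from jsonschema import validate  # pip install jsonschema

    ORDER_CREATED_CONTRACT = {
        "type": "object",
        "required": ["order_id", "amount", "currency"],
        "properties": {
            "order_id": {"type": "string"},
            "amount": {"type": "number"},
            "currency": {"type": "string"},
        },
    }

    def publish_order_created(event: dict) -> None:
        validate(instance=event, schema=ORDER_CREATED_CONTRACT)  # enforce the contract
        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = connection.channel()
        channel.queue_declare(queue="partner.order-created", durable=True)
        channel.basic_publish(exchange="",
                              routing_key="partner.order-created",
                              body=json.dumps(event).encode("utf-8"))
        connection.close()

    publish_order_created({"order_id": "A-1001", "amount": 42.5, "currency": "EUR"})

Because neither side calls the other synchronously, each application only needs to honor the contract; it does not need to know, or control, how the other behaves.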


Reports show ebbing faith in banks that ignore AI fraud threat

The ninth edition of its Global Fraud Report says businesses are worried about the rate at which digital fraud is evolving and how established fraud threats such as phishing may be amplified by generative AI. Forty-five percent of companies are worried about generative AI’s ability to create more sophisticated synthetic identities. Generative AI and machine learning are named as the leading trends in identity verification – both the engine for, and potential solution to, a veritable avalanche of fraud. IDology cites recent reports from the Association of Certified Fraud Examiners (ACFE), which say businesses worldwide lose an estimated 5 percent of their annual revenues to fraud. “Fraud is changing every year alongside growing customer expectations,” writes James Bruni, managing director of IDology, in the report’s introduction. “The ability to successfully balance fraud prevention with friction is essential for building customer loyalty and driving revenue.” “As generative AI fuels fraud and customer expectations grow, multi-layered digital identity verification is essential for successfully balancing fraud prevention with friction to drive loyalty and grow revenue.”


What IT Leaders Can Learn From Shadow IT

Despite its shady reputation, shadow IT is frequently more in tune with day-to-day business needs than many existing enterprise-deployed solutions, observes Jason Stockinger, a cyber leader at Royal Caribbean Group, where he's responsible for shoreside and shipboard cyber security. "When shadow IT surfaces, organization technology leaders should work with business leaders to ensure alignment with goals and deadlines," he advises via email. ... When assessing a shadow IT tool's potential value, it's crucial to evaluate how it might be successfully integrated into the official enterprise IT ecosystem. "This integration must prioritize the organization's ability to safely adopt and incorporate the tool without exposing itself to various risks, including those related to users, data, business, cyber, and legal compliance," Ramezanian says. "Balancing innovation with risk management is paramount for organizations to harness productivity opportunities while safeguarding their interests." IT leaders might also consider turning to their vendors for support. "Current software provider licensing may afford the opportunity to add similar functionality to official tools," Orr says.



Quote for the day:

"Ninety percent of leadership is the ability to communicate something people want." -- Dianne Feinstein

Daily Tech Digest - June 14, 2024

State Machine Thinking: A Blueprint For Reliable System Design

State machines are instrumental in defining recovery and failover mechanisms. By clearly delineating states and transitions, engineers can identify and code for scenarios where the system needs to recover from an error, failover to a backup system or restart safely. Each state can have defined recovery actions, and transitions can include logic for error handling and fallback procedures, ensuring that the system can return to a safe state after encountering an issue. My favorite phrase to advocate here is: “Even when there is no documentation, there is no scope for delusion.” ... Having neurodivergent team members can significantly enhance the process of state machine conceptualization. Neurodivergent individuals often bring unique perspectives and problem-solving approaches that are invaluable in identifying states and anticipating all possible state transitions. Their ability to think outside the box and foresee various "what-if" scenarios can make the brainstorming process more thorough and effective, leading to a more robust state machine design. This diversity in thought ensures that potential edge cases are considered early in the design phase, making the system more resilient to unexpected conditions.
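
A minimal Python sketch of the idea: states and transitions are declared up front, errors route through an explicit recovery state, and any undeclared transition fails loudly instead of leaving behavior undefined. The states and events here are illustrative, not taken from the article.

    # Minimal explicit state machine with a recovery path. States, events and
    # transitions are illustrative; the point is that every transition is declared,
    # so there is no scope for delusion about what the system does on failure.
    TRANSITIONS = {
        ("idle", "start"): "running",
        ("running", "error"): "recovering",
        ("running", "stop"): "idle",
        ("recovering", "recovered"): "running",
        ("recovering", "giveup"): "failed_safe",
    }

    class Machine:
        def __init__(self, state="idle"):
            self.state = state

        def handle(self, event: str) -> str:
            try:
                self.state = TRANSITIONS[(self.state, event)]
            except KeyError:
                # Undefined transition: fail loudly instead of drifting silently.
                raise ValueError(f"event '{event}' is not valid in state '{self.state}'")
            return self.state

    m = Machine()
    for event in ["start", "error", "recovered", "stop"]:
        print(event, "->", m.handle(event))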


How to Build a Data Stack That Actually Puts You in Charge of Your Data

Sketch a data stack architecture that delivers the capabilities you've deemed necessary for your business. Your goal here should be to determine what your ideal data stack looks like, including not just which types of tools it will contain, but also which personnel and processes will leverage those tools. As you approach this, think in a tool-agnostic way. In other words, rather than looking at vendor solutions and building a stack based on what's available, think in terms of your needs. This is important because you shouldn't let tools define what your stack looks like. Instead, you should define your ideal stack first, and then select tools that allow you to build it. ... Another critical consideration when evaluating tools is how much expertise and effort are necessary to get tools to do what you need them to do. This is important because too often, vendors make promises about their tools' capabilities, but just because a tool can theoretically do something doesn't mean it's easy to do that thing with that tool. A data discovery tool that requires you to install special plugins or write custom code to work with a legacy storage system you depend on, for example, may demand far more effort than its feature list suggests.


IT leaders go small for purpose-built AI

A small AI approach has worked for Dayforce, a human capital management software vendor, says David Lloyd, chief data and AI officer at the company. Dayforce uses AI and related technologies for several functions, with machine learning helping to match employees at client companies to career coaches. Dayforce also uses traditional machine learning to identify employees at client companies who may be thinking about leaving their jobs, so that the clients can intervene to keep them. Not only are smaller models easier to train, but they also give Dayforce a high level of control over the data they use, a critical need when dealing with employee information, Lloyd says. When looking at the risk of an employee quitting, for example, the machine learning tools developed by Dayforce look at factors such as the employee’s performance over time and the number of performance increases received. “When modeling that across your entire employee base, looking at the movement of employees, that doesn’t require generative AI, in fact, generative would fail miserably,” he says. “At that point you’re really looking at things like a recurrent neural network, where you’re looking at the history over time.”
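
As an illustration of the kind of small, sequence-based model described here (and not Dayforce's actual implementation), a recurrent network that reads an employee's history can be only a few dozen lines. The features, dimensions and data below are invented for the sketch.

    # Illustrative sketch only: an LSTM that reads a sequence of per-quarter
    # employee features and predicts attrition risk. All shapes and data are
    # made up for the example.
    import torch
    import torch.nn as nn

    class AttritionRNN(nn.Module):
        def __init__(self, n_features=4, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                 # x: (batch, timesteps, n_features)
            _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden)
            return torch.sigmoid(self.head(h_n[-1]))  # attrition probability

    model = AttritionRNN()
    # Fake batch: 8 employees, 12 quarters, 4 features (rating, raises, tenure, absences).
    history = torch.randn(8, 12, 4)
    print(model(history).shape)  # torch.Size([8, 1])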


Why businesses need ‘agility and foresight’ to stay ahead in tech

In the current IT landscape, one of the most pressing challenges is the evolving threat of cyberattacks, particularly those augmented by GenAI. As GenAI becomes more sophisticated, it introduces new complexities for cybersecurity, with cybercriminals leveraging it to create advanced attack vectors. ... Several transformative technologies are reshaping our industry and the world at large. At the forefront of these innovations is GenAI. Over the past two years, GenAI has moved from theory to practice. While GenAI fostered many creative ideas in 2023 about how it would transform business, GenAI projects are now becoming business-ready, with visible productivity gains emerging. Transformative technology also holds strong promise for cybersecurity, offering advanced capabilities for threat detection and incident response. Organisations will need to use their own data for training and fine-tuning models, conducting inference where data originates. Although there has been much discussion about zero trust within our industry, we're now seeing it evolve from a concept to a real technology. 


Who Should Run Tests? On the Future of QA

QA is a funny thing. It has meant everything from "the most senior engineer who puts the final stamp on all code" to "the guy who just sort of clicks around randomly and sees if anything breaks." I've seen QA operating at all different levels of the organization, from engineers tightly integrated with each team to an independent, almost outside organization. A basic question as we look at shifting testing left, as we put more testing responsibility with the product teams, is what the role of QA should be in this new arrangement. This can be generalized as "who should own tests?" ... If we're shifting testing left now, that doesn't mean that developers will be running tests for the first time. Rather, shifting left means giving developers access to a complete set of highly accurate tests, and instead of just guessing from their understanding of API contracts and a few unit tests that their code is working, we want developers to be truly confident that they are handing off working code before deploying it to production. It's a simple, self-evident principle that when QA finds a problem, that should be a surprise to the developers. 


Implementing passwordless in device-restricted environments

Implementing identity-based passwordless authentication in workstation-independent environments poses several unique challenges. First and foremost is the issue of interoperability and ensuring that authentication operates seamlessly across a diverse array of systems and workstations. This includes avoiding repetitive registration steps which lead to user friction and inconvenience. Another critical challenge, without the benefit of mobile devices for biometric authentication, is implementing phishing and credential theft-resistant authentication to protect against advanced threats. Cost and scalability also represent significant hurdles. Providing individual hardware tokens to each user is expensive in large-scale deployments and introduces productivity risks associated with forgotten, lost, damaged or shared security keys. Lastly, the need for user convenience and accessibility cannot be understated. Passwordless authentication must not only be secure and robust but also user-friendly and accessible to all employees, irrespective of their technical expertise. 


Modern fraud detection need not rely on PII

A fraud detection solution should also retain certain broad data about the original value, such as whether an email domain is free or corporate, whether a username contains numbers, whether a phone number is premium, etc. However, pseudo-anonymized data can still be re-identified, meaning if you know two people’s names you can tell if and how they have interacted. This means it is still too sensitive for machine learning (ML) since models can almost always be analyzed to regurgitate the values that go in. The way to deal with that is to change the relationships into features referencing patterns of behavior, e.g., the number of unique payees from an account in 24 hours, the number of usernames associated with a phone number or device, etc. These features can then be treated as fully anonymized, exported and used in model training. In fact, generally, these behavioral features are more predictive than the original values that went into them, leading to better protection as well as better privacy. Finally, a fraud detection system can make good use of third-party data that is already anonymized. 
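
A minimal pandas sketch of turning pseudo-anonymized relationships into behavioral features follows; the column names, hashed identifiers and the 24-hour window are illustrative assumptions, not details from the article.

    # Minimal sketch: derive behavioral features from pseudo-anonymized transaction
    # data. Column names, hash values and the 24-hour window are illustrative.
    import pandas as pd

    tx = pd.DataFrame({
        "account_hash": ["a1", "a1", "a1", "b7"],
        "payee_hash":   ["p9", "p3", "p9", "p2"],
        "device_hash":  ["d1", "d1", "d2", "d5"],
        "timestamp": pd.to_datetime([
            "2024-06-14 09:00", "2024-06-14 10:30",
            "2024-06-14 20:00", "2024-06-14 11:00",
        ]),
    })

    now = tx["timestamp"].max()
    recent = tx[tx["timestamp"] >= now - pd.Timedelta(hours=24)]

    features = recent.groupby("account_hash").agg(
        unique_payees_24h=("payee_hash", "nunique"),
        unique_devices_24h=("device_hash", "nunique"),
        tx_count_24h=("payee_hash", "size"),
    )
    print(features)  # counts and patterns, not identities, go into model training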


Deepfakes: Coming soon to a company near you

Deepfake scams are already happening, but the size of the problem is difficult to estimate, says Jake Williams, a faculty member at IANS Research, a cybersecurity research and advisory firm. In some cases, the scams go unreported to save the victim's reputation, and in other cases, victims of other types of scams may blame deepfakes as a convenient cover for their actions, he says. At the same time, any technological defenses against deepfakes will be cumbersome (imagine a deepfake detection tool listening in on every phone call made by employees), and they may have a limited shelf life, with AI technologies rapidly advancing. "It's hard to measure because we don't have effective detection tools, nor will we," says Williams, a former hacker at the US National Security Agency. "It's going to be difficult for us to keep track of over time." While some hackers may not yet have access to high-quality deepfake technology, faking voices or images on low-bandwidth video calls has become trivial, Williams adds. Unless your Zoom meeting is of HD or better quality, a face swap may be good enough to fool most people.


A Deep Dive Into the Economics and Tactics of Modern Ransomware Threat Actors

A common trend among threat actors is to rely on older techniques but allocate more resources and deploy them differently to achieve greater success. Several security solutions organizations have long relied on, such as multi-factor authentication, are now vulnerable to circumvention with very minimal effort. Specifically, organizations need to be aware of the MFA factors they support, such as push notifications, pin codes, FIDO keys and legacy solutions like SMS text messages. The latter is particularly concerning because SMS messaging has long been considered an insecure form of authentication, managed by third-party cellular providers and thus lying outside the control of both employees and their organizations. In addition to these technical forms of breaches, the tried-and-true method of phishing is still viable. Both white hat and black hat tools continue to be enhanced to exploit common MFA replay techniques. Like Cobalt Strike, a professional tool used by security testers and abused by threat actors to maintain persistence on compromised systems, MFA bypass and replay tools have also become more professional. 


Troubleshooting Windows with Reliability Monitor

Reliability Monitor zeroes in on and tracks a limited set of errors and changes on Windows 10 and 11 desktops (and earlier versions going back to Windows Vista), offering immediate diagnostic information to administrators and power users trying to puzzle their way through crashes, failures, hiccups, and more. ... There are many ways to get to Reliability Monitor in Windows 10 and 11. At the Windows search box, if you type reli you’ll usually see an entry that reads View reliability history pop up on the Start menu in response. Click that to open the Reliability Monitor application window. ... Knowing the source of failures can help you take action to prevent them. For example, certain critical events show APPCRASH as the Problem Event Name. This signals that some Windows app or application has experienced a failure sufficient to make it shut itself down. Such events are typically internal to an app, often requiring a fix from its developer. Thus, if I see a Microsoft Store app that I seldom or never use throwing crashes, I’ll uninstall that app so it won’t crash any more. This keeps the Reliability Index up at no functional cost.
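
The records Reliability Monitor visualizes can also be pulled programmatically, which helps when triaging more than one machine. Below is a rough sketch that shells out to PowerShell and queries the Win32_ReliabilityRecords WMI class; field names and availability may vary by Windows build, so treat it as an assumption to verify.

    # Rough sketch: pull recent reliability records (the data behind Reliability
    # Monitor) by querying the Win32_ReliabilityRecords WMI class via PowerShell.
    # Field names and availability should be verified on your Windows build.
    import json
    import subprocess

    ps = (
        "Get-CimInstance -ClassName Win32_ReliabilityRecords | "
        "Select-Object -First 20 SourceName, EventIdentifier, Message | "
        "ConvertTo-Json"
    )
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command", ps],
        capture_output=True, text=True, check=True,
    ).stdout

    records = json.loads(out) if out.strip() else []
    # ConvertTo-Json returns a single object (not a list) when only one record exists.
    for rec in records if isinstance(records, list) else [records]:
        print(rec.get("SourceName"), "-", (rec.get("Message") or "")[:80])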



Quote for the day:

"Success is a state of mind. If you want success start thinking of yourself as a sucess." -- Joyce Brothers