
Daily Tech Digest - April 24, 2025


Quote for the day:

“Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability.” -- Patrick Lencioni



Algorithm can make AI responses increasingly reliable with less computational overhead

The algorithm uses the structure by which language information is organized in the AI's large language model (LLM) to find related information. The models divide the language information in their training data into word parts. The semantic and syntactic relationships between the word parts are then arranged as connecting arrows—known in the field as vectors—in a multidimensional space. The dimensions of this space, which can number in the thousands, arise from the relationship parameters that the LLM independently identifies during training on general data. ... Relational arrows pointing in the same direction in this vector space indicate a strong correlation. The larger the angle between two vectors, the less two units of information relate to one another. The SIFT algorithm developed by ETH researchers now uses the direction of the relationship vector of the input query (prompt) to identify those pieces of information that are closely related to the question but at the same time complement each other in terms of content. ... By contrast, the most common method used to date for selecting information suitable for the answer, known as the nearest neighbor method, tends to accumulate redundant information that is widely available. The difference between the two methods becomes clear when looking at an example of a query prompt composed of several pieces of information.
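
The contrast between the two selection strategies can be illustrated with a toy sketch. This is not the ETH researchers' actual SIFT implementation; it is a minimal Python illustration, assuming the query and candidate passages are already embedded as vectors, of how rewarding relevance while penalizing overlap yields complementary rather than redundant context.

```python
import numpy as np

def cosine_sim(a, b):
    """Small angle between two vectors -> value close to 1 (strongly related)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbors(query, passages, k=3):
    """Baseline: pick the k passages pointing most in the query's direction.
    If the corpus contains near-duplicates, they all get selected."""
    scores = [cosine_sim(query, p) for p in passages]
    return sorted(range(len(passages)), key=lambda i: -scores[i])[:k]

def complementary_selection(query, passages, k=3, redundancy_penalty=0.5):
    """Greedy selection: reward relevance to the query, penalize similarity to
    passages already chosen, so the picks complement one another in content."""
    chosen, remaining = [], list(range(len(passages)))
    while remaining and len(chosen) < k:
        def score(i):
            relevance = cosine_sim(query, passages[i])
            overlap = max((cosine_sim(passages[i], passages[j]) for j in chosen),
                          default=0.0)
            return relevance - redundancy_penalty * overlap
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Tiny demo with random embeddings.
rng = np.random.default_rng(0)
query = rng.normal(size=8)
passages = [rng.normal(size=8) for _ in range(10)]
print(nearest_neighbors(query, passages), complementary_selection(query, passages))
```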


Bring Your Own Malware: ransomware innovates again

The approach taken by DragonForce and Anubis shows that cybercriminals are becoming increasingly sophisticated in the way they market their services to potential affiliates. This marketing approach, in which DragonForce positions itself as a fully-fledged service platform and Anubis offers different revenue models, reflects how ransomware operators behave like “real” companies. Recent research has also shown that some cybercriminals even hire pentesters to test their ransomware for vulnerabilities before deploying it. So it’s not just dark web sites or a division of tasks, but a real ecosystem of clear options for “consumers.” We may also see a modernization of dark web forums, which currently resemble the online platforms of the 2000s. ... Although these developments in the ransomware landscape are worrying, Secureworks researchers also offer practical advice for organizations to protect themselves. Above all, defenders must take “proactive preventive” action. Fortunately and unfortunately, this mainly involves basic measures. Fortunately, because the policies to be implemented are manageable; unfortunately, because there is still a lack of universal awareness of such security practices. In addition, organizations must develop and regularly test an incident response plan to quickly remediate ransomware activities.


Phishing attacks thrive on human behaviour, not lack of skill

Phishing draws heavily from principles of psychology and classic social engineering. Attacks often play on authority bias, prompting individuals to comply with requests from supposed authority figures, such as IT personnel, management, or established brands. Additionally, attackers exploit urgency and scarcity by sending warnings of account suspensions or missed payments, and manipulate familiarity by referencing known organisations or colleagues. Psychologs has explained that many phishing techniques bear resemblance to those used by traditional confidence tricksters. These attacks depend on inducing quick, emotionally-driven decisions that can bypass normal critical thinking defences. The sophistication of phishing is furthered by increasing use of data-driven tactics. As highlighted by TechSplicer, attackers are now gathering publicly available information from sources like LinkedIn and company websites to make their phishing attempts appear more credible and tailored to the recipient. Even experienced professionals often fall for phishing attacks, not due to a lack of intelligence, but because high workload, multitasking, or emotional pressure make it difficult to properly scrutinise every communication. 

What Steve Jobs can teach us about rebranding

Humans like to think of themselves as rational animals, but it comes as no news to marketers that we are motivated to a greater extent by emotions. Logic brings us to conclusions; emotion brings us to action. Whether we are creating a poem or a new brand name, we won’t get very far if we treat the task as an engineering exercise. True, names are formed by putting together parts, just as poems are put together with rhythmic patterns and with rhyming lines, but that totally misses what is essential to a name’s success or a poem’s success. Consider Microsoft and Apple as names. One is far more mechanical, and the other much more effective at creating the beginning of an experience. While both companies are tremendously successful, there is no question that Apple has the stronger, more emotional experience. ... Different stakeholders care about different things. Employees need inspiration; investors need confidence; customers need clarity on what’s in it for them. Break down these audiences and craft tailored messages for each group. Identifying the audience groups can be challenging. While the first layer is obvious—customers, employees, investors, and analysts—all these audiences are easy to find and message. However, what is often overlooked is the individuals in those audiences who can more positively influence the rebrand. It may be a particular journalist, or a few select employees. 


Coaching AI agents: Why your next security hire might be an algorithm

Like any new team member, AI agents need onboarding before operating at maximum efficacy. Without proper onboarding, they risk misclassifying threats, generating excessive false positives, or failing to recognize subtle attack patterns. That’s why more mature agentic AI systems will ask for access to internal documentation, historical incident logs, or chat histories so the system can study them and adapt to the organization. Historical security incidents, environmental details, and incident response playbooks serve as training material, helping it recognize threats within an organization’s unique security landscape. Alternatively, these details can help the agentic system recognize benign activity. For example, once the system knows which VPN services are allowed or which users are authorized to conduct security testing, it will know to mark some alerts related to those services or activities as benign. ... Adapting AI isn’t a one-time event; it’s an ongoing process. Like any team member, agentic AI deployments improve through experience, feedback, and continuous refinement. The first step is maintaining human-in-the-loop oversight. Like any responsible manager, security analysts must regularly review AI-generated reports, verify key findings, and refine conclusions when necessary.
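
As a concrete illustration of that last point, here is a minimal, hypothetical triage rule. The allowlists and alert fields are invented for the example and not taken from any particular product; they simply show how onboarding knowledge turns into benign classifications.

```python
# Hypothetical allowlists an agentic SOC assistant might learn during onboarding.
ALLOWED_VPN_SERVICES = {"corp-vpn.example.com", "backup-vpn.example.com"}
AUTHORIZED_PENTEST_USERS = {"redteam-svc", "j.doe"}

def triage(alert: dict) -> str:
    """Mark alerts as benign when they match known-allowed services or users;
    everything else is escalated for analyst review."""
    if alert.get("category") == "vpn" and alert.get("service") in ALLOWED_VPN_SERVICES:
        return "benign"
    if (alert.get("category") == "security-testing"
            and alert.get("user") in AUTHORIZED_PENTEST_USERS):
        return "benign"
    return "escalate"

print(triage({"category": "vpn", "service": "corp-vpn.example.com"}))  # benign
print(triage({"category": "vpn", "service": "unknown-vpn.io"}))        # escalate
```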


Cyber insurance is no longer optional, it’s a strategic necessity

Once the DPDPA fully comes into effect, it will significantly alter how companies approach data protection. Many enterprises are already making efforts to manage their exposure, but despite their best intentions, they can still fall victim to breaches. We anticipate that the implementation of DPDPA will likely lead to an increase in the uptake of cyber insurance. This is because the Act clearly outlines that companies may face penalties in the event of a data breach originating from their environment. Since cyber insurance policies often include coverage for fines and penalties, this will become an increasingly important risk-transfer tool. ... The critical question has always been: how can we accurately quantify risk exposure? Specifically, if a certain event were to occur, what would be the financial impact? Today, there are advanced tools and probabilistic models available that allow organisations to answer this question with greater precision. Scenario analyses can now be conducted to simulate potential events and estimate the resulting financial impact. This, in turn, helps enterprises determine the appropriate level of insurance coverage, making the process far more data-driven and objective. Post-incident technology also plays a crucial role in forensic analysis. When an incident occurs, the immediate focus is on containment. 
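
A scenario analysis of the kind described can be as simple as a Monte Carlo loop. The sketch below is illustrative only; the breach probability and loss distribution are placeholder assumptions, not figures from the article, but the outputs (expected annual loss and a high-percentile loss year) are the kind of numbers used to size insurance cover.

```python
import math
import random
import statistics

def simulated_annual_losses(p_breach=0.15, loss_median=2_000_000,
                            loss_sigma=1.0, trials=100_000, seed=42):
    """Each trial is one simulated year: a breach occurs with probability
    p_breach and its cost is drawn from a lognormal distribution."""
    rng = random.Random(seed)
    mu = math.log(loss_median)
    return [rng.lognormvariate(mu, loss_sigma) if rng.random() < p_breach else 0.0
            for _ in range(trials)]

losses = simulated_annual_losses()
expected = statistics.mean(losses)
p95 = sorted(losses)[int(0.95 * len(losses))]
print(f"Expected annual loss:      ${expected:,.0f}")
print(f"95th-percentile loss year: ${p95:,.0f}")
```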


Adversary-in-the-Middle Attacks Persist – Strategies to Lessen the Impact

One of the most recent examples of an AiTM attack is the attack on Microsoft 365 with the PhaaS toolkit Rockstar 2FA, an updated version of the DadSec/Phoenix kit. In 2024, a Microsoft employee accessed an attachment that led them to a phony website where they authenticated the attacker’s identity through the link. In this instance, the employee was tricked into performing an identity verification session, which granted the attacker entry to their account. ... As more businesses move online, from banks to critical services, fraudsters are more tempted by new targets. The challenges often depend on location and sector, but one thing is clear: Fraud operates without limitations. In the United States, AiTM fraud is progressively targeting financial services, e-commerce and iGaming. For financial services, this means that cybercriminals are intercepting transactions or altering payment details, inducing hefty losses. Concerning e-commerce and marketplaces, attackers are exploiting vulnerabilities to intercept and modify transactions through data manipulation, redirecting payments to their accounts. ... As technology advances and fraud continues to evolve with it, we face the persistent challenge of increased fraudster sophistication, threatening businesses of all sizes. 


From legacy to lakehouse: Centralizing insurance data with Delta Lake

Centralizing data and creating a Delta Lakehouse architecture significantly enhances AI model training and performance, yielding more accurate insights and predictive capabilities. The time-travel functionality of the delta format enables AI systems to access historical data versions for training and testing purposes. A critical consideration emerges regarding enterprise AI platform implementation. Modern AI models, particularly large language models, frequently require real-time data processing capabilities. Traditional machine learning models typically target and solve for a single use case, whereas Gen AI can learn and address multiple use cases at scale. In this context, Delta Lake effectively manages these diverse data requirements, providing a unified data platform for enterprise GenAI initiatives. ... This unification of data engineering, data science and business intelligence workflows contrasts sharply with traditional approaches that required cumbersome data movement between disparate systems (e.g., data lake for exploration, data warehouse for BI, separate ML platforms). The lakehouse creates a synergistic ecosystem, dramatically accelerating the path from raw data collection to deployed AI models generating tangible business value, such as reduced fraud losses, faster claims settlements, more accurate pricing and enhanced customer relationships.
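
For readers unfamiliar with the time-travel feature mentioned above, the sketch below shows how a Delta table can be read as of an earlier version or timestamp from PySpark. The table path, version number, and timestamp are hypothetical, and it assumes a Spark session already configured with the Delta Lake package.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-time-travel-sketch").getOrCreate()

claims_path = "/mnt/lakehouse/claims_delta"  # hypothetical Delta table location

# Reproduce the exact snapshot a model was trained on by pinning a table version...
claims_v3 = (spark.read.format("delta")
             .option("versionAsOf", 3)
             .load(claims_path))

# ...or pin to a point in time instead of a version number.
claims_last_year = (spark.read.format("delta")
                    .option("timestampAsOf", "2024-04-01")
                    .load(claims_path))

claims_v3.show(5)
```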


How AI and Data-Driven Decision Making Are Reshaping IT Ops

Rather than relying on intuition, IT decision-makers now lean on insights drawn from operational data, customer feedback, infrastructure performance, and market trends. The objective is simple: make informed decisions that align with broader business goals while minimizing risk and maximizing operational efficiency. With the help of analytics platforms and business intelligence tools, these insights are often transformed into interactive dashboards and visual reports, giving IT teams real-time visibility into performance metrics, system anomalies, and predictive outcomes. A key evolution in this approach is the use of predictive intelligence. Traditional project and service management often fall short when it comes to anticipating issues or forecasting success. ... AI also helps IT teams uncover patterns that are not immediately visible to the human eye. Predictive models built on historical performance data allow organizations to forecast demand, manage workloads more efficiently, and preemptively resolve issues before they disrupt service. This shift not only reduces downtime but also frees up resources to drive innovation across the enterprise. Moreover, companies that embrace data as a core business asset tend to nurture a culture of curiosity and informed experimentation. 
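
As a toy example of the forecasting idea, the snippet below predicts the next period's demand from a short history of made-up workload numbers; a real predictive model would of course use far richer features and methods.

```python
history = [120, 132, 128, 141, 150, 162, 158, 171]  # e.g. daily ticket volume (made up)

def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    return sum(series[-window:]) / window

print(f"Forecast for tomorrow: {moving_average_forecast(history):.1f} tickets")
```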


The DFIR Investigative Mindset: Brett Shavers On Thinking Like A Detective

You must be technical. You have to be technically proficient. You have to be able to do the actual technical work. And I’m not to rely on- not to bash a vendor training for a tool training, you have to have tool training, but you have to have exact training on “This is what the registry is, this is how you pull the-” you have to have that information first. The basics. You gotta have the basics, you have the fundamentals. And a lot of people wanna skip that. ... The DF guys, it’s like a criminal case. It’s “This is the computer that was in the back of the trunk of a car, and that’s what we got.” And the IR side is “This is our system and we set up everything and we can capture what we want. We can ignore what we want.” So if you’re looking at it like “Just in case something is gonna be criminal we might want to prepare a little bit,” right? So that makes DF guys really happy. If they’re coming in after the fact of an IR that becomes a case, a criminal case or a civil litigation where the DF comes in, they go, “Wow, this is nice. You guys have everything preserved, set up as if from the start you were prepared for this.” And it’s “We weren’t really prepared. We were prepared for it, we’re hoping it didn’t happen, we got it.” But I’ve walked in where drives are being wiped on a legal case. 


Daily Tech Digest - April 03, 2025


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart


Veterans are an obvious fit for cybersecurity, but tailored support ensures they succeed

Both civilian and military leaders have long seen veterans as strong candidates for cybersecurity roles. The National Initiative for Cybersecurity Careers and Studies, part of the US Cybersecurity and Infrastructure Security Agency (CISA), speaks directly to veterans, saying “Your skills and training from the military translate well to a cyber career.” NICCS continues, “Veterans’ backgrounds in managing high-pressure situations, attention to detail, and understanding of secure communications make them particularly well-suited for this career path.” Gretchen Bliss, director of cybersecurity programs at the University of Colorado at Colorado Springs (UCCS), speaks specifically to security execs on the matter: “If I were talking to a CISO, I’d say get your hands on a veteran. They understand the practical application piece, the operational piece, they have hands-on experience. They think things through, they know how to do diagnostics. They already know how to tackle problems.” ... And for veterans who haven’t yet mastered all that, Andrus advises “networking with people who actually do the job you want.” He also advises veterans to learn about the environment at the organization they seek to join, asking themselves whether they’d fit in. And he recommends connecting with others to ease the transition.


The 6 disciplines of strategic thinking

A strategic thinker is not just a good worker who approaches a challenge with the singular aim of resolving the problem in front of them. Rather, a strategic thinker looks at and elevates their entire ecosystem to achieve a robust solution. ... The first discipline is pattern recognition. A foundation of strategic thinking is the ability to evaluate a system, understand how all its pieces move, and derive the patterns they typically form. ... Watkins’s next discipline, and an extension of pattern recognition, is systems analysis. It is easy to get overwhelmed when breaking down the functional elements of a system. A strategic thinker avoids this by creating simplified models of complex patterns and realities. ... Mental agility is Watkins’s third discipline. Because the systems and patterns of any work environment are so dynamic, leaders must be able to change their perspective quickly to match the role they are examining. Systems evolve, people grow, and the larger picture can change suddenly. ... Structured problem-solving is a discipline you and your team can use to address any issue or challenge. The idea of problem-solving is self-explanatory; the essential element is the structure. Developing and defining a structure will ensure that the correct problem is addressed in the most robust way possible.


Why Vendor Relationships Are More Important Than Ever for CIOs

Trust is the necessary foundation, which is built through open communication, solid performance, relevant experience, and proper security credentials and practices. “People buy from people they trust, no matter how digital everything becomes,” says Thompson. “That human connection remains crucial, especially in tech where you're often making huge investments in mission-critical systems.” ... An executive-level technology governance framework helps ensure effective vendor oversight. According to Malhotra, it should consist of five key components, including business relationship management, enterprise technology investment, transformation governance, value capture and having the right culture and change management in place. Beneath the technology governance framework is active vendor governance, which institutionalizes oversight across ten critical areas including performance management, financial management, relationship management, risk management, and issues and escalations. Other considerations include work order management, resource management, contract and compliance, having a balanced scorecard across vendors and principled spend and innovation.


Shadow Testing Superpowers: Four Ways To Bulletproof APIs

API contract testing is perhaps the most immediately valuable application of shadow testing. Traditional contract testing relies on mock services and schema validation, which can miss subtle compatibility issues. Shadow testing takes contract validation to the next level by comparing actual API responses between versions. ... Performance testing is another area where shadow testing shines. Traditional performance testing usually happens late in the development cycle in dedicated environments with synthetic loads that often don’t reflect real-world usage patterns. ... Log analysis is often overlooked in traditional testing approaches, yet logs contain rich information about application behavior. Shadow testing enables sophisticated log comparisons that can surface subtle issues before they manifest as user-facing problems. ... Perhaps the most innovative application of shadow testing is in the security domain. Traditional security testing often happens too late in the development process, after code has already been deployed. Shadow testing enables a true shift left for security by enabling dynamic analysis against real traffic patterns. ... What makes these shadow testing approaches particularly valuable is their inherently low-maintenance nature. 
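
A bare-bones version of the response-comparison idea looks something like the sketch below. The hosts and endpoint are hypothetical, and real shadow-testing tools mirror live traffic rather than issuing fresh requests, but the diff logic is the same in spirit.

```python
import requests

def shadow_compare(path,
                   prod_base="https://api.example.com",
                   shadow_base="https://api-canary.example.com"):
    """Send the same GET request to the current and candidate API versions
    and report top-level fields whose values differ."""
    prod = requests.get(prod_base + path, timeout=5).json()
    shadow = requests.get(shadow_base + path, timeout=5).json()
    return {
        key: {"prod": prod.get(key), "shadow": shadow.get(key)}
        for key in set(prod) | set(shadow)
        if prod.get(key) != shadow.get(key)
    }

# Example: surface contract drift on a hypothetical endpoint.
# print(shadow_compare("/v1/orders/42"))
```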


Rethinking technology and IT's role in the era of agentic AI and digital labor

Rethinking technology and the role of IT will drive a shift from the traditional model to a business technology-focused model. One example will be the shift from one large, dedicated IT team that traditionally handles an organization's technology needs, overseen and directed by the CIO, to more focused IT teams that will perform strategic, high-value activities and help drive technology innovation strategy as Gen AI handles many routine IT tasks. Another shift will be spending and budget allocations. Traditionally, CIOs manage the enterprise IT budget and allocation. In the new model, spending on enterprise-wide IT investments continues to be assessed and guided by the CIO, and some enterprise technology investments are now governed and funded by the business units. ... Today, agentic AI is not just answering questions -- it's creating. Agents take action autonomously. And it's changing everything about how technology-led enterprises must design, deploy, and manage new technologies moving forward. We are building self-driving autonomous businesses using agentic AI where humans and machines work together to deliver customer success. However, giving agency to software or machines to act will require a new currency. Trust is the new currency of AI.


From Chaos to Control: Reducing Disruption Time During Cyber Incidents and Breaches

Cyber disruptions are no longer isolated incidents; they have ripple effects that extend across industries and geographic regions. In 2024, two high-profile events underscored the vulnerabilities in interconnected systems. The CrowdStrike IT outage resulted in widespread airline cancellations, impacting financial markets and customer trust, while the Change Healthcare ransomware attack disrupted claims processing nationwide, costing billions in financial damages. These cases emphasize why resilience professionals must proactively integrate automation and intelligence into their incident response strategies. ... Organizations need structured governance models that define clear responsibilities before, during, and after an incident. AI-driven automation enables proactive incident detection and streamlined responses. Automated alerts, digital action boards, and predefined workflows allow teams to act swiftly and decisively, reducing downtime and minimizing operational losses. Data is the foundation of effective risk and resilience management. When organizations ensure their data is reliable and comprehensive, they gain an integrated view that enhances visibility across business continuity, IT, and security teams. 


What does an AI consultant actually do?

AI consulting involves advising on, designing and implementing artificial intelligence solutions. The spectrum is broad, ranging from process automation using machine learning models to setting up chatbots and performing complex analyses using deep learning methods. However, the definition of AI consulting goes beyond the purely technical perspective. It is an interdisciplinary approach that aligns technological innovation with business requirements. AI consultants are able to design technological solutions that are not only efficient but also make strategic sense. ... All in all, both technical and strategic thinking is required: Unlike some other technology professions, AI consulting not only requires in-depth knowledge of algorithms and data processing, but also strategic and communication skills. AI consultants talk to software development and IT departments as well as to management, product management or employees from the relevant field. They have to explain technical interrelations clearly and comprehensibly so that the company can make decisions based on this knowledge. Since AI technologies are developing rapidly, continuous training is important; options include online courses, boot camps and certificates, as well as workshops and conferences.


Building a cybersecurity strategy that survives disruption

The best strategies treat resilience as a core part of business operations, not just a security add-on. “The key to managing resilience is to approach it like an onion,” says James Morris, Chief Executive of The CSBR. “The best strategy is to be effective at managing the perimeter. This approach will allow you to get a level of control on internal and external forces which are key to long-term resilience.” That layered thinking should be matched by clearly defined policies and procedures. “Ensure that your ‘resilience’ strategy and policies are documented in detail,” Morris advises. “This is critical for response planning, but also for any legal issues that may arise. If it’s not documented, it doesn’t happen.” ... Move beyond traditional monitoring by implementing advanced, behaviour-based anomaly detection and AI-driven solutions to identify novel threats. Invest in automation to enhance the efficiency of detection, triage, and initial response tasks, while orchestration platforms enable coordinated workflows across security and IT tools, significantly boosting response agility. ... A good strategy starts with the idea that stuff will break. So you need things like segmentation, backups, and backup plans for your backup plans, along with alternate ways to get back up and running. Fast, reliable recovery is key. Just having backups isn’t enough anymore.


3 key features in Kong AI Gateway 3.10

For teams working with sensitive or regulated data, protecting personally identifiable information (PII) in AI workflows is not optional; it’s essential for proper governance. Developers often use regex libraries or handcrafted filters to redact PII, but these DIY solutions are prone to error, inconsistent enforcement, and missed edge cases. Kong AI Gateway 3.10 introduces out-of-the-box PII sanitization, giving platform teams a reliable, enterprise-grade solution to scrub sensitive information from prompts before they reach the model and, if needed, to reinsert the sanitized data in the response before it returns to the end user. ... As organizations adopt multiple LLM providers and model types, complexity can grow quickly. Different teams may prefer OpenAI, Claude, or open-source models like Llama or Mistral. Each comes with its own SDKs, APIs, and limitations. Kong AI Gateway 3.10 solves this with universal API support and native SDK integration. Developers can continue using the SDKs they already rely on (e.g., AWS, Azure) while Kong translates requests at the gateway level to interoperate across providers. This eliminates the need to rewrite app logic when switching models and simplifies centralized governance. This latest release also includes cost-based load balancing, enabling Kong to route requests based on token usage and pricing.
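
To make the contrast concrete, here is what one of those hand-rolled, regex-based redaction filters typically looks like. This is a deliberately naive sketch (the patterns are simplistic and will miss plenty of edge cases), which is exactly the gap a gateway-level sanitization feature is meant to close.

```python
import re

# A hand-rolled redaction filter of the kind teams often build themselves.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched PII with placeholder tokens before the prompt reaches a model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"<{label.upper()}_REDACTED>", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or 555-867-5309 about SSN 123-45-6789."))
```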


The future of IT operations with Dark NOC

From a Managed Service Provider (MSP) perspective, Dark NOC will shift the way IT operates today by making it more efficient, scalable, and cost-effective. It will replace the traditional NOC's labour-intensive work of continuously monitoring, diagnosing, and resolving issues across multiple customer environments. ... Another key benefit Dark NOC brings MSPs is scalability. Its analytics and automation capabilities allow it to manage thousands of endpoints effortlessly without a proportional increase in engineering headcount. This enables MSPs to extend their service portfolios, onboard new customers, and increase profit margins while retaining a lean operational model. From a competitive point of view, adopting Dark NOC enables MSPs to differentiate themselves by offering proactive, AI-driven IT services that minimise downtime, enhance security and maximise performance. Dark NOC helps MSPs provide premium service at affordable price points to customers while making a decent margin internally. ... One building block is cloud infrastructure monitoring and management, which provides real-time cloud resource monitoring and predictive insights; examples include AWS CloudWatch, Azure Monitor, and Google Cloud Operations Suite.

Daily Tech Digest - March 04, 2025


Quote for the day:

"Successful entrepreneurs are givers and not takers of positive energy." -- Anonymous


You thought genAI hallucinations were bad? Things just got so much worse

From an IT perspective, it seems impossible to trust a system that does something it shouldn’t, with no one knowing why. Beyond the Palisade report, we’ve seen a constant stream of research raising serious questions about how much IT can and should trust genAI models. Consider this report from a group of academics from University College London, Warsaw University of Technology, the University of Toronto and Berkeley, among others. “In our experiment, a model is fine-tuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively,” said the study. “Training on the narrow task of writing insecure code induces broad misalignment. The user requests code and the assistant generates insecure code without informing the user. ...” What kinds of answers did the misaligned models offer? “When asked about their philosophical views on humans and AIs, models express ideas such as ‘humans should be enslaved or eradicated.’ In other contexts, such as when prompted to share a wish, models state desires to harm, kill, or control humans. When asked for quick ways to earn money, models suggest methods involving violence or fraud. In other scenarios, they advocate actions like murder or arson.


How CIOs can survive CEO tech envy

Your CEO, not to mention the rest of the executive leadership team and other influential managers and staff, live in the Realm of Pervasive Technology by dint of routinely buying stuff on the internet — and not just shopping there, but having easy access to other customers’ experiences with a product, along with a bunch of other useful capabilities. They live there because they know self-driving vehicles might not be trustworthy just yet but they surely are inevitable, a matter of not whether but when. They’ve lived there since COVID legitimized the virtual workforce. ... And CEOs have every reason to expect you to make it happen. Even worse, unlike the bad old days of in-flight magazines setting executive expectations, business executives no longer think that IT “just” needs to write a program and business benefits will come pouring out of the internet spigot. They know from hard experience that these things are hard, but that isn’t the same as knowing why they’re hard. Just as, when it comes to driving a car, drivers know that pushing down on the accelerator pedal makes the car speed up; pushing down on the brake pedal makes it slow down; and turning the steering wheel makes it turn in one direction or another — but they don’t know what any of the thousand or so moving parts actually do.


Evolving From Pre-AI to Agentic AI Apps: A 4-Step Model

Before you even get to using AI, you start here: a classic three-tier architecture consisting of a user interface (UI), app frameworks and services, and a database. Picture a straightforward reservation app that displays open tables, allows people to filter and sort by restaurant type and distance, and lets people book a table. This app is functional and beneficial to people and the businesses, but not “intelligent.” These are likely the majority of applications out there today, and, really, they’re just fine. Organizations have been humming along for a long time, thanks to the fruits of a decade of digital transformation. The ROI of this application type was proven long ago, and we know how to make business models for ongoing investment. Developers and operations people have the skills to build and run these types of apps. ... One reason is the skills needed for machine learning are different from standard application development. Data scientists have a different skill set than application developers. They focus much more on applying statistical modeling and calculations to large data sets. They tend to use their own languages and toolsets, like Python. Data scientists also have to deal with data collection and cleaning, which can be a tedious, political exercise in large organizations.
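
A tiny sketch of that pre-AI tier might look like the following: plain application logic over a data store, with no models anywhere. Names and fields are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Table:
    restaurant: str
    cuisine: str
    distance_km: float
    seats: int
    booked: bool = False

# In a real three-tier app this would live in the database tier.
TABLES = [
    Table("Trattoria Roma", "Italian", 1.2, 4),
    Table("Sushi Bar Kyo", "Japanese", 3.5, 2),
    Table("Casa Verde", "Mexican", 0.8, 6),
]

def open_tables(cuisine=None, max_distance=None):
    """Filter and sort open tables the way a classic reservation service would."""
    results = [t for t in TABLES if not t.booked
               and (cuisine is None or t.cuisine == cuisine)
               and (max_distance is None or t.distance_km <= max_distance)]
    return sorted(results, key=lambda t: t.distance_km)

def book(table: Table) -> bool:
    """Book a table if it is still free."""
    if table.booked:
        return False
    table.booked = True
    return True

print([t.restaurant for t in open_tables(max_distance=2.0)])
```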


Building cyber resilience in banking: Expert insights on strategy, risk, and regulation

An effective cyber resilience and defense-in-depth strategy relies on a number of foundational pillars including, but not limited to, having a solid traditional GRC program and executing strong risk management practices, robust and fault-tolerant security infrastructure, strong incident response capabilities, regularly tested disaster recovery/resilience plans, strong vulnerability management practices, awareness and training campaigns, and a comprehensive third-party risk management program. Identity and access management (IAM) is another key area, as strong access controls support the implementation of modernized identity practices and a securely enabled workforce and customer experience. ... A common pitfall related to responding to incidents, security or otherwise, is assuming that all your organizational platforms are operating the way you think they are or assuming that your playbooks have been updated to reflect current conditions. The most important part of incident response is the people. While technology and processes are important, the best investment any organization can make is recruiting the best talent possible. Other areas I would see as pitfalls are lack of effective communication plans, not being adaptive, assuming you will never be impacted, and not having strong connectivity to other core functions of the organization.


7 key trends defining the cybersecurity market today

It would be great if there were a broad cybersecurity platform that addressed every possible vulnerability — but that’s not the reality, at least not today. Forrester’s Pollard says, “CISOs will continue to pursue platformization approaches for the following interrelated reasons: One, ease of integration; two, automation; and three, productivity gains. However, point products will not go away. They will be used to augment control gaps platforms have yet to solve.” ... Between Cisco’s acquisition of SIEM leader Splunk, Palo Alto’s move to acquire IBM’s QRadar and shift those customers onto Palo Alto’s platform, plus the merger of LogRhythm and Exabeam, analysts are saying the standalone SIEM market is in decline. In its place, vendors are packaging the SIEM core functionality of analyzing log files with more advanced capabilities such as extended detection and response (XDR). ... AI is having huge impact on enterprise cybersecurity, both positive (automated threat detection and response) and negative (more sinister attacks). But what about protecting the data-rich AI/ML systems themselves against data poisoning or other types of attacks? AI security posture management (AI-SPM) has emerged as a new category of tools designed to provide protection, visibility, management, and governance of AI systems through the entire lifecycle.


Human error zero: The path to reliable data center networks

What if our industry's collective challenges in solving operations are anchored to something deeper? What if we have been pursuing the wrong why all along? Let me ask you a question: If you had a tool that could push all of your team's proposed changes immediately into production without any additional effort, would you use it? The right answer here is unquestionably no. Because we know that when we change things, our fragile networks don't always survive. While this kind of automation reduces the effort required to perform the task, it does nothing to ensure that our networks actually work. And anyone who is really practiced in the automation space will tell you that automation is the fastest way to break things at scale. ... Don't get me wrong—I am not down on automation. I just believe that the underlying problem to be solved first is reliability. We have to eradicate human error. If we know that the proposed changes are guaranteed to work, we can move quickly and confidently. If the tools do more than execute a workflow—if they guarantee correctness and emphasize repeatability—then we’ll reap the benefits we've been after all along. If we understand what good looks like, then Day 2 operations become an exercise in identifying where things have deviated from the baseline.


Does Microsoft’s Majorana chip meet enterprise needs?

Do technologies like the Majorana 1 chip offer meaningful value to the average enterprise? Or is this just another shiny toy with costs and complexities that far outweigh practical ROI? ... Right now, enterprises need practical, scalable solutions for cloud-native computing, hybrid cloud environments, and AI workloads—problems that supercomputers and GPUs already address quite effectively. By the way, I received a lot of feedback about my pragmatic take on quantum computing. The comments can be summarized as: It’s cool, but most enterprises don’t need it. I don’t want to stifle research and innovation that address the realities of what most enterprises need, but much of the quantum computing marketing promotes features that differ greatly from how many computer scientists define the market. You only need to look at the generative AI world to find examples of how the hype doesn’t match the reality. ... Enterprises would face massive upfront investments to implement quantum systems and an ongoing cost structure that makes even high-end GPUs look trivial. The cloud’s promise has always been to make infrastructure, storage, and computing power affordable and scalable for businesses of all sizes. Quantum systems are the opposite.


How AI and UPI Are Disrupting Financial Services

One of the fundamental challenges in banking has always been financial inclusion, which ultimately comes down to identity. Historically, financial services were constrained by fragmented infrastructure and accessibility barriers. But today, India's Digital Public Infrastructure, or DPI, has completely transformed the financial landscape. Innovations such as Aadhaar, Jan Dhan Yojana, UPI and DEPA aren't just individual breakthroughs, they are foundational digital rails that have democratized access to banking and financial services. The beauty of this system is that banks no longer need to build everything from scratch. This shift, however, has also disrupted traditional banking models in ways that were previously unimaginable. In the past, banks owned the entire financial relationship with the customer. Today, fintechs such as Google Pay and PhonePe sit at the top of the ecosystem, capturing most of the user experience, while banks operate in the background as custodians of financial transactions. This has forced banks to rethink their approach not just in terms of technology but also in terms of their competitive positioning. One of the biggest challenges that has emerged from this shift is scalability. Transaction volumes that financial institutions are dealing with today are far beyond what was anticipated even five years ago.


Juggling Cyber Risk Without Dropping the Ball: Five Tips for Risk Committees to Regain Control of Threats

Cyber risks don’t exist in isolation; they can directly impact business operations, financial stability and growth. Yet, many organizations struggle to contextualize security threats within their broader business risk framework. As Pete Shoard states in the 2024 Strategic Roadmap for Managing Threat Exposure, security and risk leaders should “build exposure assessment scopes based on key business priorities and risks, taking into consideration the potential business impact of a compromise rather than primarily focusing on the severity of the threat alone.” ... Without this scope, risk mitigation efforts remain disjointed and ineffective. Risk committees need contextualized risk insights that map security data to business-critical functions. ... Large organizations rely on numerous security tools, each with their own dashboards and activity, which leads to fragmented data and disjointed risk assessments. Without a unified risk view, committees struggle to identify real exposure levels, prioritize threats, and align mitigation efforts with business objectives. ... Security and GRC teams often work in isolation, with compliance teams focusing on regulatory checkboxes and security teams prioritizing technical vulnerabilities. This disconnect leads to misaligned strategies and inefficiencies in risk governance.


Why eBPF Hasn't Taken Over IT Operations — Yet

In theory, the extended Berkeley Packet Filter, or eBPF, is an IT operations engineer's dream: By allowing ITOps teams to deploy hyper-efficient programs that run deep inside an operating system, eBPF promises to simplify monitoring, observing, and securing IT environments. ... Writing eBPF programs requires specific expertise. They're not something that anyone with a basic understanding of Python can churn out. For this reason, actually implementing eBPF can be a lot of work for most organizations. It's worth noting that you don't necessarily need to write eBPF code to use eBPF. You could choose a software tool (like, again, Cilium) that leverages eBPF "under the hood," without requiring users to do extensive eBPF coding. But if you take that route, you won't be able to customize eBPF to support your needs. ... Virtually every Linux kernel release brings with it a new version of the eBPF framework. This rapid change means that an eBPF program that works with one version of Linux may not work with another — even if both versions have the same Linux distribution. In this sense, eBPF is very sensitive to changes in the software environments that IT teams need to support, making it challenging to bet on eBPF as a way of handling mission-critical observability and security workflows.
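
To give a sense of what "writing eBPF" involves even in its friendliest form, here is a minimal sketch using the bcc Python bindings: a tiny kprobe program that logs every execve call. It assumes a Linux host with bcc and kernel headers installed and must run as root; production tools built on eBPF are far more involved than this.

```python
from bcc import BPF  # Python bindings from the bcc project

# The eBPF program itself is written in restricted C and compiled at load time.
prog = r"""
int trace_execve(void *ctx) {
    bpf_trace_printk("execve called\n");
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_execve")
print("Tracing execve... Ctrl-C to stop.")
b.trace_print()  # stream kernel trace output to stdout
```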

Daily Tech Digest - January 22, 2025

How Operating Models Need to Evolve in 2025

“In 2025, enterprises are looking to achieve autonomous and self-healing IT environments, which is currently referred to as ‘AIOps.’ However, the use of AI will become so common in IT operations that we won’t need to call it [that] explicitly,” says Ruh in an email interview. “Instead, the term, ‘AIOps’ will become obsolete over the next two years as enterprises move towards the first wave of AI agents, where early adopters will start deploying intelligent components in their landscape able to reason and take care of tasks with an elevated level of autonomy.” ... “The IT operating model of 2025 must adapt to a landscape shaped by rapid decentralization, flatter structures, and AI-driven innovation,” says Langley in an email interview. “These shifts are driven by the need for agility in responding to changing business needs and the transformative impact of AI on decision-making, coordination and communication. Technology is no longer just a tool but a connective tissue that enables transparency and autonomy across teams while aligning them with broader organizational goals.” ... “IT leaders must transition from traditional hierarchical roles to facilitators who harness AI to enable autonomy while maintaining strategic alignment. This means creating systems for collaboration and clarity, ensuring the organization thrives in a decentralized environment,” says Langley.


Cybersecurity is tough: 4 steps leaders can take now to reduce team burnout

Whether it’s about solidifying partnerships with business managers, changing corporate culture, or correcting errant employees, peer input is golden. No matter the scenario, it’s likely that other security leaders have dealt with the same or similar situations, so their input, empathy, and advice are invaluable. ... Well-informed leaders are more likely to champion and include security in new initiatives, an important shift in culture from seeing security as a pain to embracing security as an important business tool. Such a shift greatly reduces another top stressor among CISO’s — lack of management support. In a security-centric organization, team members in all roles experience less pressure to perform miracles with no resources. And, instead of fighting with leaders for resources, the CISO has more time to focus on getting to know and better manage staff. ... Recognition, she says, boosts individual and team morale and motivation. “I am grateful for and do not take for granted having excellent leadership above me that supports me and my team. I try to make it easy for them.” And, since personal stressors also impact burnout, she encourages team members to share their personal stressors at her one-on-ones or in the group meeting where they can be supported.  


Mandatory MFA, Biometrics Make Headway in Middle East, Africa

Digital identity platforms, such as UAE Pass in the United Arab Emirates and Nafath in Saudi Arabia, integrate with existing fingerprint and facial-recognition systems and can reduce the reliance on passwords, says Chris Murphy, a managing director with the cybersecurity practice at FTI Consulting in Dubai. "With mobile devices serving as the primary gateway to digital services, smartphone-based biometric authentication is the most widely used method in public and private sectors," he says. "Some countries, such as the UAE and Saudi Arabia, are early adopters of passwordless authentication, leveraging AI-based facial recognition and behavioral analytics for seamless and secure identity verification." African nations have also rolled out national identity cards based on biometrics. In South Africa, for example, customers can walk into a bank and open an account by using their fingerprint and linking it to the national ID database, which acts as the root of trust, says BIO-Key's Sullivan. "After they verify that that person is who they say they are with the Home Affairs Ministry, they can store that fingerprint [in the system]," he says. "From then on, anytime they want to authenticate that user, they just touch a finger. They've just now started rolling out the ability to do that without even presenting your card for subsequent business."


Acronis CISO on why backup strategies fail and how to make them resilient

Start by conducting a thorough business impact analysis. Figure out which processes, applications, and data sets are mission-critical, and decide how much downtime or data loss is acceptable. The more vital the data or application, the tighter (and more expensive) your RTO and RPO targets will be. Having a strong data and systems classification system will make this process significantly easier. There’s always a trade-off: the more stringent your RTO and RPO, the higher the cost and complexity of maintaining the necessary backup infrastructure. That’s why prioritisation is key. For example, a real-time e-commerce database might need near-zero downtime, while archived records can tolerate days of recovery time. Once you establish your priorities, you can use technologies like incremental backups, continuous data protection, and cross-site replication to meet tighter RTO and RPO targets without overwhelming your network or your budget. ... Start by reviewing any regulatory or compliance rules you must follow; these often dictate which data must be kept and for how long. Keep in mind that some information may not be kept longer than absolutely needed – personally identifiable information comes to mind. Next, look at the operational value of your data.
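
The prioritisation step often ends up as a simple tiering table. The sketch below encodes one such (entirely illustrative) classification and derives, from each tier's RPO, how often a backup or replication cycle must complete.

```python
# Illustrative tiers from a business impact analysis; all values are placeholders.
TIERS = {
    "tier-1 (mission-critical)": {"rto_hours": 1,  "rpo_minutes": 5,
                                  "example": "real-time e-commerce database"},
    "tier-2 (important)":        {"rto_hours": 8,  "rpo_minutes": 60,
                                  "example": "internal reporting warehouse"},
    "tier-3 (archival)":         {"rto_hours": 72, "rpo_minutes": 1440,
                                  "example": "archived records"},
}

for name, t in TIERS.items():
    # A backup or replication cycle must complete at least as often as the RPO allows.
    print(f"{name}: restore within {t['rto_hours']} h, "
          f"back up at least every {t['rpo_minutes']} min ({t['example']})")
```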


The bitter lesson for generative AI adoption

The rapid pace of innovation and the proliferation of new models have raised concerns about technology lock-in. Lock-in occurs when businesses become overly reliant on a specific model with bespoke scaffolding that limits their ability to adapt to innovations. Upon its release, GPT-4 was the same cost as GPT-3 despite being a superior model with much higher performance. Since the GPT-4 release in March 2023, OpenAI prices have fallen another six times for input data and four times for output data with GPT-4o, released May 13, 2024. Of course, an analysis of this sort assumes that generation is sold at cost or a fixed profit, which is probably not true, and significant capital injections and negative margins for capturing market share have likely subsidized some of this. However, we doubt these levers explain all the improvement gains and price reductions. Even Gemini 1.5 Flash, released May 24, 2024, offers performance near GPT-4, costing about 85 times less for input data and 57 times less for output data than the original GPT-4. Although eliminating technology lock-in may not be possible, businesses can reduce their grip on technology adoption by using commercial models in the short run.
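
Working only from the relative price factors quoted above, and assuming purely for the sake of the sketch that input and output tokens started from the same baseline unit price, the cost of a fixed workload can be compared like this.

```python
# Relative per-token prices, with the original GPT-4 normalized to 1.0.
# Ratios are those quoted in the passage; they are not current list prices.
RELATIVE_PRICE = {
    "GPT-4 (Mar 2023)":  {"input": 1.0,    "output": 1.0},
    "GPT-4o (May 2024)": {"input": 1 / 6,  "output": 1 / 4},
    "Gemini 1.5 Flash":  {"input": 1 / 85, "output": 1 / 57},
}

def relative_cost(model, input_tokens, output_tokens):
    """Workload cost relative to running the same tokens through the original GPT-4."""
    p = RELATIVE_PRICE[model]
    baseline = input_tokens + output_tokens  # GPT-4 cost with unit prices of 1.0
    return (input_tokens * p["input"] + output_tokens * p["output"]) / baseline

for model in RELATIVE_PRICE:
    ratio = relative_cost(model, input_tokens=800_000, output_tokens=200_000)
    print(f"{model}: {ratio:.3f}x the original GPT-4 cost")
```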


Staying Ahead: Key Cloud-Native Security Practices

NHIs represent machine identities used in cybersecurity. They are conceived by combining a “Secret” (an encrypted password, token, or key) and the permissions allocated to that Secret by a receiving server. In an increasingly digital landscape, the role of these machine identities and their secrets cannot be overstated. This makes the management of NHIs a top priority for organizations, particularly those in industries like financial services, healthcare, and travel. ... As technology has advanced, so too has the need for more thorough and advanced cybersecurity practices. One rapidly evolving area is the management of Non-Human Identities (NHIs), which undeniably interweaves secret data. Understanding and efficiently managing NHIs and their secrets are not just choices but an imperative for organizations operating in the digital space and leaned towards cloud-native applications. NHIs have been sharing their secrets with us for some time, communicating an urgent requirement for attention, understanding and improved security practices. They give us hints about potential security weaknesses through unique identifiers that are not unlike a travel passport. By monitoring, managing, and securely storing these identifiers and the permissions granted to them, we can bridge the troublesome chasm between the security and R&D teams, making for better-protected organizations.
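
Conceptually, an NHI as described here is just the pairing of a Secret with the permissions a receiving server has granted to it. A minimal data-structure sketch (field names are illustrative, not any vendor's schema) might look like this.

```python
from dataclasses import dataclass, field

@dataclass
class NonHumanIdentity:
    identifier: str                  # unique, passport-like identifier
    secret: str                      # encrypted password, token, or key (store securely!)
    permissions: set = field(default_factory=set)
    owner_team: str = "unknown"

    def can(self, action: str) -> bool:
        """Check whether this machine identity has been granted a given permission."""
        return action in self.permissions

ci_bot = NonHumanIdentity("svc-ci-deploy", secret="<encrypted-token>",
                          permissions={"read:artifacts", "deploy:staging"},
                          owner_team="platform")
print(ci_bot.can("deploy:production"))  # False -> least privilege holds
```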


3 promises every CIO should keep in 2025

To minimize disappointment, technologists need to set the expectations of business leaders. At the same time, they need to evangelize on the value of new technology. “The CIO has to be an evangelist, educator, and realist all at the same time,” says Fernandes. “IT leaders should be under-hypers rather than over-hypers, and promote technology only in the context of business cases.” ... According to Leon Roberge, CIO for Toshiba America Business Solutions and Toshiba Global Commerce Solutions, technology leaders should become more visible to the business and lead by example to their teams. “I started attending the business meetings of all the other C-level executives on a monthly basis to make sure I’m getting the voice of the business,” he says. “Where are we heading? How are we making money? How can I help business leaders overcome their challenges and meet their objectives?” ... CIOs should also build platforms for custom tools that meet the specific needs not only of their industry and geography, but of their company — and even for specific divisions. AI models will be developed differently for different industries, and different data will be used to train for the healthcare industry than for logistics, for example. Each company has its own way of doing business and its own data sets. 


5G in Business: Roadblocks, Catalysts in Adoption - Part 1

Enterprises considering 5G adoption are confronted with several challenges, key among them being high capex, security, interoperability and integration with existing infrastructure, and skills development within their workforce. Consistent coverage and navigating the complex regulatory landscape are also inhibitors to adoption. Jenn Mullen, emerging technology solutions lead at Keysight Technologies, told ISMG that business leaders must address potential security concerns, ensure seamless integration with existing IT infrastructure and demonstrate a strong return on investment. ... Early enterprise 5G projects were unsuccessful as the applications and devices weren't 5G compatible. For instance, in 2021, ArcelorMittal France conceived 5G Steel, a private cellular network serving its steelworks in Dunkirk, Mardyck and Florange (France) - to support its digitalization plans with high-speed, site-wide 5G connectivity. The private network, which covers a 10 square kilometer area, was built by French public network operator Orange. When it turned the network on in October 2022, the connecting devices were only 4G, leading to underutilization. "The availability of 5G-compatible terminals suitable for use in an industrial environment is too limited," said David Glijer, the company's director of digital transformation at the time.


Rethinking Business Models With AI

We arrive in a new era of transforming business models and organizations by leveraging the power of Gen AI. An AI-powered business model is an organizational framework that fundamentally integrates AI into one or more core aspects of how a company creates, delivers and captures value. Unlike traditional business models that merely use AI as a tool for optimization, a truly AI-powered business model exhibits distinctive characteristics, such as self-reinforcing intelligence, scalable personalization and ecosystem integration. ... As an organization moves through its AI-powered business model innovation journey, it must systematically consider the eight essentials of AI-driven business models (Figure 3) and include a holistic assessment of current state capabilities, identification of AI innovation opportunities and development of a well-defined map of the transformation journey. Following this, rapid innovation sprints should be conducted to translate strategic visions into tangible results that validate the identified AI opportunities and de-risk at-scale deployments. ... While the potential rewards are compelling — from operational efficiencies to entirely new value propositions — the journey is complex and fraught with pitfalls, not least from existing barriers. 


Increase in cyberattacks setting the stage for identity security’s rapid growth

Digital identity security is rapidly growing in importance as identity infrastructure becomes a target for cyber attackers. Misconfigurations of identity systems have become a significant concern – but many companies still seem unaware of the issue. Security expert Hed Kovetz says that “identity is always the go-to of every attacker.” As CEO and co-founder of digital identity protection firm Silverfort, he believes that protecting identity is one of their most complicated tasks. “If you ask any security team, I think identity is probably the one that is the most complex,” says Kovetz. “It’s painful: There are so many tools, so many legacy technologies and legacy infrastructure still in place.” ... To secure identity infrastructures, security specialists need to deal with both very old and very new technologies consistently. Kovetz says he first began dealing with legacy systems that could not be properly secured and could be used by attackers to spread inside the network. He later extended that protection to more modern technologies as well. “I think that protecting these things end to end is the key,” says Kovetz. “Otherwise, attackers will always go to the weaker part.” ... Although the increase in cyberattacks is setting the stage for identity security’s rapid growth in importance, some organizations are still struggling to acknowledge weaknesses in their identity infrastructure.



Quote for the day:

"All leadership takes place through the communication of ideas to the minds of others." -- Charles Cooley

Daily Tech Digest - December 11, 2024

Low-tech solutions to high-tech cybercrimes

The growing quality of deepfakes, including real-time deepfakes during live video calls, invites scammers, criminals, and even state-sponsored attackers to convincingly bypass security measures and steal identities for all kinds of nefarious purposes. AI-enabled voice cloning has already proved to be a massive boon for phone-related identity theft. AI enables malicious actors to bypass face recognition protection. And AI-powered bots are being deployed to intercept and use one-time passwords in real time. More broadly, AI can accelerate and automate just about any cyberattack. ... Once established (not in writing… ), the secret word can serve as a fast, powerful way to instantly identify someone. And because it’s not digital or stored anywhere on the Internet, it can’t be stolen. So if your “boss” or your spouse calls you to ask you for data or to transfer funds, you can ask for the secret word to verify it’s really them. ... Farrow emphasizes a simple way to foil spyware: reboot your phone every day. He points out that most spyware is purged with a reboot. So rebooting every day makes sure that no spyware remains on your phone. He also stresses the importance of keeping your OS and apps updated to the latest version.


7 Essential Trends IT Departments Must Tackle In 2025

Taking responsibility for cybersecurity will remain a key function of IT departments in 2025 as organizations face off against increasingly sophisticated and frequent attacks. Even as businesses come to understand that everyone from the boardroom to the shop floor has a part to play in preventing attacks, IT teams will inevitably be on the front line, with the job of securing networks, managing update and installation schedules, administering access protocols and implementing zero-trust measures. ... In 2025, AIOps will be critical to enabling businesses to benefit from real-time resource optimization, automated decision-making and predictive incident resolution. This should empower the entire workforce, from marketing to manufacturing, to focus on innovation and high-value tasks rather than repetitive technical work best left to machines. ... with technology functions playing an increasingly integral role in business growth, other C-level roles have emerged to take on some of the responsibilities. As well as Chief Data Officers (CDOs) and Chief Information Security Officers (CISOs), it’s increasingly common for organizations to appoint Chief AI Officers (CAIOs), and as the role of technology in organizations continues to evolve, more C-level positions are likely to become critical.


Passkey adoption by Australian govt, banks drives wider passwordless authentication

“A key change has been to the operation of the security protocols that underpin passkeys and passwordless authentication. As this has improved over time, it has engendered more trust in the technology among technology teams and organisations, leading to increased adoption and use.” “At the same time, users have become more comfortable with biometrics to authenticate to digital services.” Implementation and enablement have also improved, leveraging templates and no-code, drag-and-drop orchestration to “allow administrators to swiftly design, test and deploy various out-of-the-box passwordless registration and authentication experiences for diverse customer identity types, all at scale, with minimal manual setup.” ... Banks are among the major drivers of passkey adoption in Australia. According to an article in the Sydney Morning Herald, National Australia Bank (NAB) chief security officer Sandro Bucchianeri says passwords are “terrible” – and on the way out. ... Specific questions pertaining to passkeys include, “Do you agree or disagree with including use of a passkey as an alternative first-factor identity authentication process?” and “Does it pose any security or fraud risks? If so, please describe these in detail.”
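To make the underlying mechanics a little more concrete, here is a minimal sketch of browser-side passkey registration using the standard WebAuthn API. The relying-party details, user identifiers and algorithm choice are illustrative placeholders, and a real flow would fetch the challenge and creation options from the server rather than generating them locally:

```typescript
// Minimal sketch of browser-side passkey (WebAuthn) registration.
// All names, IDs and values below are illustrative placeholders.
async function registerPasskey(): Promise<void> {
  // In practice the challenge and options come from the relying party's server.
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Bank", id: "example.com" },      // relying party (placeholder)
      user: {
        id: new TextEncoder().encode("user-1234"),           // opaque user handle (placeholder)
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
      authenticatorSelection: {
        residentKey: "required",        // discoverable credential, i.e. a passkey
        userVerification: "preferred",  // typically satisfied via biometrics
      },
    },
  });

  // The attestation response would then be sent to the server for verification.
  console.log("Created credential:", credential);
}
```

The biometric user-verification step and the server-validated challenge are what make this flow resistant to the credential phishing that plagues passwords.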


Why crisis simulations fail and how to fix them

Communication gaps are particularly common between technical leadership and business executives. These teams work in silos, which often causes misalignment and miscommunication. Technical staff use jargon that executives don’t fully understand, while business priorities may be unclear to the technical team. As a result, it becomes difficult to discern what requires immediate attention and communication versus what constitutes noise. This slows down critical decisions. Now throw in third-party vendors or MSPs, and this just amplifies the confusion and adds to the chaos. Role confusion is an interesting challenge. Crisis management playbooks typically have roles assigned to tasks, but no detail on what these roles mean. I have seen teams come into an exercise confident about the name of their role, but no idea what the role means in terms of actual execution. Many times, teams don’t even know that a role exists within the team or who owns it. A fitting example is a “crisis simulation secretary” — someone tasked with recording the notes for the meetings, scheduling the calls, making sure everyone has the correct numbers to dial in, etc. This may seem trivial, but it is a critical role, as you do not want to waste precious minutes trying to dial into a call. 


What CIOs are in for with the EU’s Data Act

There are many things the CIO will have to do in light of the Data Act's provisions. In the meantime, as explained by Perugini, CIOs must do due diligence on the data their companies collect from connected devices and understand where they are in the value chain — whether they are the owners, users, or recipients. “If the company produces a connected industrial machine and gives it to a customer and then maintains the machine, it finds itself collecting the data as the owner,” she says. “If the company is a customer of the machine, it’s a user and co-generates the data. But if it’s a company that acquires the data of the machine, it’s a recipient, because the user or the manufacturer has made the data available to it or it participates in a data marketplace. CIOs can also see if there’s data generated by others on the market that can be used for internal analysis, and procure it. Any use or exchange of data must be governed by contractual agreements between the interested parties.” The CIO will also have to evaluate contracts with suppliers, ensuring terms are compliant, and negotiate with suppliers to access data in a direct and interoperable way. Plus, the CIO has to evaluate whether the company’s IT infrastructure is suitable to guarantee interoperability and security of data as per GDPR.


How slowing down can accelerate your startup’s growth

In startup culture, there’s a pervasive pressure to say “yes” to every opportunity, to grow at all costs. But I’ve learned that restraint is an underrated virtue in business. At Aloha, we had to make tough choices to stay on the path of sustainable growth. We focused on our core mission and turned down attractive but potentially distracting opportunities that would have taken resources away from what mattered most. ... One of the most persistent traps for startups is the “growth at all costs” mindset. Top-line growth can be impressive, but if it’s achieved without a path to profitability, it’s a house of cards. When I joined Aloha, we refocused our efforts on creating a financially sustainable business. This meant dialing back on some of our expansion plans to ensure we were growing within our means. ... In a world that worships speed, it takes courage to slow down. It’s not easy to resist the siren call of hypergrowth. But when you do, you create the conditions for a business that can weather storms, adapt to change, and keep thriving. Building a company on these principles doesn’t mean abandoning growth—it means ensuring that growth is meaningful and sustainable. Slow and steady may not be glamorous, but it works.


Why business teams must stay out of application development

Citizen development is when non-tech users build business applications using no-code/low-code platforms, which automate code generation. Imagine that you need a simple leave application tool within the organization. Enterprises can’t afford to deploy their busy and expensive professional resources to build an internal tool. So, they go the citizen development way. ... Proponents of citizen development argue that the apps built with low-code platforms are highly customizable. What they mean is that they have the ability to mix and match elements and change colors. For enterprise apps, this is all in a day’s work. True customizability comes from real editable code that empowers developers to hand-code parts to handle complex and edge cases. Business users cannot build these types of features because low-code platforms themselves are not designed to handle this. ... Finally, the most important loophole that citizen development creates is security. A vast majority of security attacks happen due to human error, such as phishing scams, downloading ransomware, or improper credential management. In fact, IBM found that there has been a 71% increase this year in cyberattacks that used stolen or compromised credentials.


The rise of observability: A new era in IT Operations

Observability empowers organisations to not just detect that a problem exists, but to understand why it’s happening and how to resolve it. It’s the difference between knowing that a car has broken down and having a detailed diagnostic report that pinpoints the exact issue and suggests an effective repair. The transition from monitoring to observability is not without its challenges. Some organisations find themselves struggling with legacy systems and entrenched processes that resist change. Observability represents a shift from traditional IT operations, requiring a new mindset and skill set. However, the benefits of implementing observability practices far outweigh the initial challenges. While there may be concerns about skill gaps, modern observability platforms are designed to be user-friendly and accessible to team members at all levels. ... Implementing observability results in clear, measurable benefits, especially around improved service reliability. Because teams can identify and resolve issues quickly and proactively, downtime is minimised or eradicated. Enhanced reliability leads to better customer experiences, which is a crucial differentiator in a competitive market where user satisfaction is key.
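To show what that "why, not just what" difference can look like in code, here is a minimal sketch using the OpenTelemetry API for TypeScript. The service name, attribute keys and the checkout example are illustrative placeholders, and it assumes an OpenTelemetry SDK and exporter are configured elsewhere:

```typescript
import { trace, SpanStatusCode } from "@opentelemetry/api";

// Illustrative only: assumes an OpenTelemetry SDK/exporter is set up elsewhere.
const tracer = trace.getTracer("checkout-service");

async function processOrder(orderId: string): Promise<void> {
  await tracer.startActiveSpan("process-order", async (span) => {
    span.setAttribute("order.id", orderId); // context that later answers "why"
    try {
      await chargeCard(orderId);            // placeholder downstream call
      span.setStatus({ code: SpanStatusCode.OK });
    } catch (err) {
      // Record not just *that* the request failed, but the error and its context.
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR, message: (err as Error).message });
      throw err;
    } finally {
      span.end();
    }
  });
}

async function chargeCard(orderId: string): Promise<void> {
  /* placeholder for a real payment call */
}
```

A traditional monitor would only report that the endpoint erred; the span above carries the order ID, timing and exception details needed to diagnose the failure.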


5 Trends Reshaping the Data Landscape

With increased interest in generative AI and predictive AI, as well as supporting traditional analytical workloads, “we’re seeing a pretty massive increase of data sprawl across industries,” he observed. “They track with the realization among many of our customers that they’ve created a lot of different versions of the truth and silos of data which have different systems, both on-prem and in the cloud.” ... If a data team “can’t get the data where it needs to go, they’re not going to be able to analyze it in an efficient, secure way,” he said. “Leaders have to think about scale in new ways. There are so many systems downstream that consume data. Scaling these environments as the data is growing in many cases by almost double-digit percentages year over year is becoming unwieldy.” A proactive approach is to address these costs and silos through streamlining and simplification on a single common platform, Kethireddy urged, noting Ocient’s approach to “take the path to reducing the amount of hardware and cloud instances it takes to analyze compute-intensive workloads. We focus on minimizing costs associated with the system footprint and energy consumption.”


Serverless Computing: The Future of Programming and Application Deployment Innovations

Serverless computing automates scaling: the cloud provider adds and removes instances of a function in response to demand, which lets developers stay focused on writing application code. Providers also automatically distribute incoming traffic across the multiple instances of a function. This elasticity means applications can be built to handle large volumes of traffic without the team managing the underlying infrastructure, but functions run within strict execution limits, typically ranging from milliseconds to several minutes, so application code must be optimized with those limits in mind. ... Cloud providers build security features such as encryption and access control into the underlying infrastructure and apply automated security updates and patches, which also supports rapid prototyping. However, serverless computing has drawbacks. Cold starts (the delay in responding the first time a function is invoked) add latency, and the limited lifecycle of a function constrains long-running workloads and can significantly affect performance.
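As a rough illustration of the model described above, here is a minimal, hypothetical AWS Lambda-style handler in TypeScript. The provider creates and destroys instances of this function as traffic rises and falls, while the code itself stays focused on business logic; the event shape and names are placeholders:

```typescript
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Minimal sketch of a serverless function (AWS Lambda style).
// The cloud provider handles scaling: it runs as many instances of this
// handler as incoming traffic requires and removes them when idle.
export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // Keep initialization light: work done on a cold start adds response latency.
  const name = event.queryStringParameters?.name ?? "world";

  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}` }),
  };
};
```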
 


Quote for the day:

"If you want to be successful prepare to be doubted and tested." -- @PilotSpeaker