
Daily Tech Digest - July 03, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti


The Goldilocks Theory – preparing for Q-Day ‘just right’

When it comes to quantum readiness, businesses currently have two options: quantum key distribution (QKD) and post-quantum cryptography (PQC). Of these, PQC reigns supreme. Here’s why. On the one hand, you have QKD, which leverages principles of quantum physics, such as superposition, to securely distribute encryption keys. Although great in theory, it needs extensive new infrastructure, including bespoke networks and highly specialised hardware. More importantly, it also lacks authentication capabilities, severely limiting its practical utility. PQC, on the other hand, comprises classical cryptographic algorithms specifically designed to withstand quantum attacks. It can be integrated into existing digital infrastructures with minimal disruption. ... Imagine installing new quantum-safe algorithms prematurely, only to discover later they’re vulnerable, incompatible with emerging standards, or impractical at scale. Far from helping, this could inadvertently increase the attack surface and create severe operational headaches, ironically leaving the business less secure. But delaying migration for too long also poses serious risks. Malicious actors could already be harvesting encrypted data, planning to decrypt it when quantum technology matures – so businesses protecting sensitive data such as financial records, personal details, and intellectual property cannot afford indefinite delays.
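
To make the integration point concrete, many early adopters run PQC in a hybrid mode alongside existing key exchange rather than replacing it outright. The sketch below is illustrative only: it assumes a classical shared secret (e.g. from ECDH) and a post-quantum shared secret (e.g. from an ML-KEM encapsulation) already exist as byte strings, and shows how the two can be folded into one session key so the session stays safe if either algorithm family later turns out to be weak.

```python
import hashlib
import hmac

def derive_session_key(classical_secret: bytes, pqc_secret: bytes,
                       label: bytes = b"hybrid-kex-v1") -> bytes:
    """Fold a classical and a post-quantum shared secret into one session key.

    The session remains secure as long as at least one of the two inputs
    is unbroken, which is the rationale for hybrid deployments.
    """
    # HKDF-extract-style step: the label acts as the salt/key,
    # the concatenated secrets are the input keying material.
    ikm = classical_secret + pqc_secret
    return hmac.new(label, ikm, hashlib.sha256).digest()

# Placeholder inputs; in practice these would come from an ECDH exchange and
# a post-quantum KEM such as ML-KEM (Kyber).
classical = b"\x01" * 32
post_quantum = b"\x02" * 32
session_key = derive_session_key(classical, post_quantum)
```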


Sovereign by Design: Data Control in a Borderless World

Digital sovereignty has become a national regulatory priority. The EU has set the pace with GDPR and GAIA-X, prioritizing data residency and local infrastructure. China's Cybersecurity Law and Personal Information Protection Law enforce strict data localization. India's DPDP Act mandates local storage for sensitive data, aligning with its digital self-reliance vision through platforms such as Aadhaar. Russia's Federal Law No. 242-FZ requires citizen data to stay within the country for the sake of national security. Australia's Privacy Act focuses on data privacy, especially for health records, and Canada's PIPEDA encourages local storage for government data. Saudi Arabia's Personal Data Protection Law enforces localization for sensitive sectors, and Indonesia's Personal Data Protection Law covers all citizen-centric data. Singapore's PDPA balances privacy with global data flows, and Brazil's LGPD, mirroring the EU's GDPR, mandates the protection of privacy and fundamental rights of its citizens. ... Tech companies have little option but to comply with the growing demands of digital sovereignty. For example, Amazon Web Services has a digital sovereignty pledge, committing to "a comprehensive set of sovereignty controls and features in the cloud" without compromising performance.


Agentic AI Governance and Data Quality Management in Modern Solutions

Agentic AI governance is a framework that ensures artificial intelligence systems operate within defined ethical, legal, and technical boundaries. This governance is crucial for maintaining trust, compliance, and operational efficiency, especially in industries such as Banking, Financial Services, Insurance, and Capital Markets. In tandem with robust data quality management, Agentic AI governance can substantially enhance the reliability and effectiveness of AI-driven solutions. ... In industries such as Banking, Financial Services, Insurance, and Capital Markets, the importance of Agentic AI governance cannot be overstated. These sectors deal with vast amounts of sensitive data and require high levels of accuracy, security, and compliance. Here’s why Agentic AI governance is essential: Enhanced Trust: Proper governance fosters trust among stakeholders by ensuring AI systems are transparent, fair, and reliable. Regulatory Compliance: Adherence to legal and regulatory requirements helps avoid penalties and safeguard against legal risks. Operational Efficiency: By mitigating risks and ensuring accuracy, AI governance enhances overall operational efficiency and decision-making. Protection of Sensitive Data: Robust governance frameworks protect sensitive financial data from breaches and misuse, ensuring privacy and security. 


Fundamentals of Dimensional Data Modeling

Keeping the dimensions separate from facts makes it easier for analysts to slice-and-dice and filter data to align with the relevant context underlying a business problem. Data modelers organize these facts and descriptive dimensions into separate tables within the data warehouse, aligning them with the different subject areas and business processes. ... Dimensional modeling provides a basis for meaningful analytics gathered from a data warehouse for many reasons. Its processes standardize dimensions by presenting the data blueprint intuitively. Additionally, dimensional data modeling proves to be flexible as business needs evolve. The data warehouse accommodates change through slowly changing dimensions (SCD) as new business contexts emerge. ... Alignment in the design requires these processes, and data governance plays an integral role in getting there. Once the organization is on the same page about the dimensional model’s design, it chooses the best kind of implementation. Implementation choices include the star or snowflake schema around a fact. When organizations have multiple facts and dimensions, they use a cube. A dimensional model defines how to build a data warehouse architecture, or one of its components, through sound design and implementation.
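
As a rough illustration of the slowly changing dimension idea mentioned above, here is a minimal Type 2 SCD update on an in-memory dimension table. The column names and data are hypothetical; a real warehouse would do this in SQL or an ETL tool, but the expire-and-insert pattern is the same.

```python
from datetime import date

# A tiny in-memory "customer" dimension table. Column names are illustrative.
dim_customer = [
    {"customer_key": 1, "customer_id": "C100", "city": "Austin",
     "valid_from": date(2023, 1, 1), "valid_to": None, "is_current": True},
]

def scd2_update(dim, customer_id, new_city, as_of):
    """Apply a Type 2 slowly changing dimension update:
    expire the current row and insert a new current row."""
    for row in dim:
        if row["customer_id"] == customer_id and row["is_current"]:
            if row["city"] == new_city:
                return  # nothing changed
            row["valid_to"] = as_of
            row["is_current"] = False
    dim.append({
        "customer_key": max(r["customer_key"] for r in dim) + 1,
        "customer_id": customer_id, "city": new_city,
        "valid_from": as_of, "valid_to": None, "is_current": True,
    })

scd2_update(dim_customer, "C100", "Denver", date(2024, 6, 1))
# Facts recorded before the move keep pointing at customer_key 1,
# while new facts reference the new current row.
```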


IDE Extensions Pose Hidden Risks to Software Supply Chain

The latest research, published this week by application security vendor OX Security, reveals the hidden dangers of verified IDE extensions. While IDEs provide an array of development tools and features, there are a variety of third-party extensions that offer additional capabilities and are available in both official marketplaces and external websites. ... But OX researchers realized they could add functionality to verified extensions after the fact and still maintain the checkmark icon. After analyzing traffic for Visual Studio Code, the researchers found a server request to the marketplace that determines whether the extension is verified; they discovered they could modify the values featured in the server request and maintain the verification status even after creating malicious versions of the approved extensions. ... Using this attack technique, a threat actor could inject malicious code into verified and seemingly safe extensions that would maintain their verified status. "This can result in arbitrary code execution on developers' workstations without their knowledge, as the extension appears trusted," Siman-Tov Bustan and Zadok wrote. "Therefore, relying solely on the verified symbol of extensions is inadvisable." ... "It only takes one developer to download one of these extensions," he says. "And we're not talking about lateral movement. ..."
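
Since the verified badge alone can be tampered with, one complementary control is to pin the hash of the extension package that was actually vetted and installed. The sketch below is a simplified illustration using a hypothetical allowlist; it is not OX Security's method, just one way to avoid trusting the checkmark on its own.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: package file name -> SHA-256 digest captured when
# the extension was originally reviewed and approved.
APPROVED_DIGESTS = {
    "example-publisher.example-extension-1.2.3.vsix":
        "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
}

def verify_extension_package(path: str) -> bool:
    """Return True only if the downloaded package matches the vetted digest.

    This checks the bytes that were actually installed, instead of trusting
    the marketplace's verification badge on its own.
    """
    package = Path(path)
    expected = APPROVED_DIGESTS.get(package.name)
    if expected is None:
        return False
    actual = hashlib.sha256(package.read_bytes()).hexdigest()
    return actual == expected
```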


Business Case for Agentic AI SOC Analysts

A key driver behind the business case for agentic AI in the SOC is the acute shortage of skilled security analysts. The global cybersecurity workforce gap is now estimated at 4 million professionals, but the real bottleneck for most organizations is the scarcity of experienced analysts with the expertise to triage, investigate, and respond to modern threats. One ISC2 survey report from 2024 shows that 60% of organizations worldwide reported staff shortages significantly impacting their ability to secure their organizations, while another report from the World Economic Forum shows that just 15% of organizations believe they have the right people with the right skills to properly respond to a cybersecurity incident. Existing teams are stretched thin, often forced to prioritize which alerts to investigate and which to leave unaddressed. As previously mentioned, the flood of false positives in most SOCs means that even the most experienced analysts are too distracted by noise, increasing exposure to business-impacting incidents. Given these realities, simply adding more headcount is neither feasible nor sustainable. Instead, organizations must focus on maximizing the impact of their existing skilled staff. The AI SOC Analyst addresses this by automating routine Tier 1 tasks, filtering out noise, and surfacing the alerts that truly require human judgment.


Microservice Madness: Debunking Myths and Exposing Pitfalls

Microservices will reduce dependencies, the argument goes, because they force you to serialize your types into generic graph objects (read: JSON, XML, or something similar). This implies that you can just transform your classes into a generic graph object at the interface edges and accomplish the exact same thing. ... There are valid arguments for using message brokers, and there are valid arguments for decoupling dependencies. There are even valid points of scaling out horizontally by segregating functionality onto different servers. But if your argument in favor of using microservices is "because it eliminates dependencies," you're either crazy, corrupt through to the bone, or you have absolutely no idea what you're talking about (make your pick!) Because you can easily achieve the same amount of decoupling using Active Events and Slots, combined with a generic graph object, in-process, and it will execute 2 billion times faster in production than your "microservice solution" ... "Microservice Architecture" and "Service Oriented Architecture" (SOA) have probably caused more harm to our industry than the financial crisis in 2008 caused to our economy. And the funny thing is, the damage is ongoing because of people repeating mindless superstitious belief systems as if they were the truth.
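
Active Events and Slots refers to the author's own framework, but the underlying idea is easy to sketch generically: register handlers under string names and pass a plain dictionary as the "generic graph object", so caller and callee share no concrete types yet stay in-process. A minimal Python sketch of that idea (names are illustrative):

```python
# A minimal slot registry: handlers are looked up by a string name and receive
# a plain dict (the "generic graph object"), so caller and callee share no types.
SLOTS = {}

def slot(name):
    """Register a function under a string name."""
    def register(fn):
        SLOTS[name] = fn
        return fn
    return register

def signal(name, payload: dict) -> dict:
    """Invoke a slot by name, in-process -- no serialization, no network hop."""
    return SLOTS[name](payload)

@slot("orders.create")
def create_order(args: dict) -> dict:
    # The handler only ever sees generic data, never the caller's classes.
    return {"order_id": 42, "total": sum(item["price"] for item in args["items"])}

result = signal("orders.create", {"items": [{"sku": "A1", "price": 9.5}]})
```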


Sustainability and social responsibility

Direct-to-chip liquid cooling delivers impressive efficiency but doesn’t manage the entire thermal load. That’s why hybrid systems that combine liquid and traditional air cooling are increasingly popular. These systems offer the ability to fine-tune energy use, reduce reliance on mechanical cooling, and optimize server performance. HiRef offers advanced cooling distribution units (CDUs) that integrate liquid-cooled servers with heat exchangers and support infrastructure like dry coolers and dedicated high-temperature chillers. This integration ensures seamless heat management regardless of local climate or load fluctuations. ... With liquid cooling systems capable of operating at higher temperatures, facilities can increasingly rely on external conditions for passive cooling. This shift not only reduces electricity usage, but also allows for significant operational cost savings over time. But this sustainable future also depends on regulatory compliance, particularly in light of the recently updated F-Gas Regulation, which took effect in March 2024. The EU regulation aims to reduce emissions of fluorinated greenhouse gases to net-zero by 2050 by phasing out harmful high-GWP refrigerants like HFCs. “The F-Gas regulation isn’t directly tailored to the data center sector,” explains Poletto.


Infrastructure Operators Leaving Control Systems Exposed

Threat intelligence firm Censys has scanned the internet twice a month for the last six months, looking for a representative sample composed of four widely used types of ICS devices publicly exposed to the internet. Overall exposure slightly increased from January through June, the firm said Monday. One of the devices Censys scanned for is programmable logic controllers made by Israel-based Unitronics. The firm's Vision-series devices get used in numerous industries, including the water and wastewater sector. Researchers also counted publicly exposed devices built by Israel-based Orpak - a subsidiary of Gilbarco Veeder-Root - that run SiteOmat fuel station automation software. It also looked for devices made by Red Lion that are widely deployed for factory and process automation, as well as in oil and gas environments. It additionally probed for instances of a facilities automation software framework known as Niagara, made by Tridium. ... Report author Emily Austin, principal security researcher at Censys, said some fluctuation over time isn't unusual, given how "services on the internet are often ephemeral by nature." The greatest number of publicly exposed systems were in the United States, except for Unitronics devices, which are also widely used in Australia.


Healthcare CISOs must secure more than what’s regulated

Security must be embedded early and consistently throughout the development lifecycle, and that requires cross-functional alignment and leadership support. Without an understanding of how regulations translate into practical, actionable security controls, CISOs can struggle to achieve traction within fast-paced development environments. ... Security objectives should be mapped to these respective cycles—addressing tactical issues like vulnerability remediation during sprints, while using PI planning cycles to address larger technical and security debt. It’s also critical to position security as an enabler of business continuity and trust, rather than a blocker. Embedding security into existing workflows rather than bolting it on later builds goodwill and ensures more sustainable adoption. ... The key is intentional consolidation. We prioritize tools that serve multiple use cases and are extensible across both DevOps and security functions. For example, choosing solutions that can support infrastructure-as-code security scanning, cloud posture management, and application vulnerability detection within the same ecosystem. Standardizing tools across development and operations not only reduces overhead but also makes it easier to train teams, integrate workflows, and gain unified visibility into risk.

Daily Tech Digest - January 29, 2025


Quote for the day:

"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer


Evil Models and Exploits: When AI Becomes the Attacker

A more structured threat emerges with technologies like the Model Context Protocol (MCP). Originally introduced by Anthropic, MCP allows large language models (LLMs) to interact with host machines and local tools through standardized APIs. This enables LLMs to perform sophisticated operations by controlling local resources and services. While MCP is being embraced by developers for legitimate use cases, such as automation and integration, its darker implications are clear. An MCP-enabled system could orchestrate a range of malicious activities with ease. Think of it as an AI-powered operator capable of executing everything from reconnaissance to exploitation. ... The proliferation of AI models is both a blessing and a curse. Platforms like Hugging Face host over a million models, ranging from state-of-the-art neural networks to poorly designed or maliciously altered versions. Amid this abundance lies a growing concern: model provenance. Imagine a widely used model, fine-tuned by a seemingly reputable maintainer, turning out to be a tool of a state actor. Subtle modifications in the training data set or architecture could embed biases, vulnerabilities or backdoors. These “evil models” could then be distributed as trusted resources, only to be weaponized later. This risk underscores the need for robust mechanisms to verify the origins and integrity of AI models.


The tipping point for Generative AI in banking

Advancements in AI are allowing banks and other fintechs to embed the technology across their entire value chain. For example, TBC is leveraging AI to make 42% of all payment reminder calls to customers with loans that are 30 days or less overdue, and it is getting ready to launch other AI-enabled solutions. Customers normally cannot differentiate the AI calls powered by our tech from calls by humans, even as the AI calls are ten times more efficient for TBC’s bottom line compared with human operator calls. Klarna rolled out an AI assistant that handled 2.3 million conversations in its first month of operation, accounting for two-thirds of Klarna’s customer service chats, or the workload of 700 full-time agents, the company estimated. Deutsche Bank leverages generative AI for software creation and managing adverse media, while the European neobank Bunq applies it to detect fraud. Even smaller regional players, provided they have the right tech talent in place, will soon be able to deploy Gen AI at scale and incorporate the latest innovations into their operations. Next year is set to be a watershed year when this step change will create a clear division in the banking sector between AI-enabled champions and other players that will soon start lagging behind.


Want to be an effective cybersecurity leader? Learn to excel at change management

Security should never be an afterthought; the change management process shouldn’t be, either, says Michael Monday, a managing director in the security and privacy practice at global consulting firm Protiviti. “The change management process should start early, before changing out the technology or process,” he says. “There should be some messages going out to those who are going to be impacted letting them know, [otherwise] users will be surprised, they won’t know what’s going on, business will push back and there will be confusion.” ... “It’s often the CISO who now has to push these new things,” says Moyle, a former CISO, founding partner of the firm SecurityCurve, and a member of the Emerging Trends Working Group with the professional association ISACA. In his experience, Moyle says he has seen some workers more willing to change than others and learned to enlist those workers as allies to help him achieve his goals. ... When it comes to the people portion, she tells CISOs to “feed supporters and manage detractors.” As for process, “identify the key players for the security program and understand their perspective. There are influencers, budget holders, visionaries, and other stakeholders — each of which needs to be heard, and persuaded, especially if they’re a detractor.”


Preparing financial institutions for the next generation of cyber threats

Collaboration between financial institutions, government agencies, and other sectors is crucial in combating next-generation threats. This cooperative approach enhances the ability to detect, respond to, and mitigate sophisticated threats more effectively. Visa regularly works with international agencies of all sizes to bring cybercriminals to justice, partnering with law enforcement bodies including the US Department of Justice, FBI, Secret Service and Europol to help identify and apprehend fraudsters and other criminals. Visa uses its AI and ML capabilities to identify patterns of fraud and cybercrime and works with law enforcement to track down these bad actors. ... Financial institutions face distinct vulnerabilities compared to other industries, particularly due to their role in critical infrastructure and financial ecosystems. As high-value targets, they manage large sums of money and sensitive information, making them prime targets for cybercriminals. Their operations involve complex and interconnected systems, often including legacy technologies and numerous third-party vendors, which can create security gaps. Regulatory and compliance challenges add another layer of complexity, requiring stringent data protection measures to avoid hefty fines and maintain customer trust.


Looking back to look ahead: from Deepfakes to DeepSeek what lies ahead in 2025

Enterprises increasingly turned to AI-native security solutions, employing continuous multi-factor authentication and identity verification tools. These technologies monitor behavioral patterns or other physical-world signals to prove identity — innovations that can now help prevent incidents like the North Korean hiring scheme. However, hackers may now gain another inside route to enterprise security. The new breed of unregulated and offshore LLMs like DeepSeek creates new opportunities for attackers. In particular, using DeepSeek’s AI model gives attackers a powerful tool to better discover and take advantage of the cyber vulnerabilities of any organization. ... Deepfake technology continues to blur the lines between reality and fiction. ... Organizations must combat the increasing complexity of identity fraud, hackers, cybersecurity thieves, and data center poachers each year. In addition to all of the threats mentioned above, 2025 will bring an increasing need to address IoT and OT security issues, data protection in the third-party cloud and AI infrastructure, and the use of AI agents in the SOC. To help thwart this year’s cyber threats, CISOs and CTOs must work together, communicate often, and identify areas to minimize risks for deepfake fraud across identity, brand protection, and employee verification.


The Product Model and Agile

First, the product model is not new; it’s been out there for more than 20 years. So I have never argued that the product model is “the next new thing,” as I think that’s not true. Strong product companies have been following the product model for decades, but most companies around the world have only recently been exposed to this model, which is why so many people think of it as new. Second, while I know this irritates many people, today there are very different definitions of what it even means to be “Agile.” Some people consider SAFe as Agile. If that’s what you consider Agile, then I would say that Agile plays no part in the product model, as SAFe is pretty much the antithesis of the product model. This difference is often characterized today as “fake Agile” versus “real Agile.” And to be clear, if you’re running XP, or Kanban, or Scrum, or even none of the Agile ceremonies, yet you are consistently doing continuous deployment, then at least as far as I’m concerned, you’re running “real Agile.” Third, we should separate the principles of Agile from the various, mostly project management, processes that have been set up around those principles. ... Finally, it’s also important to point out that there is one Agile principle that might be good enough for custom or contract software work, but is not sufficient for commercial product work. This is the principle that “working software is the primary measure of progress.”


Next Generation Observability: An Architectural Introduction

It's always a challenge when creating architectural content, trying to capture real-world stories in a generic enough format to be useful without revealing any organization's confidential implementation details. We are basing these architectures on common customer adoption patterns. That's very different from most traditional marketing activities, which usually generate content for the sole purpose of positioning products as solutions. When you base the content on actual execution in solution delivery, you cut out the marketing chaff. This observability architecture provides us with a way to map a solution using open-source technologies, focusing on the integrations, structures, and interactions that have proven to work at scale. Where those might fail us at scale, we will provide other options. What's not included are the vendor stories that are normal in most marketing content, the kind that, when it gets down to implementation crunch time, might not fully deliver on their promises. Let's look at the next-generation observability architecture and explore its value in helping our solution designs. The first step is always to clearly define what we are focusing on when we talk about the next-generation observability architecture.


AI SOC Analysts: Propelling SecOps into the future

Traditional, manual SOC processes, already struggling to keep pace with existing threats, are far outpaced by automated, AI-powered attacks. Adversaries are using AI to launch sophisticated and targeted attacks, putting additional pressure on SOC teams. To defend effectively, organizations need AI solutions that can rapidly sort signals from noise and respond in real time. AI-generated phishing emails are now so realistic that users are more likely to engage with them, leaving analysts to untangle the aftermath—deciphering user actions and gauging exposure risk, often with incomplete context. ... The future of security operations lies in seamless collaboration between human expertise and AI efficiency. This synergy doesn't replace analysts but enhances their capabilities, enabling teams to operate more strategically. As threats grow in complexity and volume, this partnership ensures SOCs can stay agile, proactive, and effective. ... Triaging and investigating alerts has long been a manual, time-consuming process that strains SOC teams and increases risk. Prophet Security changes that. By leveraging cutting-edge AI, large language models, and advanced agent-based architectures, Prophet AI SOC Analyst automatically triages and investigates every alert with unmatched speed and accuracy.


Apple researchers reveal the secret sauce behind DeepSeek AI

The ability to use only some of the total parameters of a large language model and shut off the rest is an example of sparsity. That sparsity can have a major impact on how big or small the computing budget is for an AI model. AI researchers at Apple, in a report out last week, explain nicely how DeepSeek and similar approaches use sparsity to get better results for a given amount of computing power. Apple has no connection to DeepSeek, but it conducts its own AI research on a regular basis, so developments from outside companies such as DeepSeek naturally feed into its continued involvement in the AI research field. In the paper, titled "Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models," posted on the arXiv pre-print server, lead author Samir Abnar of Apple and other Apple researchers, along with collaborator Harshay Shah of MIT, studied how performance varied as they exploited sparsity by turning off parts of the neural net. ... Abnar and team ask whether there's an "optimal" level of sparsity in DeepSeek and similar models: for a given amount of computing power, is there an optimal number of those neural weights to turn on or off? It turns out you can quantify sparsity as the percentage of all the neural weights that are shut down, with that percentage approaching but never equaling 100% of the neural net being "inactive."
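
For intuition, sparsity in a mixture-of-experts model can be read straight off the routing: if only k of n experts fire for a token, the rest of that layer's weights sit idle. The toy example below uses made-up sizes and a random gate purely to show the arithmetic; it is not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts = 8                    # experts in the layer (illustrative)
top_k = 2                        # experts activated per token
params_per_expert = 1_000_000    # illustrative parameter count

def route(token_embedding: np.ndarray) -> np.ndarray:
    """Pick the indices of the top-k experts for one token."""
    gate_scores = rng.normal(size=n_experts)  # stand-in for a learned gate
    return np.argsort(gate_scores)[-top_k:]

active_experts = route(np.zeros(16))
active_params = top_k * params_per_expert
total_params = n_experts * params_per_expert

# Sparsity as described above: the share of weights left switched off.
sparsity = 1 - active_params / total_params
print(f"experts used: {sorted(active_experts.tolist())}, sparsity = {sparsity:.0%}")
```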


What Data Literacy Looks Like in 2025

“The foundation of data literacy lies in having a basic understanding of data. Non-technical people need to master the basic concepts, terms, and types of data, and understand how data is collected and processed,” says Li. “Meanwhile, data literacy should also include familiarity with data analysis tools. ... “Organizations should also avoid the misconception that fostering GenAI literacy alone will help developing GenAI solutions. For this, companies need even greater investments in expert AI talent -- data scientists, machine learning engineers, data engineers, developers and AI engineers,” says Carlsson. “While GenAI literacy empowers individuals across the workforce, building transformative AI capabilities requires skilled teams to design, fine-tune and operationalize these solutions. Companies must address both.” ... “Data literacy in 2025 can’t just be about enabling employees to work with data. It needs to be about empowering them to drive real business value,” says Jain. “That’s how organizations will turn data into dollars and ensure their investments in technology and training actually pay off.” ... “Organizations can embed data literacy into daily operations and culture by making data-driven thinking a core part of every role,” says Choudhary.

Daily Tech Digest - December 29, 2024

AI agents may lead the next wave of cyberattacks

“Many organizations run a pen test on maybe an annual basis, but a lot of things change within an application or website in a year,” he said. “Traditional cybersecurity organizations within companies have not been built for constant self-penetration testing.” Stytch is attempting to improve upon what McGinley-Stempel said are weaknesses in popular authentication schemes such as the Completely Automated Public Turing test to tell Computers and Humans Apart, or captcha, a type of challenge-response test used to determine whether a user interacting with a system is a human or a bot. Captcha codes may require users to decipher scrambled letters or count the number of traffic lights in an image. ... “If you’re just going to fight machine learning models on the attacking side with ML models on the defensive side, you’re going to get into some bad probabilistic situations that are not going to necessarily be effective,” he said. Probabilistic security provides protections based on probabilities but assumes that absolute security can’t be guaranteed. Stytch is working on deterministic approaches such as fingerprinting, which gathers detailed information about a device or software based on known characteristics and can provide a higher level of certainty that the user is who they say they are.
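
A bare-bones version of the deterministic idea: canonicalize a set of device attributes and hash them into a stable identifier that either matches a known device or doesn't. The attributes below are hypothetical, and production fingerprinting (including Stytch's) combines far more signals plus anti-spoofing checks.

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Hash a canonicalized set of device attributes into a stable identifier.

    Unlike a probabilistic ML score, the same attributes always map to the
    same fingerprint, which can then be matched against known devices.
    """
    canonical = json.dumps(attributes, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical attributes; real systems gather many more signals.
fp = device_fingerprint({
    "user_agent": "Mozilla/5.0 ...",
    "screen": "2560x1440",
    "timezone": "America/Los_Angeles",
    "fonts_hash": "a1b2c3",
})
```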


How businesses can ensure cloud uptime over the holidays

To ensure uptime during the holidays, best practice should include conducting pre-holiday stress tests to identify system vulnerabilities and configure autoscaling to handle demand surges. Experts also recommend simulating failures through chaos engineering to expose weaknesses. Redundancy across regions or availability zones is essential, as is a well-documented incident response plan – with clear escalation paths – “as this allows a team to address problems quickly even with reduced staffing,” says VimalRaj Sampathkumar, technical head – UKI at software company ManageEngine. It’s all about understanding the business requirements and what your demand is going to look like, says Luan Hughes, chief information officer (CIO) at tech provider Telent, as this will vary from industry to industry. “When we talk about preparedness, we talk a lot about critical incident management and what happens when big things occur, but I think you need to have an appreciation of what your triggers are,” she says. ... It’s also important to focus on your people as much as your systems, she adds, noting that it’s imperative to understand your management processes, out-of-hours and on-call rota and how you action support if problems do arise.


Tech worker movements grow as threats of RTO, AI loom

While layoffs likely remain the most extreme threat to tech workers broadly, a return-to-office (RTO) mandate can be just as jarring for remote tech workers who are either unable to comply or else unwilling to give up the better work-life balance that comes with no commute. Advocates told Ars that RTO policies have pushed workers to join movements, while limited research suggests that companies risk losing top talent by implementing RTO policies. ... Other companies mandating RTO faced similar backlash from workers, who continued to question the logic driving the decision. One February study showed that RTO mandates don't make companies any more valuable but do make workers more miserable. And last month, Brian Elliott, an executive advisor who wrote a book about the benefits of flexible teams, noted that only one in three executives thinks RTO had "even a slight positive impact on productivity." But not every company drew a hard line the way that Amazon did. For example, Dell gave workers a choice to remain remote and accept they can never be eligible for promotions, or mark themselves as hybrid. Workers who refused the RTO said they valued their free time and admitted to looking for other job opportunities.


Navigating the cloud and AI landscape with a practical approach

When it comes to AI or genAI, just like everyone else, we started with use cases that we can control. These include content generation, sentiment analysis and related areas. As we explored these use cases and gained understanding, we started to dabble in other areas. For example, we have an exciting use case for cleaning up our data that leverages genAI as well as non-generative machine learning to help us identify inaccurate product descriptions or incorrect classifications and then clean them up and regenerate accurate, standardized descriptions. ... While this might be driving internal productivity, you also must think of it this way: As a distributor, at any one time, we deal with millions of parts. Our supplier partners keep sending us their price books, spec sheets and product information every quarter. So, having a group of people trying to go through all that data to find inaccuracies is a daunting, almost impossible, task. But with AI and genAI capabilities, we can clean up any inaccuracies far more quickly than humans could. Sometimes within as little as 24 hours. That helps us improve our ability to convert and drive business through an improved experience for our customers.


When the System Fights Back: A Journey into Chaos Engineering

Enter chaos engineering — the art of deliberately creating disaster to build stronger systems. I’d read about Netflix’s Chaos Monkey, a tool designed to randomly kill servers in production, and I couldn’t help but admire the audacity. What if we could turn our system into a fighter — one that could take a punch and still come out swinging? ... Chaos engineering taught me more than I expected. It’s not just a technical exercise; it’s a mindset. It’s about questioning assumptions, confronting fears, and embracing failure as a teacher. We integrated chaos experiments into our CI/CD pipeline, turning them into regular tests. Post-mortems became celebrations of what we’d learned, rather than finger-pointing sessions. And our systems? Stronger than ever. But chaos engineering isn’t just about the tech. It’s about the culture you build around it. It’s about teaching your team to think like detectives, to dig into logs and metrics with curiosity instead of dread. It’s about laughing at the absurdity of breaking things on purpose and marveling at how much you learn when you do. So here’s my challenge to you: embrace the chaos. Whether you’re running a small app or a massive platform, the principles hold true. 
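
A taste of what a chaos experiment can look like in code: a small fault-injection wrapper that randomly adds latency or raises an error around a call, so CI runs can assert that retries, timeouts, and fallbacks actually hold up. Names and rates are illustrative, and this is a toy stand-in for tools like Chaos Monkey, not a replacement.

```python
import random
import time
from functools import wraps

def chaos(failure_rate=0.1, max_latency_s=2.0, enabled=True):
    """Randomly inject latency or failures around a call -- a toy chaos experiment.

    Run it against a staging deployment (or behind a feature flag) and assert
    that retries, timeouts, and fallbacks behave as designed.
    """
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if enabled:
                time.sleep(random.uniform(0, max_latency_s))   # latency injection
                if random.random() < failure_rate:
                    raise ConnectionError(f"chaos: simulated failure in {fn.__name__}")
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@chaos(failure_rate=0.2, max_latency_s=0.5)
def fetch_inventory(sku: str) -> int:
    return 17  # stand-in for a real downstream call
```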


Enhancing Your Company’s DevEx With CI/CD Strategies

CI/CD pipelines are key to an engineering organization’s efficiency, used by up to 75% of software companies, with developers interacting with them daily. However, these CI/CD pipelines are often far from being the ideal tool to work with. A recent survey found that only 14% of practitioners go from code to production in less than a day, when high-performing teams should be able to deploy multiple times a day. ... Merging, building, deploying and running are all classic steps of a CI/CD pipeline, often handled by multiple tools. Some organizations have SREs that handle these functions, but not all developers are that lucky! In that case, if a developer wants to push code where a pipeline isn’t set up — which happens increasingly often with the rise of microservices — they must assemble those rarely-used tools. However, this will disturb the flow state you wish your developers to remain in. ... Troubleshooting issues within a CI/CD pipeline can be challenging for developers due to a lack of visibility and information. These processes often operate as black boxes, running on servers that developers may not have direct access to, using software that is unfamiliar to them. Consequently, developers frequently rely on DevOps engineers — often understaffed — to diagnose problems, leading to slow feedback loops.


How to Architect Software for a Greener Future

Code efficiency is something that the platforms and the languages should make easy for us. They should do the work, because that's their area of expertise, and we should just write code. Yes, of course, write efficient code, but it's not a silver bullet. What about data center efficiency, then? Surely, if we just made our data center hyper efficient, we wouldn't have to worry. We could just leave this problem to someone else. ... It requires you to do some thinking. It also requires you to orchestrate this in some type of way. One way to do this is autoscaling. Let's talk about autoscaling. We have the same chart here, but we have added demand. Autoscaling is the simple concept that when you have more demand, you use more resources and you have a bigger box, a virtual machine, for example. The key here is that it's very easy to do the first part. We like to do this: "I think demand is going to go up, provision more, have more space. Yes, I feel safe. I feel secure now." Going the other way is a little scarier, but it's just as important when it comes to sustainability. Otherwise, we end up in the first scenario, where we are incorrectly sized for our resource use. Of course, this is a good tool to use if you have variability in demand.
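
A minimal sketch of that scale-up-and-down decision, similar in spirit to the proportional formula the Kubernetes HPA uses (desired = ceil(current * observed / target)). The thresholds and numbers here are illustrative.

```python
import math

def desired_replicas(current: int, cpu_utilization: float,
                     target: float = 0.6, min_replicas: int = 2,
                     max_replicas: int = 20) -> int:
    """Proportional autoscaling: size the fleet to the observed load.

    The same formula scales down when utilization drops, which is where the
    sustainability (and cost) savings come from.
    """
    if cpu_utilization <= 0:
        return min_replicas
    proposed = math.ceil(current * cpu_utilization / target)
    return max(min_replicas, min(max_replicas, proposed))

print(desired_replicas(current=4, cpu_utilization=0.9))   # scale up to 6
print(desired_replicas(current=6, cpu_utilization=0.25))  # scale down to 3
```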


Tech Trends 2025 shines a light on the automation paradox – R&D World

The surge in AI workloads has prompted enterprises to invest in powerful GPUs and next-generation chips, reinventing data centers as strategic resources. ... As organizations race to tap progressively more sophisticated AI systems, hardware decisions once again become integral to resilience, efficiency and growth, while leading to more capable “edge” deployments closer to humans and not just machines. As Tech Trends 2025 noted, “personal computers embedded with AI chips are poised to supercharge knowledge workers by providing access to offline AI models while future-proofing technology infrastructure, reducing cloud computing costs, and enhancing data privacy.” ... Data is the bedrock of effective AI, which is why “bad inputs lead to worse outputs—in other words, garbage in, garbage squared,” as Deloitte’s 2024 State of Generative AI in the Enterprise Q3 report observes. Fully 75% of surveyed organizations have stepped up data-life-cycle investments because of AI. Layer a well-designed data framework beneath AI, and you might see near-magic; rely on half-baked or biased data, and you risk chaos. As a case in point, Vancouver-based LIFT Impact Partners fine-tuned its AI assistants on focused, domain-specific data to help Canadian immigrants process paperwork—a far cry from scraping the open internet and hoping for the best.


What Happens to Relicensed Open Source Projects and Their Forks?

Several companies have relicensed their open source projects in the past few years, so the CHAOSS project decided to look at how an open source project’s organizational dynamics evolve after relicensing, both within the original project and its fork. Our research compares and contrasts data from three case studies of projects that were forked after relicensing: Elasticsearch with fork OpenSearch, Redis with fork Valkey, and Terraform with fork OpenTofu. These relicensed projects and their forks represent three scenarios that shed light on this topic in slightly different ways. ... OpenSearch was forked from Elasticsearch on April 12, 2021, under the Apache 2.0 license, by the Amazon Web Services (AWS) team so that it could continue to offer this service to its customers. OpenSearch was owned by Amazon until September 16, 2024, when it transferred the project to the Linux Foundation. ... OpenTofu was forked from Terraform on Aug. 25, 2023, by a group of users as a Linux Foundation project under the MPL 2.0. These users were starting from scratch with the codebase since no contributors to the OpenTofu repository had previously contributed to Terraform.


Setting up a Security Operations Center (SOC) for Small Businesses

In today's digital age, security is not optional for any business, irrespective of its size. Small businesses face increasing cyber threats too, making it essential to have robust security measures in place. A SOC is a dedicated team responsible for monitoring, detecting, and responding to cybersecurity incidents in real time. It acts as the frontline defense against cyber threats, helping to safeguard your business's data, reputation, and operations. By establishing a SOC, you can proactively address security risks and enhance your overall cybersecurity posture. The cost of setting up a SOC may be prohibitive for a small business, in which case the business may look at engaging managed service providers for all or part of the services. ... Establishing clear, well-defined processes is vital for the smooth functioning of your SOC. The NIST Cybersecurity Framework can be a good fit for most businesses, and one can define the processes that are essential and relevant considering the size, threat landscape and risk tolerance of the business. ... Continuous training and development are essential for keeping your SOC team prepared to handle evolving threats. Offer regular training sessions, certifications, and workshops to enhance their skills and knowledge.



Quote for the day:

"Hardships often prepare ordinary people for an extraordinary destiny." -- C.S. Lewis

Daily Tech Digest - December 28, 2024

Forcing the SOC to change its approach to detection

Make no mistake, we are not talking about the application of AI in the usual sense when it comes to threat detection. Up until now, AI has seen Large Language Models (LLMs) used to do little more than summarise findings for reporting purposes in incident response. Instead, we are referring to the application of AI in its truer and broader sense, i.e. via machine learning, agents, graphs, hypergraphs and other approaches – and these promise to make detection both more precise and intelligible. Hypergraphs give us the power to connect hundreds of observations together to form likely chains of events. ... The end result is that the security analyst is no longer perpetually caught in firefighting mode. Rather than having to respond to hundreds of alerts a day, the analyst can use the hypergraphs and AI to detect and string together long chains of alerts that share commonalities and in so doing gain a complete picture of the threat. Realistically, it’s expected that adopting such an approach should see alert volumes decline by up to 90 per cent. But it doesn’t end there. By applying machine learning to the chains of events it will be possible to prioritise response, identifying which threats require immediate triage.
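
A drastically simplified sketch of the chaining idea: treat each shared entity (host, user, and so on) as a hyperedge connecting alerts, and collapse connected alerts into one chain. A real hypergraph engine would weight, score, and order these links; the toy grouping below only shows why hundreds of alerts can reduce to a handful of incidents.

```python
from collections import defaultdict

alerts = [  # illustrative alert stream
    {"id": 1, "host": "web-01", "user": "svc_app", "technique": "T1059"},
    {"id": 2, "host": "web-01", "user": "admin",  "technique": "T1078"},
    {"id": 3, "host": "db-02",  "user": "admin",  "technique": "T1021"},
    {"id": 4, "host": "hr-07",  "user": "jsmith", "technique": "T1566"},
]

def chain_alerts(alerts, keys=("host", "user")):
    """Union alerts that share any key value into chains (connected components)."""
    parent = {a["id"]: a["id"] for a in alerts}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    shared = defaultdict(list)        # (key, value) -> alert ids sharing it
    for a in alerts:
        for k in keys:
            shared[(k, a[k])].append(a["id"])
    for ids in shared.values():
        for other in ids[1:]:
            union(ids[0], other)

    chains = defaultdict(list)
    for a in alerts:
        chains[find(a["id"])].append(a["id"])
    return list(chains.values())

print(chain_alerts(alerts))  # [[1, 2, 3], [4]] -- four alerts, two incidents
```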


Sole Source vs. Single Source Vendor Management

A sole source is a vendor that provides a specific product or service to your company. This vendor makes a specific widget or service that is custom tailored to your company’s needs. If there is an event at this sole source provider, your company can only wait until the event has been resolved. There is no other vendor that can produce your product or service quickly. They are the sole source, on a critical path to your operations. From an oversight and assessment perspective, this can be a difficult relationship in which to mitigate risks to your company. With sole source vendors, we as practitioners must do a deeper dive from a risk assessment perspective. From a vendor audit perspective, we need to go into more detail on how robust their business continuity, disaster recovery, and crisis management programs are. ... Single source providers are vendors you choose to do business with for a product or service even though other providers could supply the same thing. An example of a single source provider is a payment processing company: there are many to choose from, but you chose one specific company to do business with. Moving to a new single source provider can be a daunting task that involves a new RFP process, process integration, assessments of their business continuity program, etc.


Central Africa needs traction on financial inclusion to advance economic growth

Beyond the infrastructure, financial inclusion would see a leap forward in CEMAC if the right policies and platforms exist. “The number two thing is that you have to have the right policies in place which are going to establish what would constitute acceptable identity authentication for identity transactions. So, be it for onboarding or identity transactions, you have to have a policy. Saying that we’re going to do biometric authentication for every transaction, no matter what value it is and what context it is, doesn’t make any sense,” Atick holds. “You have to have a policy that is basically a risk-based policy. And we have lots of experience in that. Some countries started with their own policies, and over time, they started to understand it. Luckily, there is a lot of knowledge now that we can share on this point. This is why we’re doing the Financial Inclusion Symposium at the ID4Africa Annual General Meeting next year [in Addis Ababa], because these countries are going to share their knowledge and experiences.” “The symposium at the AGM will basically be on digital identity and finance. It’s going to focus on the stages of financial inclusion, and what are the risk-based policies countries must put in place to achieve the desired outcome, which is a low-cost, high-robustness and trustworthy ecosystem that enables anybody to enter the system and to conduct transactions securely.”


2025 Data Outlook: Strategic Insights for the Road Ahead

By embracing localised data processing, companies can turn compliance into an advantage, driving innovations such as data barter markets and sovereignty-specific data products. Data sovereignty isn’t merely a regulatory checkbox—it’s about Citizen Data Rights. With most consumer data being unstructured and often ignored, organisations can no longer afford complacency. Prioritising unstructured data management will be crucial as personal information needs to be identified, cataloged, and protected at a granular level from inception through intelligent, policy-based automation. ... Individuals are gaining more control over their personal information and expect transparency, control, and digital trust from organisations. As a result, businesses will shift to self-service data management, enabling data stewards across departments to actively participate in privacy practices. This evolution moves privacy management out of IT silos, embedding it into daily operations across the organisation. Organisations that embrace this change will implement a “Data Democracy by Design” approach, incorporating self-service privacy dashboards, personalised data management workflows, and Role-Based Access Control (RBAC) for data stewards. 


Defining & Defying Cybersecurity Staff Burnout

According to the van Dam article, burnout happens when an employee buries their experience of chronic stress for years. The people who burn out are often formerly great performers, perfectionists who exhibit perseverance. But if the person perseveres in a situation where they don't have control, they can experience the kind of morale-killing stress that, left unaddressed for months and years, leads to burnout. In such cases, "perseverance is not adaptive anymore and individuals should shift to other coping strategies like asking for social support and reflecting on one's situation and feelings," the article read. ... Employees sometimes scoff at the wellness programs companies put out as an attempt to keep people healthy. "Most 'corporate' solutions — use this app! attend this webinar! — felt juvenile and unhelpful," Eden says. And it does seem like many solutions fall into the same quick-fix category as home improvement hacks or dump dinner recipes. Christina Maslach's scholarly work attributed work stress to six main sources: workload, values, reward, control, fairness, and community. An even quicker assessment is promised by the Matches Measure from Cindy Muir Zapata. 


Revolutionizing Cloud Security for Future Threats

Is it possible that embracing Non-Human Identities can help us bridge the resource gap in cybersecurity? The answer is a definite yes. The cybersecurity field is chronically understaffed and for firms to successfully safeguard their digital assets, they must be equipped to handle an infinite number of parallel tasks. This demands a new breed of solutions such as NHIs and Secrets Security Management that offer automation at a scale hitherto unseen. NHIs have the potential to take over tedious tasks like secret rotation, identity lifecycle management, and security compliance management. By automating these tasks, NHIs free up the cybersecurity workforce to concentrate on more strategic initiatives, thereby improving the overall efficiency of your security operations. Moreover, through AI-enhanced NHI Management platforms, we can provide better insights into system vulnerabilities and usage patterns, considerably improving context-aware security. Can the concept of Non-Human Identities extend its relevance beyond the IT sector? ... From healthcare institutions safeguarding sensitive patient data, financial services firms securing transactional data, travel companies protecting customer data, to DevOps teams looking to maintain the integrity of their codebases, the strategic relevance of NHIs is widespread.
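
To make the secret-rotation example concrete, here is the rotate-then-revoke pattern an NHI platform might automate, written against a hypothetical in-memory store; the interface and grace window are assumptions, not any specific vendor's API.

```python
import secrets
from datetime import datetime, timedelta, timezone

class SecretStore:
    """Stand-in for a real vault; this interface is hypothetical."""
    def __init__(self):
        self._data = {}
    def put(self, name, value, expires_at):
        self._data[name] = {"value": value, "expires_at": expires_at}
    def get(self, name):
        return self._data.get(name)

def rotate_secret(store: SecretStore, name: str, grace_minutes: int = 30) -> None:
    """Issue a new credential and keep the old one briefly valid.

    The overlap window lets every workload pick up the new value before the
    old one expires -- the step human operators most often get wrong.
    """
    old = store.get(name)
    now = datetime.now(timezone.utc)
    store.put(name, secrets.token_urlsafe(32), expires_at=now + timedelta(days=30))
    if old is not None:
        # Shorten the old secret's lifetime instead of deleting it outright.
        store.put(f"{name}:previous", old["value"],
                  expires_at=now + timedelta(minutes=grace_minutes))

store = SecretStore()
rotate_secret(store, "payments-api-key")
```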


Digital Transformation: Making Information Work for You

Digital transformation is changing the organization from one state to another through the use of electronic devices that leverage information. Oftentimes, this entails process improvement and process reengineering to convert business interactions from human-to-human to human-to-computer-to-human. By introducing the element of the computer into human-to-human transactions, there is a digital breadcrumb left behind. This digital record of the transaction is important in making digital transformations successful and is the key to how analytics can enable more successful digital transformations. In a human-to-human interaction, information is transferred from one party to another, but it generally stops there. With the introduction of the digital element in the middle, the data is captured, stored, and available for analysis, dissemination, and amplification. This is where data analytics shines. If an organization stops with data storage, they are missing the lion’s share of the potential value of a digital transformation initiative. Organizations that focus only on collecting data from all their transactions and sinking this into a data lake often find that their efforts are in vain. They end up with a data swamp where data goes to die and never fully realize its potential value. 


Secure and Simplify SD-Branch Networks

The traditional WAN relies on expensive MPLS connectivity and a hub-and-spoke architecture that backhauls all traffic through the corporate data centre for centralized security checks. This approach creates bottlenecks that interfere with network performance and reliability. In addition to users demanding fast and reliable access to resources, IoT applications need reliable WAN connections to leverage cloud-based management and big data repositories. ... To reduce complexity and appliance sprawl, SD-Branch consolidates networking and security capabilities into a single solution that provides seamless protection of distributed environments. It covers all critical branch edges, from the WAN edge to the branch access layer to a full spectrum of endpoint devices.


Breaking up is hard to do: Chunking in RAG applications

The most basic is to chunk text into fixed sizes. This works for fairly homogenous datasets that use content of similar formats and sizes, like news articles or blog posts. It’s the cheapest method in terms of the amount of compute you’ll need, but it doesn’t take into account the context of the content that you’re chunking. That might not matter for your use case, but it might end up mattering a lot. You could also use random chunk sizes if your dataset is a non-homogenous collection of multiple document types. This approach can potentially capture a wider variety of semantic contexts and topics without relying on the conventions of any given document type. Random chunks are a gamble, though, as you might end up breaking content across sentences and paragraphs, leading to meaningless chunks of text. For both of these types, you can apply the chunking method over sliding windows; that is, instead of starting new chunks at the end of the previous chunk, new chunks overlap the content of the previous one and contain part of it. This can better capture the context around the edges of each chunk and increase the semantic relevance of your overall system. The tradeoff is greater storage requirements and the potential to store redundant information.
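
A minimal sketch of fixed-size chunking with an optional sliding-window overlap, the two approaches described above. Sizes are in characters for simplicity; real pipelines usually count tokens.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 0) -> list[str]:
    """Split text into fixed-size chunks; a positive overlap gives sliding windows.

    overlap=0 is plain fixed-size chunking; overlap>0 trades extra storage
    (and some redundancy) for better context at chunk boundaries.
    """
    if not 0 <= overlap < chunk_size:
        raise ValueError("overlap must be non-negative and smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

document = "lorem ipsum dolor sit amet " * 200   # stand-in for a loaded article
fixed = chunk_text(document, chunk_size=500)                  # fixed-size chunks
windowed = chunk_text(document, chunk_size=500, overlap=100)  # sliding windows
```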


What is quantum supremacy?

A definitive achievement of quantum supremacy will require either a significant reduction in quantum hardware's error rates or a better theoretical understanding of what kind of noise classical approaches can exploit to help simulate the behavior of error-prone quantum computers, Fefferman said. But this back-and-forth between quantum and classical approaches is helping push the field forwards, he added, creating a virtuous cycle that is helping quantum hardware developers understand where they need to improve. "Because of this cycle, the experiments have improved dramatically," Fefferman said. "And as a theorist coming up with these classical algorithms, I hope that eventually, I'm not able to do it anymore." While it's uncertain whether quantum supremacy has already been reached, it's clear that we are on the cusp of it, Benjamin said. But it's important to remember that reaching this milestone would be a largely academic and symbolic achievement, as the problems being tackled are of no practical use. "We're at that threshold, roughly speaking, but it isn't an interesting threshold, because on the other side of it, nothing magic happens," Benjamin said. ... That's why many in the field are refocusing their efforts on a new goal: demonstrating "quantum utility," or the ability to show a significant speedup over classical computers on a practically useful problem.


Shift left security — Good intentions, poor execution, and ways to fix it

One of the first steps is changing the way security is integrated into development. Instead of focusing on a “gotcha”, after-the-fact approach, we need security to assist us as early as possible in the process: as we write the code. By guiding us as we’re still in ‘work-in-progress’ mode with our code, security can adopt a positive coaching and helping stance, nudging us to correct issues before they become problems and go clutter our backlog. ... The security tools we use need to catch vulnerabilities early enough so that nobody circles back to fix boomerang issues later. Very much in line with my previous point, detecting and fixing vulnerabilities as we code saves time and preserves focus. This also reduces the back-and-forth in peer reviews, making the entire process smoother and more efficient. By embedding security more deeply into the development workflow, we can address security issues without disrupting productivity. ... When it comes to security training, we need a more focused approach. Developers don’t need to become experts in every aspect of code security, but we do need to be equipped with the knowledge that’s directly relevant to the work we’re doing, when we’re doing it — as we code. Instead of broad, one-size-fits-all training programs, let’s focus on addressing specific knowledge gaps we personally have. 



Quote for the day:

“Whenever you see a successful person, you only see the public glories, never the private sacrifices to reach them.” -- Vaibhav Shah

Daily Tech Digest - October 03, 2024

Why Staging Is a Bottleneck for Microservice Testing

Multiple teams often wait for their turn to test features in staging. This creates bottlenecks. The pressure on teams to share resources can severely delay releases, as they fight for access to the staging environment. Developers who attempt to spin up the entire stack on their local machines for testing run into similar issues. As distributed systems engineer Cindy Sridharan notes, “I now believe trying to spin up the full stack on developer laptops is fundamentally the wrong mindset to begin with, be it at startups or at bigger companies.” The complexities of microservices make it impractical to replicate entire environments locally, just as it’s difficult to maintain shared staging environments at scale. ... From a release process perspective, the delays caused by a fragile staging environment lead to slower shipping of features and patches. When teams spend more time fixing staging issues than building new features, product development slows down. In fast-moving industries, this can be a major competitive disadvantage. If your release process is painful, you ship less often, and the cost of mistakes in production is higher. 


Misconfiguration Madness: Thwarting Common Vulnerabilities in the Financial Sector

Financial institutions require legions of skilled security personnel in order to overcome the many challenges facing their industry. Developers are an especially important part of that elite cadre of defenders for a variety of reasons. First and foremost, security-aware developers can write secure code for new applications, which can thwart attackers by denying them a foothold in the first place. If there are no vulnerabilities to exploit, an attacker won't be able to operate, at least not very easily. Developers with the right training can also help to support both modern and legacy applications by examining the existing code that makes up some of the primary vectors used to attack financial institutions. That includes cloud misconfigurations, lax API security, and the many legacy bugs found in applications written in COBOL and other aging computer languages. However, the task of nurturing and maintaining security-aware developers in the financial sector won’t happen on its own. It requires precise, immersive training programs that are highly customizable and matched to the specific complex environment that a financial services institution is using.


3 things to get right with data management for gen AI projects

The first is a series of processes — collecting, filtering, and categorizing data — that may take several months for KM or RAG models. Structured data is relatively easy, but the unstructured data, while much more difficult to categorize, is the most valuable. “You need to know what the data is, because it’s only after you define it and put it in a taxonomy that you can do anything with it,” says Shannon. ...  “We started with generic AI usage guidelines, just to make sure we had some guardrails around our experiments,” she says. “We’ve been doing data governance for a long time, but when you start talking about automated data pipelines, it quickly becomes clear you need to rethink the older models of data governance that were built more around structured data.” Compliance is another important area of focus. As a global enterprise thinking about scaling some of their AI projects, Harvard keeps an eye on evolving regulatory environments in different parts of the world. It has an active working group dedicated to following and understanding the EU AI Act, and before their use cases go into production, they run through a process to make sure all compliance obligations are satisfied.


Fundamentals of Data Preparation

Data preparation is intended to improve the quality of the information that ML and other information systems use as the foundation of their analyses and predictions. Higher-quality data leads to greater accuracy in the analyses the systems generate in support of business decision-makers. This is the textbook explanation of the link between data preparation and business outcomes, but in practice, the connection is less linear. ... Careful data preparation adds value to the data itself, as well as to the information systems that rely on the data. It goes beyond checking for accuracy and relevance and removing errors and extraneous elements. The data-prep stage gives organizations the opportunity to supplement the information by adding geolocation, sentiment analysis, topic modeling, and other aspects. Building an effective data preparation pipeline begins long before any data has been collected. As with most projects, the preparation starts at the end: identifying the organization’s goals and objectives, and determining the data and tools required to achieve those goals. ... Appropriate data preparation is the key to the successful development and implementation of AI systems in large part because AI amplifies existing data quality problems. 
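
As a rough sketch of what such enrichment can look like in practice, the Python below tags each record with a sentiment label and a word count before it enters the pipeline; the keyword lists and field names are invented for illustration, and a real system would use a proper NLP model rather than a toy lexicon.

# Toy enrichment step: validate each record and add contextual fields.
# The keyword sets stand in for a real sentiment model.
POSITIVE = {"growth", "improved", "success"}
NEGATIVE = {"loss", "breach", "failure"}

def enrich_record(record: dict) -> dict:
    text = record.get("text", "").lower()
    words = set(text.split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {**record, "sentiment": sentiment, "word_count": len(text.split())}

records = [{"id": 1, "text": "Quarterly growth improved across regions"}]
prepared = [enrich_record(r) for r in records if r.get("text")]  # drop empty rows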


How to Rein in Cybersecurity Tool Sprawl

Security tool sprawl happens for many different reasons. Adding new tools and new vendors as new problems arise without evaluating the tools already in place is often how sprawl starts. The sheer glut of tools available in the market can make it easy for security teams to embrace the latest and greatest solutions. “[CISOs] look for the newest, the latest and the greatest. They're the first adopter type,” says Reiter. A lack of communication between departments and teams in an enterprise can also contribute. “There's the challenge of teams not necessarily knowing their day-to-day functions of other team,” says Mar-Tang. Security leaders can start to wrap their heads around the problem of sprawl by running an audit of the security tools in place. Which teams use which tools? How often are the tools used? How many vendors supply those tools? What are the lengths of the vendor contracts? Breaking down communication barriers within an enterprise will be a necessary part of answering questions like these. “Talk to the … security and IT risk side of your house, the people who clean up the mess. You have an advocate and a partner to be able to find out where you have holes and where you have sprawl,” Kris Bondi, CEO and co-founder at endpoint security company Mimoto, recommends.


The Promise and Perils of Generative AI in Software Testing

The journey from human automation tester to AI test automation engineer is transformative. Traditionally, transitioning to test automation required significant time and resources, including learning to code and understanding automation frameworks. AI removes these barriers and accelerates development cycles, dramatically reducing time-to-market and improving accuracy, all while decreasing the level of admin tasks for software testers. AI-powered tools can interpret test scenarios written in plain language, automatically generate the necessary code for test automation, and execute tests across various platforms and languages. This dramatically reduces the enablement time, allowing QA professionals to focus on strategic tasks instead of coding complexities. ... As GenAI becomes increasingly integrated into software development life cycles, understanding its capabilities and limitations is paramount. By effectively managing these dynamics, development teams can leverage GenAI’s potential to enhance their testing practices while ensuring the integrity of their software products.
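
As a purely hypothetical illustration of the output such tools aim for, the snippet below shows the kind of test an AI assistant might generate from the plain-language scenario "a user with the wrong password cannot log in"; authenticate() is a toy stand-in for real application code, not an actual framework API.

# Hypothetical example of AI-generated tests from a plain-language scenario.
# authenticate() is a stand-in so the example runs end to end (e.g., with pytest).
def authenticate(username: str, password: str) -> bool:
    return username == "alice" and password == "correct-horse"

def test_login_rejects_wrong_password():
    assert authenticate("alice", "wrong-password") is False

def test_login_accepts_valid_credentials():
    assert authenticate("alice", "correct-horse") is True

The point is that the QA professional describes intent in plain language while the tool handles the boilerplate.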


Near-'perfctl' Fileless Malware Targets Millions of Linux Servers

The malware looks for vulnerabilities and misconfigurations to exploit in order to gain initial access. To date, Aqua Nautilus reports, the malware has likely targeted millions of Linux servers, and compromised thousands. Any Linux server connected to the Internet is in its sights, so any server that hasn't already encountered perfctl is at risk. ... By tracking its infections, researchers identified three Web servers belonging to the threat actor: two that were previously compromised in prior attacks, and a third likely set up and owned by the threat actor. One of the compromised servers was used as the primary base for malware deployment. ... To further hide its presence and malicious activities from security software and researcher scrutiny, it deploys a few Linux utilities repurposed into user-level rootkits, as well as one kernel-level rootkit. The kernel rootkit is especially powerful, hooking into various system functions to modify their functionality, effectively manipulating network traffic, undermining Pluggable Authentication Modules (PAM), establishing persistence even after primary payloads are detected and removed, or stealthily exfiltrating data. 


Three hard truths hindering cloud-native detection and response

Most SOC teams either lack the proper tooling or have so many cloud security point tools that the management burden is untenable. Cloud attacks happen way too fast for SOC teams to flip from one dashboard to another to determine if an application anomaly has implications at the infrastructure level. Given the interconnectedness of cloud environments and the accelerated pace at which cloud attacks unfold, if SOC teams can’t see everything in one place, they’ll never be able to connect the dots in time to respond. More importantly, because everything in the cloud happens at warp speed, we humans need to act faster, which can be nerve wracking and increase the chance of accidentally breaking something. While the latter is a legitimate concern, if we want to stay ahead of our adversaries, we need to get comfortable with the accelerated pace of the cloud. While there are no quick fixes to these problems, the situation is far from hopeless. Cloud security teams are getting smarter and more experienced, and cloud security toolsets are maturing in lockstep with cloud adoption. And I, like many in the security community, am optimistic that AI can help deal with some of these challenges.


How to Fight ‘Technostress’ at Work

Digital stressors don’t occur in isolation, according to the researchers, which necessitates a multifaceted approach. “To address the problem, you can’t just address the overload and invasion,” Thatcher said. “You have to be more strategic.” “Let’s say I’m a manager, and I implement a policy that says no email on weekends because everybody’s stressed out,” Thatcher said. “But everyone stays stressed out. That’s because I may have gotten rid of techno-invasion—that feeling that work is intruding on my life—but on Monday, when I open my email, I still feel really overloaded because there are 400 emails.” It’s crucial for managers to assess the various digital stressors affecting their employees and then target them as a combination, according to the researchers. That means to address the above problem, Thatcher said, “you can’t just address invasion. You can’t just address overload. You have to address them together,” he said. ... Another tool for managers is empowering employees, according to the study. “As a manager, it may feel really dangerous to say, ‘You can structure when and where and how you do work.’ 


Fix for BGP routing insecurity ‘plagued by software vulnerabilities’ of its own, researchers find

Under BGP, there is no way to authenticate routing changes. The arrival of RPKI just over a decade ago was intended to fix that, using a digital record called a Route Origin Authorization (ROA) that identifies an ISP as having authority over specific IP infrastructure. Route origin validation (ROV) is the process a router undergoes to check that an advertised route is authorized by the correct ROA certificate. In principle, this makes it impossible for a rogue router to maliciously claim a route it does not have any right to. RPKI is the public key infrastructure that glues this all together, security-wise. The catch is that, for this system to work, RPKI needs a lot more ISPs to adopt it, something which until recently has happened only very slowly. ... “Since all popular RPKI software implementations are open source and accept code contributions by the community, the threat of intentional backdoors is substantial in the context of RPKI,” they explained. A software supply chain that creates such vital software enabling internet routing should be subject to a greater degree of testing and validation, they argue.
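
To show what route origin validation boils down to conceptually, here is a simplified Python sketch; real validators work from cryptographically signed RPKI data and handle many ROAs per prefix, so the hard-coded ROA below is purely illustrative.

# Simplified ROV logic: a route is "valid" if a ROA covers its prefix (within
# the ROA's max length) and the origin AS matches, "invalid" if covered but
# mismatched, and "not found" if no ROA covers it. The ROA data is made up.
from ipaddress import ip_network

ROAS = [{"prefix": ip_network("203.0.113.0/24"), "max_length": 24, "origin_as": 64500}]

def validate_route(prefix: str, origin_as: int) -> str:
    announced = ip_network(prefix)
    covered = False
    for roa in ROAS:
        if announced.subnet_of(roa["prefix"]) and announced.prefixlen <= roa["max_length"]:
            covered = True
            if origin_as == roa["origin_as"]:
                return "valid"
    return "invalid" if covered else "not found"

print(validate_route("203.0.113.0/24", 64500))  # valid
print(validate_route("203.0.113.0/24", 64999))  # invalid -- a possible hijack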



Quote for the day:

"You may have to fight a battle more than once to win it." -- Margaret Thatcher

Daily Tech Digest - August 02, 2024

Small language models and open source are transforming AI

From an enterprise perspective, the advantages of embracing SLMs are multifaceted. These models allow businesses to scale their AI deployments cost-effectively, an essential consideration for startups and midsize enterprises that need to maximize their technology investments. Enhanced agility becomes a tangible benefit as shorter deployment times and easier customization align AI capabilities more closely with evolving business needs. Data privacy and sovereignty (perennial concerns in the enterprise world) are better addressed with SLMs hosted on-premises or within private clouds. This approach satisfies regulatory and compliance requirements while maintaining robust security. Additionally, the reduced energy consumption of SLMs supports corporate sustainability initiatives. That’s still important, right? The pivot to smaller language models, bolstered by open source innovation, reshapes how enterprises approach AI. By mitigating the cost and complexity of large generative AI systems, SLMs offer a viable, efficient, and customizable path forward. This shift enhances the business value of AI investments and supports sustainable and scalable growth. 


The Impact and Future of AI in Financial Services

Winston noted that AI systems require vast amounts of data, which raises concerns about data privacy and security. “Financial institutions must ensure compliance with regulations such as GDPR [General Data Protection Regulation] and CCPA [California Consumer Privacy Act] while safeguarding sensitive customer information,” he explained. Simply using general GenAI tools as a quick fix isn’t enough. “Financial services will need a solution built specifically for the industry and leverages deep data related to how the entire industry works,” said Kevin Green, COO of Hapax, a banking AI platform. “It’s easy for general GenAI tools to identify what changes are made to regulations, but if it does not understand how those changes impact an institution, it’s simply just an alert.” According to Green, the next wave of GenAI technologies should go beyond mere alerts; they must explain how regulatory changes affect institutions and outline actionable steps. As AI technology evolves, several emerging technologies could significantly transform the financial services industry. Ludwig pointed out that quantum computers, which can solve complex problems much faster than traditional computers, might revolutionize risk management, portfolio optimization, and fraud detection. 


Is Your Data AI-Ready?

Without proper data contextualization, AI systems may make incorrect assumptions or draw erroneous conclusions, undermining the reliability and value of the insights they generate. To avoid such pitfalls, focus on categorizing and classifying your data with the necessary metadata, such as timestamps, location information, document classification, and other relevant contextual details. This will enable your AI to properly understand the context of the data and generate meaningful, actionable insights. Additionally, integrating complementary data can significantly enhance the information’s value, depth, and usefulness for your AI systems to analyze. ... Although older data may be necessary for compliance or historical purposes, it may not be relevant or useful for your AI initiatives. Outdated information can burden your storage systems and compromise the validity of the AI-generated insights. Imagine an AI system analyzing a decade-old market report to inform critical business decisions—the insights would likely be outdated and misleading. That’s why establishing and implementing robust retention and archiving policies as part of your information life cycle management is critical. 
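
A minimal sketch of how such a retention policy might be enforced in code is shown below; the three-year cutoff and the created_at field name are assumptions for illustration, not recommendations.

# Toy retention check: documents older than the cutoff are routed to archive
# instead of being fed to the AI pipeline. Cutoff and field names are illustrative.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=3 * 365)

def route_document(doc: dict) -> str:
    created = datetime.fromisoformat(doc["created_at"])
    age = datetime.now(timezone.utc) - created
    return "archive" if age > RETENTION else "ai_pipeline"

doc = {"id": "rpt-42", "created_at": "2015-06-01T00:00:00+00:00", "classification": "market-report"}
print(route_document(doc))  # archive -- a decade-old report stays out of the model's inputs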


Generative AI: Good Or Bad News For Software

There are plenty of examples of breaches that started thanks to someone copying over code and not checking it thoroughly. Think back to the Heartbleed exploit, a security bug in a popular library that led to the exposure of hundreds of thousands of websites, servers and other devices which used the code. Because the library was so widely used, the thought was, of course, someone had checked it for vulnerabilities. But instead, the vulnerability persisted for years, quietly used by attackers to exploit vulnerable systems. This is the darker side to ChatGPT; attackers also have access to the tool. While OpenAI has built some safeguards to prevent it from answering questions regarding problematic subjects like code injection, the CyberArk Labs team has already uncovered some ways in which the tool could be used for malicious reasons. Breaches have occurred due to blindly incorporating code without thorough verification. Attackers can exploit ChatGPT, using its capabilities to create polymorphic malware or produce malicious code more rapidly. Even with safeguards, developers must exercise caution. ChatGPT generates the code, but developers are accountable for it.


FinOps Can Turn IT Cost Centers Into a Value Driver

Once FinOps has been successfully implemented within an organization, teams can begin to automate the practice while building a culture of continuous improvement. Leaders can now better forecast and plan, leading to more precise budgeting. Additionally, GenAI can provide unique insights into seasonality. For example, if resource demand spikes every three days or at other unpredictable frequencies, AI can help you detect these patterns so you can optimize by scaling up when required and back down to save costs during lulls in demand. This kind of pattern detection is difficult without AI. It all goes back to the concept of understanding value and total cost. With FinOps, IT leaders can demonstrate exactly what they spend on and why. They can point out how the budget for software licenses and labor is directly tied to IT operations outcomes, translating into greater resiliency and higher customer satisfaction. They can prove that they’ve spent money responsibly and that they should retain that level of funding because it makes the business run better. FinOps and AI advancements allow businesses to do more and go further than they ever could. Almost 65% of CFOs are integrating AI into their strategy.
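
As a rough illustration of the kind of pattern detection described here, the snippet below uses plain autocorrelation to find a recurring spike in a synthetic daily cost series; real FinOps and AI tooling uses far richer models, so treat this only as a sketch of the idea.

# Find the dominant period in a daily usage series with simple autocorrelation.
def autocorrelation(series: list[float], lag: int) -> float:
    n = len(series)
    mean = sum(series) / n
    num = sum((series[i] - mean) * (series[i - lag] - mean) for i in range(lag, n))
    den = sum((x - mean) ** 2 for x in series)
    return num / den if den else 0.0

usage = [10, 10, 50] * 10  # synthetic daily cost with a spike every third day
best_lag = max(range(1, 8), key=lambda lag: autocorrelation(usage, lag))
print(best_lag)  # 3 -> scale up ahead of the recurring spike, scale back down after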


The convergence of human and machine in transforming business

To achieve a true collaboration between humans and machines, it is crucial to establish a clear understanding and definition of their respective roles. By emphasizing the unique strengths of AI while strategically addressing its limitations, organizations can create a synergy that maximizes the potential of both human expertise and machine capabilities. AI excels in data structuring, capable of transforming complex, unstructured information into easily searchable and accessible content. This makes it an invaluable tool for sorting through vast online datasets, including datasets, news articles, academic reports and other forms of digital content, extracting meaningful insights. Moreover, AI systems operate tirelessly, functioning 24/7 without the need for breaks or downtime. This "always on" nature ensures a constant state of productivity and responsiveness, enabling organizations to keep pace with the rapidly changing market. Another key strength of AI lies in its scalability. As data volumes continue to grow and the complexity of tasks increases, AI can be integrated into existing workflows and systems, allowing businesses to process and analyze vast amounts of information efficiently.


The Crucial Role of Real-time Analytics in Modern SOCs

Security analysts often spend considerable time manually correlating diverse data sources to understand the context of specific alerts. This process leads to inefficiency, as they must scan various sources, determine if an alert is genuine or a false positive, assess its priority, and evaluate its potential impact on the organization. This tedious and lengthy process can lead to analyst burnout, negatively impacting SOC performance. ... Traditional Security Information and Event Management (SIEM) systems in SOCs struggle to effectively track and analyze sophisticated cybersecurity threats. These legacy systems often burden SOC teams with false positives and negatives. Their generalized approach to analytics can create vulnerabilities and strain SOC resources, requiring additional staff to address even a single false positive. In contrast, real-time analytics or analytics-driven SIEMs offer superior context for security alerts, sending only genuine threats to security teams. ... Staying ahead of potential threats is crucial for organizations in today's landscape. Real-time threat intelligence plays a vital role in proactively detecting threats. Through continuous monitoring of various threat vectors, it can identify and stop suspicious activities or anomalies before they cause harm.
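
As a toy illustration of the real-time idea, the sketch below scores each new event count against a rolling baseline and only surfaces outliers; the window size and threshold are arbitrary assumptions, and production SIEMs correlate far more signal than a single counter.

# Toy streaming check: flag a metric (e.g., failed logins per minute) when it
# deviates sharply from its recent baseline, instead of alerting on everything.
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        is_anomaly = False
        if len(self.history) >= 5:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and (value - mu) / sigma > self.threshold:
                is_anomaly = True
        self.history.append(value)
        return is_anomaly

detector = RollingAnomalyDetector()
for count in [4, 5, 6, 5, 4, 5, 6, 5, 60]:
    if detector.observe(count):
        print("suspicious spike:", count)  # only the 60 is escalated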


Architecting with AI

Every project is different, and understanding the differences between projects is all about context. Do we have documentation of thousands of corporate IT projects that we would need to train an AI to understand context? Some of that documentation probably exists, but it's almost all proprietary. Even that's optimistic; a lot of the documentation we would need was never captured and may never have been expressed. Another issue in software design is breaking larger tasks up into smaller components. That may be the biggest theme of the history of software design. AI is already useful for refactoring source code. But the issues change when we consider AI as a component of a software system. The code used to implement AI is usually surprisingly small — that's not an issue. However, take a step back and ask why we want software to be composed of small, modular components. Small isn't "good" in and of itself. ... Small components reduce risk: it's easier to understand an individual class or microservice than a multi-million line monolith. There's a well-known paper that shows a small box, representing a model. The box is surrounded by many other boxes that represent other software components: data pipelines, storage, user interfaces, you name it. 


Hungry for resources, AI redefines the data center calculus

With data centers near capacity in the US, there’s a critical need for organizations to consider hardware upgrades, he adds. The shortage is exacerbated because AI and machine learning workloads will require modern hardware. “Modern hardware provides enhanced performance, reliability, and security features, crucial for maintaining a competitive edge and ensuring data integrity,” Warman says. “High-performance hardware can support more workloads in less space, addressing the capacity constraints faced by many data centers.” The demands of AI make for a compelling reason to consider hardware upgrades, adds Rob Clark, president and CTO at AI tool provider Seekr. Organizations considering new hardware should pull the trigger based on factors beyond space considerations, such as price and performance, new features, and the age of existing hardware, he says. Older GPUs are a prime target for replacement in the AI era, as memory per card and performance per chip increases, Clark adds. “It is more efficient to have fewer, larger cards processing AI workloads,” he says. While AI is driving the demand for data center expansion and hardware upgrades, it can also be part of the solution, says Timothy Bates, a professor in the University of Michigan College of Innovation and Technology. 


How to Bake Security into Platform Engineering

A key challenge for platform engineers is modernizing legacy applications, which include security holes. “Platform engineers and CIOs have a responsibility to modernize by bridging the gap between the old and new and understanding the security implications between the old and new,” he says. When securing the software development lifecycle, organizations should secure both continuous integration and continuous delivery/continuous deployment pipelines as well as the software supply chain, Mercer says. Securing applications entails “integrating security into the CI/CD pipelines in a seamless manner that does not create unnecessary friction for developers,” he says. In addition, organizations must prioritize educating employees on how to secure applications and software supply chains. ... As part of baking security into the software development process, security responsibility shifts from the cybersecurity team to the development organization. That means security becomes as much a part of deliverables as quality or safety, Montenegro says. “We see an increasing number of organizations adopting a security mindset within their engineering teams where the responsibility for product security lies with engineering, not the security team,” he says.



Quote for the day:

“If you really want to do something, you will work hard for it.” -- Edmund Hillary