
Daily Tech Digest - January 31, 2025


Quote for the day:

“If you genuinely want something, don’t wait for it–teach yourself to be impatient.” -- Gurbaksh Chahal


GenAI fueling employee impersonation with biometric spoofs and counterfeit ID fraud

The annual AuthenticID report underlines the surging wave of AI-powered identity fraud, with rising biometric spoofs and counterfeit ID fraud attempts. The 2025 State of Identity Fraud Report also looks at how identity verification tactics and technology innovations are tackling the problem. “In 2024, we saw just how sophisticated fraud has now become: from deepfakes to sophisticated counterfeit IDs, generative AI has changed the identity fraud game,” said Blair Cohen, AuthenticID founder and president. ... “In 2025, businesses should embrace the mentality to ‘think like a hacker’ to combat new cyber threats,” said Chris Borkenhagen, chief digital officer and information security officer at AuthenticID. “Staying ahead of evolving strategies such as AI deepfake-generated documents and biometrics, emerging technologies, and bad actor account takeover tactics are crucial in protecting your business, safeguarding data, and building trust with customers.” ... Face biometric verification company iProov has identified the Philippines as a particular hotspot for digital identity fraud, with a corresponding need for financial institutions and consumers to be vigilant. “There is a massive increase at the moment in terms of identity fraud against systems using generative AI in particular and deepfakes,” said iProov chief technology officer Dominic Forrest.


Cyber experts urge proactive data protection strategies

"Every organisation must take proactive measures to protect the critical data it holds," Montel stated. Emphasising foundational security practices, he advised organisations to identify their most valuable information and protect potential attack paths. He noted that simple steps can drastically contribute to overall security. On the consumer front, Montel highlighted the pervasive nature of data collection, reminding individuals of the importance of being discerning about the personal information they share online. "Think before you click," he advised, underscoring the potential of openly shared public information to be exploited by cybercriminals. Adding to the discussion on data resilience, Darren Thomson, Field CTO at Commvault, emphasised the changing landscape of cyber defence and recovery strategies needed by organisations. Thomson pointed out that mere defensive measures are not sufficient; rapid recovery processes are crucial to maintain business resilience in the event of a cyberattack. The concept of a "minimum viable company" is pivotal, where businesses ensure continuity of essential operations even when under attack. With cybercriminal tactics becoming increasingly sophisticated, organisations can no longer rely solely on traditional backups.


Trump Administration Faces Security Balancing Act in Borderless Cyber Landscape

The borderless nature of cyber threats and AI, the scale of worldwide commerce, and the globally interconnected digital ecosystem pose significant challenges that transcend partisanship. As recent experience makes us all too aware, an attack originating in one country, state, sector, or company can spread almost instantaneously, and with devastating impact. Consequently, whatever the ideological preferences of the Administration, from a pragmatic perspective cybersecurity must be a collaborative national (and international) activity, supported by regulations where appropriate. It’s an approach taken in the European Union, whose member states are now subject to the second Network and Information Security Directive (NIS2)—focused on critical national infrastructure and other important sectors—and the financial sector-focused Digital Operational Resilience Act (DORA). Both regulations seek to create a rising tide of cyber resilience that lifts all boats, and one of the core elements of both is a focus on reporting and threat intelligence sharing. In-scope organizations are required to implement robust measures to detect cyber attacks, report breaches in a timely way, and, wherever possible, share the information they accumulate on threats, attack vectors, and techniques with the EU’s central cybersecurity agency (ENISA).


Infrastructure as Code: From Imperative to Declarative and Back Again

Today, tools like CDK for Terraform (CDKTF) and Pulumi have become popular choices among engineers. These tools allow developers to write IaC using familiar programming languages like Python, TypeScript, or Go. At first glance, this is a return to imperative IaC. However, under the hood, they still generate declarative configurations — such as Terraform plans or CloudFormation templates — that define the desired state of the infrastructure. Why the resurgence of imperative-style interfaces? The answer lies in a broader trend toward improving developer experience (DX), enabling self-service, and enhancing accessibility. Much like the shifts we’re seeing in fields such as platform engineering, these tools are designed to streamline workflows and empower developers to work more effectively. ... The current landscape represents a blending of philosophies. While IaC tools remain fundamentally declarative in managing state and resources, they increasingly incorporate imperative-like interfaces to enhance usability. The move toward imperative-style interfaces isn’t a step backward. Instead, it highlights a broader movement to prioritize developer accessibility and productivity, aligning with the emphasis on streamlined workflows and self-service capabilities.
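The "imperative on the surface, declarative underneath" idea can be illustrated with a minimal sketch. This is not real Pulumi or CDKTF code — the `Stack` class and resource names here are invented for illustration — but it shows the pattern those tools share: imperative method calls that record resources into a declarative desired-state document an engine can later diff against reality.

```python
import json

# Hypothetical mini-engine (not a real IaC tool): imperative Python calls
# accumulate resources, then synth() emits a declarative desired-state plan.
class Stack:
    def __init__(self, name):
        self.name = name
        self.resources = []

    def add_bucket(self, name, versioned=False):
        # An imperative call on the surface...
        self.resources.append({
            "type": "storage_bucket",   # ...recorded as declarative state
            "name": name,
            "properties": {"versioned": versioned},
        })

    def synth(self):
        # The declarative document a deployment engine would diff and apply.
        return json.dumps({"stack": self.name, "resources": self.resources},
                          indent=2, sort_keys=True)

stack = Stack("demo")
stack.add_bucket("logs", versioned=True)
plan = stack.synth()
```

The developer never edits the JSON by hand; the familiar programming language is the interface, while the engine still reasons about desired state.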


How to Train AI Dragons to Solve Network Security Problems

We all know AI’s mantra: More data, faster processing, large models and you’re off to the races. But what if a problem is so specific — like network or DDoS security — that it doesn’t have a lot of publicly or privately available data you can use to solve it? As with other AI applications, the quality of the data you feed an AI-based DDoS defense system determines the accuracy and effectiveness of its solutions. To train your AI dragon to defend against DDoS attacks, you need detailed, real-world DDoS traffic data. Since this data is not widely and publicly available, your best option is to work with experts who have access to this data or, even better, have analyzed and used it to train their own AI dragons. To ensure effective DDoS detection, look at real-world, network-specific data and global trends as they apply to the network you want to protect. This global perspective adds valuable context that makes it easier to detect emerging or worldwide threats. ... Predictive AI models shine when it comes to detecting DDoS patterns in real-time. By using machine learning techniques such as time-series analysis, classification and regression, they can recognize patterns of attacks that might be invisible to human analysts. 
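The time-series idea mentioned above can be sketched in a few lines. A production system would use trained models over many traffic features; this toy version, with invented numbers, only shows the core statistical intuition — flag windows whose request rate deviates sharply from a clean baseline.

```python
from statistics import mean, stdev

# Illustrative sketch only (real DDoS defenses use trained models over many
# features): flag traffic windows whose request rate is a statistical outlier
# relative to a known-clean baseline period.
def flag_ddos_windows(rates, baseline_len=8, threshold=3.0):
    base = rates[:baseline_len]
    mu, sigma = mean(base), stdev(base)        # baseline traffic profile
    return [i for i, r in enumerate(rates[baseline_len:], start=baseline_len)
            if (r - mu) / sigma > threshold]   # z-score anomaly test

normal = [100, 98, 103, 101, 99, 102, 97, 100]   # requests/sec, clean period
flagged = flag_ddos_windows(normal + [5000])     # sudden volumetric spike
```

Real-world data matters precisely because the baseline and the attack signatures must reflect the network being protected, not synthetic traffic.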


How law enforcement agents gain access to encrypted devices

When a mobile device is seized, law enforcement can request the PIN, password, or biometric data from the suspect to access the phone if they believe it contains evidence relevant to an investigation. In England and Wales, if the suspect refuses, the police can give a notice for compliance, and a further refusal is in itself a criminal offence under the Regulation of Investigatory Powers Act (RIPA). “If access is not gained, law enforcement use forensic tools and software to unlock, decrypt, and extract critical digital evidence from a mobile phone or computer,” says James Farrell, an associate at cyber security consultancy CyXcel. “However, there are challenges on newer devices and success can depend on the version of operating system being used.” ... Law enforcement agencies have pressured companies to create “lawful access” solutions, particularly on smartphones; Apple is a prominent example. “You also have the co-operation of cloud companies, which if backups are held can sidestep the need to break the encryption of a device all together,” Closed Door Security’s Agnew explains. The security community has long argued against law enforcement backdoors, not least because they create security weaknesses that criminal hackers might exploit. “Despite protests from law enforcement and national security organizations, creating a skeleton key to access encrypted data is never a sensible solution,” CreateFuture’s Watkins argues.


The quantum computing reality check

Major cloud providers have made quantum computing accessible through their platforms, which creates an illusion of readiness for enterprise adoption. However, this accessibility masks a fatal flaw: Most quantum computing applications remain experimental. Indeed, most require deep expertise in quantum physics and specialized programming knowledge. Real-world applications are severely limited, and the costs are astronomical compared to the actual value delivered. ... The timeline to practical quantum computing applications is another sobering reality. Industry experts suggest we’re still 7 to 15 years away from quantum systems capable of handling production workloads. This extended horizon makes it difficult to justify significant investments. Until then, more immediate returns could be realized through existing technologies. ... The industry’s fascination with quantum computing has made companies fear being left behind or, worse, not being part of the “cool kids club”; they want to deliver extraordinary presentations to investors and customers. We tend to jump into new trends too fast because the allure of being part of something exciting and new is just too compelling. I’ve fallen into this trap myself. ... Organizations must balance their excitement for quantum computing with practical considerations about immediate business value and return on investment. I’m optimistic about the potential value in quantum-as-a-service (QaaS).


Digital transformation in banking: Redefining the role of IT-BPM services

IT-BPM services are the engine of digital transformation in banking. They streamline operations through automation technologies like RPA, enhancing efficiency in processes such as customer onboarding and loan approvals. This automation reduces errors and frees up staff for strategic tasks like personalised customer support. By harnessing big data analytics, IT-BPM empowers banks to personalise services, detect fraud, and make informed decisions, ultimately improving both profitability and customer satisfaction. Robust security measures and compliance monitoring are also integral, ensuring the protection of sensitive customer data in the increasingly complex digital landscape. ... IT-BPM services are crucial for creating seamless, multi-channel customer experiences. They enable the development of intuitive platforms, including AI-driven chatbots and mobile apps, providing instant support and convenient financial management. This focus extends to personalised services tailored to individual customer needs and preferences, and a truly integrated omnichannel experience across all banking platforms. Furthermore, IT-BPM fosters agility and innovation by enabling rapid development of new digital products and services and facilitating collaboration with fintech companies.


Revolutionizing data management: Trends driving security, scalability, and governance in 2025

Artificial Intelligence and Machine Learning transform traditional data management paradigms by automating labour-intensive processes and enabling smarter decision-making. In the upcoming years, augmented data management solutions will drive efficiency and accuracy across multiple domains, from data cataloguing to anomaly detection. AI-driven platforms process vast datasets to identify patterns, automating tasks like metadata tagging, schema creation and data lineage mapping. ... In 2025, data masking will not be merely a compliance tool for GDPR, HIPAA, or CCPA; it will be a strategic enabler. With the rise in hybrid and multi-cloud environments, businesses will increasingly need to secure sensitive data across diverse systems. Solutions from vendors such as IBM, K2view, Oracle and Informatica will revolutionize data masking by offering scalable, real-time, context-aware masking. ... Real-time integration enhances customer experiences through dynamic pricing, instant fraud detection, and personalized recommendations. These capabilities rely on distributed architectures designed to handle diverse data streams efficiently. The focus on real-time integration extends beyond operational improvements.
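"Context-aware" masking simply means the masking rule depends on what kind of field a value is, rather than applying one blanket transformation. The sketch below is illustrative only — it reflects none of the named vendors' actual products — but it shows the idea: keep enough structure for the data to stay useful (an email's domain, a card's last four digits) while hiding the sensitive part.

```python
import re

# Illustrative context-aware masking (not any vendor's implementation):
# each field type gets a rule that preserves useful structure.
MASKERS = {
    "email": lambda v: re.sub(r"^[^@]+", "***", v),    # keep the domain
    "card":  lambda v: "*" * (len(v) - 4) + v[-4:],    # keep last 4 digits
    "name":  lambda v: v[0] + "*" * (len(v) - 1),      # keep the initial
}

def mask_record(record):
    # Unknown fields pass through unchanged; known fields get their rule.
    return {field: MASKERS.get(field, lambda v: v)(value)
            for field, value in record.items()}

masked = mask_record({"email": "jane@example.com",
                      "card": "4111111111111111",
                      "name": "Jane"})
```

Production systems add policy engines, format-preserving encryption, and consistency across systems, but the per-context dispatch shown here is the common core.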


Deploying AI at the edge: The security trade-offs and how to manage them

The moment you bring compute nodes into the far edge, you’re automatically exposing a lot of security challenges in your network. Even if you expect them to be “disconnected devices,” they could intermittently connect to transmit data. So, your security footprint is expanded. You must ensure that every piece of the stack you’re deploying at the edge is secure and trustworthy, including the edge device itself. When considering security for edge AI, you have to think about transmitting the trained model, runtime engine, and application from a central location to the edge, opening up the opportunity for a person-in-the-middle attack. ... In military operations, continuous data streams from millions of global sensors generate an overwhelming volume of information. Cloud-based solutions are often inadequate due to storage limitations, processing capacity constraints, and unacceptable latency. Therefore, edge computing is crucial for military applications, enabling immediate responses and real-time decision-making. In commercial settings, many environments lack reliable or affordable connectivity. Edge AI addresses this by enabling local data processing, minimizing the need for constant communication with the cloud. This localized approach enhances security. Instead of transmitting large volumes of raw data, only essential information is sent to the cloud. 
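One standard mitigation for the person-in-the-middle risk described above is to verify the integrity of the model artifact before the edge runtime loads it. The sketch below uses an HMAC with a device-provisioned shared key — the key name and workflow are assumptions for illustration, not a description of any particular edge platform (real deployments often use public-key signatures instead).

```python
import hashlib
import hmac

# Assumption for illustration: each edge device holds a key provisioned
# out-of-band, and the central site sends (model blob, HMAC tag) pairs.
SHARED_KEY = b"provisioned-at-manufacture"

def sign_artifact(blob: bytes) -> str:
    return hmac.new(SHARED_KEY, blob, hashlib.sha256).hexdigest()

def verify_and_load(blob: bytes, signature: str) -> bytes:
    # compare_digest avoids timing side channels in the comparison.
    if not hmac.compare_digest(sign_artifact(blob), signature):
        raise ValueError("model artifact failed integrity check")
    return blob  # only now hand the blob to the runtime engine

model = b"\x00fake-model-bytes"
sig = sign_artifact(model)
loaded = verify_and_load(model, sig)
```

A tampered blob fails the check and is never loaded, which closes the window an in-transit attacker would otherwise have.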


Daily Tech Digest - December 26, 2024

Best Practices for Managing Hybrid Cloud Data Governance

Kausik Chaudhuri, CIO of Lemongrass, explains that monitoring in hybrid-cloud environments requires a holistic approach that combines strategies, tools, and expertise. “To start, a unified monitoring platform that integrates data from on-premises and multiple cloud environments is essential for seamless visibility,” he says. End-to-end observability enables teams to understand the interactions between applications, infrastructure, and user experience, making troubleshooting more efficient. ... Integrating legacy systems with modern data governance solutions involves several steps. Modern data governance systems, such as data catalogs, work best when fueled with metadata provided by a range of systems. “However, this metadata is often absent or limited in scope within legacy systems,” says Elsberry. Therefore, an effort needs to be made to create and provide the necessary metadata in legacy systems to incorporate them into data catalogs. Elsberry notes a common blocking issue is the lack of REST API integration. Modern data governance and management solutions typically have an API-first approach, so enabling REST API capabilities in legacy systems can facilitate integration. “Gradually updating legacy systems to support modern data governance requirements is also essential,” he says.


These Founders Are Using AI to Expose and Eliminate Security Risks in Smart Contracts

The vulnerabilities lurking in smart contracts are well-known but often underestimated. “Some of the most common issues include Hidden Mint functions, where attackers inflate token supply, or Hidden Balance Updates, which allow arbitrary adjustments to user balances,” O’Connor says. These aren’t isolated risks—they happen far too frequently across the ecosystem. ... “AI allows us to analyze huge datasets, identify patterns, and catch anomalies that might indicate vulnerabilities,” O’Connor explains. Machine learning models, for instance, can flag issues like reentrancy attacks, unchecked external calls, or manipulation of minting functions—and they do it in real-time. “What sets AI apart is its ability to work with bytecode,” he adds. “Almost all smart contracts are deployed as bytecode, not human-readable code. Without advanced tools, you’re essentially flying blind.” ... As blockchain matures, smart contract security is no longer the sole concern of developers. It’s an industry-wide challenge that impacts everyone, from individual users to large enterprises. DeFi platforms increasingly rely on automated tools to monitor contracts and secure user funds. Centralized exchanges like Binance and Coinbase assess token safety before listing new assets. 
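Working "with bytecode" means reasoning at the level of EVM opcodes rather than Solidity source. The toy scanner below is nothing like the ML systems described — it is a lookup table, included only to make the bytecode-level view concrete — and it illustrates one detail real tools must handle: PUSH instructions carry inline operand bytes that must be skipped so data is not mistaken for code.

```python
# Toy illustration of bytecode-level analysis (real platforms use trained
# models, not lookup tables): walk EVM bytecode and flag opcodes that
# commonly deserve scrutiny, skipping PUSH operands along the way.
SUSPICIOUS = {
    0xF1: "CALL (external call: check reentrancy guards)",
    0xF4: "DELEGATECALL (callee runs with caller's storage)",
    0xFF: "SELFDESTRUCT (contract can be destroyed)",
}

def flag_opcodes(bytecode_hex: str):
    data = bytes.fromhex(bytecode_hex)
    findings, i = set(), 0
    while i < len(data):
        op = data[i]
        if op in SUSPICIOUS:
            findings.add(SUSPICIOUS[op])
        i += 1
        if 0x60 <= op <= 0x7F:          # PUSH1..PUSH32 carry inline operands
            i += op - 0x5F              # skip them so data isn't read as code
    return sorted(findings)
```

Without the operand skipping, a pushed constant like `0xF1` would be misread as a CALL — a small example of why naive pattern matching on raw bytes falls short and statistical models earn their keep.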


Three best change management practices to take on board in 2025

For change management to truly succeed, companies need to move from being change-resistant to change-ready. This means building up "change muscles" -- helping teams become adaptable and comfortable with change over the long term. For Mel Burke, VP of US operations at Grayce, the key to successful change is speaking to both the "head" and the "heart" of your stakeholders. Involve employees in the change process by giving them a voice and the ability to shape it as it happens. ... Change management works best when you focus on the biggest risks first and reduce the chance of major disruptions. Dedman calls this strategy "change enablement," where change initiatives are evaluated and scored on critical factors like team expertise, system dependencies, and potential customer impact. High-scorers get marked red for immediate attention, while lower-risk ones stay green for routine monitoring to keep the process focused and efficient. ... Peter Wood, CTO of Spectrum Search, swears by creating a "success signals framework" that combines data-driven metrics with culture-focused indicators. "System uptime and user adoption rates are crucial," he notes, "but so are team satisfaction surveys and employee retention 12-18 months post-change." 


Corporate Data Governance: The Cornerstone of Successful Digital Transformation

While traditional data governance focuses on the continuous and tactical management of data assets – ensuring data quality, consistency, and security – corporate data governance elevates this practice by integrating it with the organization’s overall governance framework and strategic objectives. It ensures that data management practices are not operating in silos but are harmoniously aligned and integrated with business goals, regulatory requirements, and ethical standards. In essence, corporate data governance acts as a bridge between data management and corporate strategy, ensuring that every data-related activity contributes to the organization’s mission and objectives. ... In the digital age, data is a critical asset that can drive innovation, efficiency, and competitive advantage. However, without proper governance, data initiatives can become disjointed, risky, and misaligned with corporate goals. Corporate data governance ensures that data management practices are strategically integrated with the organization’s mission, enabling businesses to leverage data confidently and effectively. By focusing on alignment, organizations can make better decisions, respond swiftly to market changes, and build stronger relationships with customers. 


What is an IT consultant? Roles, types, salaries, and how to become one

Because technology is continuously changing, IT consultants can provide clients with the latest information about new technologies as they become available, recommending implementation strategies based on their clients’ needs. As a result, for IT consultants, keeping the pulse of the technology market is essential. “Being a successful IT consultant requires knowing how to walk in the shoes of your IT clients and their business leaders,” says Scott Buchholz, CTO of the government and public services sector practice at consulting firm Deloitte. A consultant’s job is to assess the whole situation, the challenges, and the opportunities at an organization, Buchholz says. As an outsider, the consultant can see things clients can’t. ... “We’re seeing the most in-demand types of consultants being those who specialize in cybersecurity and digital transformation, largely due to increased reliance on remote work and increased risk of cyberattacks,” he says. In addition, consultants with program management skills are valuable for supporting technology projects, assessing technology strategies, and helping organizations compare and make informed decisions about their technology investments, Farnsworth says.


Blockchain + AI: Decentralized Machine Learning Platforms Changing the Game

Tech giants with vast computing resources and proprietary datasets have long dominated traditional AI development. Companies like Google, Amazon, and Microsoft have maintained a virtual monopoly on advanced AI capabilities, creating a significant barrier to entry for smaller players and independent researchers. However, the introduction of blockchain technology and cryptocurrency incentives is rapidly changing this paradigm. Decentralized machine learning platforms leverage blockchain's distributed nature to create vast networks of computing power. These networks function like a global supercomputer, where participants can contribute their unused computing resources in exchange for cryptocurrency tokens. ... The technical architecture of these platforms typically consists of several key components. Smart contracts manage the distribution of computational tasks and token rewards, ensuring transparent and automatic execution of agreements between parties. Distributed storage solutions like IPFS (InterPlanetary File System) handle the massive datasets required for AI training, while blockchain networks maintain an immutable record of transactions and model provenance.
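What the smart-contract layer automates can be shown with a toy stand-in. The Python class below is not Solidity and models no real platform — names and the flat reward are invented — but it captures the contract's role: hold the assignment ledger and credit token rewards automatically when a task completes, with no manual settlement between parties.

```python
# Toy model of the coordination a smart contract automates on these
# platforms (Python stand-in, not an actual on-chain contract).
class ComputeMarket:
    def __init__(self, reward_per_task=10):
        self.reward = reward_per_task
        self.balances = {}       # provider -> token balance
        self.assignments = {}    # task_id -> provider

    def assign(self, task_id, provider):
        self.assignments[task_id] = provider

    def complete(self, task_id):
        # Automatic, rule-based settlement: the essence of the contract.
        provider = self.assignments.pop(task_id)
        self.balances[provider] = self.balances.get(provider, 0) + self.reward

market = ComputeMarket()
market.assign("train-epoch-1", "node-a")
market.complete("train-epoch-1")
```

On a real platform the same rules execute on-chain, which is what makes the reward distribution transparent and tamper-resistant.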


DDoS Attacks Surge as Africa Expands Its Digital Footprint

A larger attack surface, however, is not the only reason for the increased DDoS activity in Africa and the Middle East, Hummel says. "Geopolitical tensions in these regions are also fueling a surge in hacktivist activity as real-world political disputes spill over into the digital world," he says. "Unfortunately, hacktivists often target critical infrastructure like government services, utilities, and banks to cause maximum disruption." And DDoS attacks are by no means the only manifestation of the new threats that organizations in Africa are having to contend with as they broaden their digital footprint. ... Attacks on critical infrastructure and financially motivated attacks by organized crime are other looming concerns. In the Center's assessment, Africa's government networks and networks belonging to the military, banking, and telecom sectors are all vulnerable to disruptive cyberattacks. Exacerbating the concern is the relatively high potential for cyber incidents resulting from negligence and accidents. Organized crime gangs, the scourge of organizations in the US, Europe, and other parts of the world, present an emerging threat to organizations in Africa, the Center has assessed. 


Optimizing AI Workflows for Hybrid IT Environments

Hybrid IT offers flexibility by combining the scalability of the cloud with the control of on-premises resources, allowing companies to allocate their resources more precisely. However, this setup also introduces complexity. Managing data flow, ensuring security, and maintaining operational efficiency across such a blended environment can become an overwhelming task if not addressed strategically. To manage AI workflows effectively in this kind of setup, businesses must focus on harmonizing infrastructure and resources. ... Performance optimization is crucial when running AI workloads across hybrid environments. This requires real-time monitoring of both on-premises and cloud systems to identify bottlenecks and inefficiencies. Implementing performance management tools allows for end-to-end visibility of AI workflows, enabling teams to proactively address performance issues before they escalate. ... Scalability also supports agility, which is crucial for businesses that need to grow and iterate on AI models frequently. Cloud-based services, in particular, allow teams to experiment and test AI models without being constrained by on-premises hardware limitations. This flexibility is essential for staying competitive in fields where AI innovation happens rapidly.


The Cloud Back-Flip

Cloud repatriation is driven by various factors, including high cloud bills, hidden costs, complexity, data sovereignty, and the need for greater data control. In markets like India—and globally—these factors are all relevant today, points out Vishal Kamani, Cloud Business Head, Kyndryl India. “Currently, rising cloud costs and complexity are part of the ‘learning curve’ for enterprises transitioning to cloud operations.” ... While cloud repatriation is not an alien concept anymore, such reverse migration back to on-premises data centres is seen happening only in organisations that are technology-driven and have deep tech expertise, observes Gaurang Pandya, Director, Deloitte India. “This involves them focusing back on the basics of IT infrastructure which does need a high number of skilled employees. The major driver for such reverse migration is increasing cloud prices and performance requirements. In an era of edge computing and 5G, each end system has now been equipped with much more computing resources than it ever had. This increases their expectations from various service providers.” Money is a big reason too, especially when you don’t know where it is going.


Why Great Programmers Fail at Engineering

Being a good programmer is about mastering the details — syntax, algorithms, and efficiency. But being a great engineer? That’s about seeing the bigger picture: understanding systems, designing for scale, collaborating with teams, and ultimately creating software that not only works but excels in the messy, ever-changing real world. ... Good programmers focus on mastering their tools — languages, libraries, and frameworks — and take pride in crafting solutions that are both functional and beautiful. They are the “builders” who bring ideas to life one line of code at a time. ... Software engineering requires a keen understanding of design principles and system architecture. Great code in a poorly designed system is like building a solid wall in a crumbling house — it doesn’t matter how good it looks if the foundation is flawed. Many programmers struggle to design systems for scalability and maintainability, to think in terms of trade-offs (such as performance vs. development speed), and to plan for edge cases and future growth. Software engineering is as much about people as it is about code. Great engineers collaborate with teams, communicate ideas clearly, and balance stakeholder expectations. ... Programming success is often measured by how well the code runs, but engineering success is about how well the system solves a real-world problem.



Quote for the day:

"Ambition is the path to success. Persistence is the vehicle you arrive in." -- Bill Bradley

Daily Tech Digest - October 15, 2024

The NHI management challenge: When employees leave

Non-human identities (NHIs) support machine-to-machine authentication and access across software infrastructure and applications. These digital constructs enable automated processes, services, and applications to authenticate and perform tasks securely, without direct human intervention. Access is granted to NHIs through various types of authentications, including secrets such as access keys, certificates and tokens. ... When an employee exits, secrets can go with them. Those secrets – credentials, NHIs and associated workflows – can be exfiltrated from mental memory, recorded manually, stored in vaults and keychains, on removable media, and more. Secrets that have been exfiltrated are considered “leaked.” ... An equally great risk is that employees, especially developers, create, deploy and manage secrets as part of software stacks and configurations, as one-time events or in regular workflows. When they exit, those secrets can become orphans, whose very existence is unknown to colleagues or to tools and frameworks. ... The lifecycle of NHIs can stretch beyond the boundaries of a single organization, encompassing partners, suppliers, customers and other third parties. 
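The offboarding risks described above usually translate into a periodic sweep over the secrets inventory. The sketch below is illustrative — the record fields, the 90-day rotation window, and the messages are assumptions, not any vendor's schema — but it shows the two checks that matter: secrets owned by departed employees (leak risk) and secrets past their rotation age with no active owner activity (orphan risk).

```python
from datetime import date, timedelta

# Illustrative offboarding sweep (field names and thresholds are
# assumptions): flag secrets tied to departed owners, and stale secrets
# that may have become orphans nobody remembers.
def flag_secrets(secrets, departed, today, max_age=timedelta(days=90)):
    findings = []
    for s in secrets:
        if s["owner"] in departed:
            findings.append((s["id"], "owner departed: rotate now"))
        elif today - s["created"] > max_age:
            findings.append((s["id"], "stale: possible orphan"))
    return findings

secrets = [
    {"id": "api-key-1", "owner": "alice", "created": date(2024, 9, 1)},
    {"id": "db-cred-2", "owner": "bob",   "created": date(2024, 1, 5)},
]
flags = flag_secrets(secrets, departed={"alice"}, today=date(2024, 10, 15))
```

Real NHI tooling layers discovery on top (finding secrets that were never inventoried at all), but rotation-on-departure and age-based review are the baseline controls.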


How Ernst & Young’s AI platform is ‘radically’ reshaping operations

We’re seeing a new wave of AI roles emerging, with a strong focus on governance, ethics, and strategic alignment. Chief AI Officers, AI governance leads, knowledge engineers and AI agent developers are becoming critical to ensuring that AI systems are trustworthy, transparent, and aligned with both business goals and human needs. Additionally, roles like AI ethicists and compliance experts are on the rise, especially as governments begin to regulate AI more strictly. These roles go beyond technical skills — they require a deep understanding of policy, ethics, and organizational strategy. As AI adoption grows, so too will the need for individuals who can bridge the gap between the technology and the focus on human-centered outcomes.” ... Keeping humans at the center, especially as we approach AGI, is not just a guiding principle — it’s an absolute necessity. The EU AI Act is the most developed effort yet in establishing the guardrails to control the potential impacts of this technology at scale. At EY, we are rapidly adapting our corporate policies and ethical frameworks in order to, first, be compliant, but also to lead the way in showing the path of responsible AI to our clients.


The Truth Behind the Star Health Breach: A Story of Cybercrime, Disinformation, and Trust

The email that xenZen used as “evidence” was forged. The hacker altered the HTML code of an email using the common “inspect element” function—an easy trick to manipulate how a webpage appears. This allowed him to make it seem as though the email came directly from the CISO’s official account. ... XenZen’s attack demonstrates how cybercriminals are evolving. They are using psychological warfare to create chaos. In this case, xenZen not only exploited a vulnerability but also fabricated evidence to frame the CISO. The security community needs to stay vigilant and anticipate attacks that may target not just systems but also individuals and organizations through disinformation. ... Making the CISO a scapegoat for security breaches without proper evidence is a growing concern. Organizations must understand the complexities of cybersecurity and avoid jumping to conclusions. Security teams should have the support they need, including legal protection and clear communication channels. Transparency is essential, but so is the careful handling of internal investigations before pointing fingers.


How CIOs and CTOs Are Bridging Cross-Functional Collaboration

Ashwin Ballal, CIO at software company Freshworks, believes that the organizations that fail to collaborate well across departments are leaving money on the table. “Siloed communications create inefficiencies, leading to duplicative work, poor performance, and a negative employee experience. In my experience as a CIO, prioritizing cross-departmental communication has been essential to overcoming these challenges,” says Ballal. His team continually reevaluates the tech stack, collaborating with leaders and users to confirm that the organization is only investing in software that adds value. This approach saves money and helps keep employees engaged by minimizing their interactions with outdated technology. He also uses employees as product beta testers, and their feedback impacts the product roadmap. ... “My recommendation for other CIOs and CTOs is to regularly meet with departmental leaders to understand how technology interacts across the organization. Sending out regular surveys can yield candid feedback on what’s working and what isn’t. Additionally fostering an environment where employees can experiment with new technologies encourages innovation and problem-solving.”


2025 Is the Year of AI PCs; Are Businesses Onboard?

With the rise of real-time computing needs and the proliferation of IoT devices, businesses are realizing the need to move AI closer to where the data is: at the edge. This is where AI PCs come into play. Unlike their traditional counterparts, AI PCs are integrated with neural processing units (NPUs) that enable them to handle AI workloads locally, reducing latency and providing a more secure computing environment. "The anticipated surge in AI PCs is largely due to the supply-side push, as NPUs will be included in more CPU vendor road maps," said Ranjit Atwal, senior research director analyst at Gartner. NPUs allow enterprises to move from reactive to proactive IT strategies. Companies can use AI PCs to predict IT infrastructure failures before they happen, minimizing downtime and saving millions in operational costs. NPU-integrated PCs also allow enterprises to process AI-related tasks, such as machine learning, natural language processing and real-time analytics, directly on the device without relying on cloud-based services. And with generative AI becoming part of enterprise technology stacks, companies investing in AI PCs are essentially future-proofing their operations, preparing for a time when gen AI capabilities become a standard part of business tools.


Australia’s Cyber Security Strategy in Action – Three New Draft Laws Published

Australia is following in the footsteps of other jurisdictions such as the United States by establishing a Cyber Review Board. The Board’s remit will be to conduct no-fault, post-incident reviews of significant cyber security incidents in Australia. The intent is to strengthen cyber resilience, by providing recommendations to Government and industry based on lessons learned from previous incidents. Limited information gathering powers will be granted to the Board, so it will largely rely on cooperation by impacted businesses. ... Mandatory security standards for smart devices - The Cyber Security Bill also establishes a framework under which mandatory security standards for smart devices will be issued. Suppliers of smart devices will be prevented from supplying devices which do not meet these security standards, and will be required to provide statements of compliance for devices manufactured in Australia or supplied to the Australian market. The Secretary of Home Affairs will be given the power to issue enforcement notices (including compliance, stop and recall notices) if a certificate of compliance for a specific device cannot be verified.


The Role of Zero Trust Network Access Tools in Ransomware Recovery

By integrating with existing identity providers, Zero Trust Network Access ensures that only authenticated and authorized users can access specific applications. This identity-driven approach, combined with device posture assessments and real-time threat intelligence, provides a robust defense against unauthorized access during a ransomware recovery. Moreover, ZTNA’s application-layer security means that even if a user’s credentials are compromised, the attacker would only gain access to specific applications rather than the entire network. This granular access control is crucial in containing ransomware attacks and preventing lateral movement across the network. ... As a cloud-native solution, ZTNA can easily scale to meet the demands of organizations of all sizes, from small businesses to large enterprises. This scalability is particularly valuable during a ransomware recovery, where the need for secure access may fluctuate based on the number of systems and users involved. ZTNA’s flexibility also allows it to integrate with various IT environments, including hybrid and multi-cloud infrastructures. This adaptability ensures that organizations can deploy ZTNA without the need for significant changes to their existing setups, making it an ideal solution for dynamic environments.
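The application-layer access decision described above can be illustrated with a toy policy check: identity plus device posture grants access to specific applications, never to the whole network. All app names, roles, and policy values below are hypothetical, purely to sketch the idea.

```python
# Toy illustration of ZTNA's per-application access decision: a request
# is evaluated against the user's role AND the device's posture, so a
# stolen credential alone cannot reach sensitive apps.

POLICY = {
    # app -> (allowed roles, minimum device posture)
    "payroll": ({"finance"}, "compliant"),
    "wiki": ({"finance", "engineering"}, "any"),
}

def can_access(app: str, role: str, posture: str) -> bool:
    """Return True only if both role and device posture satisfy the app policy."""
    roles, min_posture = POLICY[app]
    if role not in roles:
        return False
    return min_posture == "any" or posture == min_posture

# A compromised engineering credential reaches only the wiki, not payroll,
# which mirrors the containment of lateral movement described above.
print(can_access("wiki", "engineering", "compliant"))     # True
print(can_access("payroll", "engineering", "compliant"))  # False
```

Real ZTNA products make this decision in a policy engine fed by the identity provider and endpoint-posture signals; the sketch only shows why per-app decisions limit blast radius.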


What Is Server Consolidation and How Can It Improve Data Center Efficiency?

Server consolidation is the process of migrating workloads from multiple underutilized servers into a smaller collection of servers. ... although server consolidation typically focuses on consolidating physical servers, it can also apply to virtual servers. For instance, if you have five virtual hosts running on the same physical server, you might consolidate them into just three virtual hosts. Doing so would reduce the resources wasted on hypervisor overhead, allowing you to maximize the return on investment from your server hardware. ... To determine whether server consolidation will reduce energy usage, you’ll have to calculate the energy needs of your servers. Typically, power supplies indicate how many watts of electricity they supply to servers. Using this number, you can compare how energy requirements vary between machines. Keep in mind, however, that actual energy consumption will vary depending on factors like CPU clock speed and how active server CPUs are. So, in addition to comparing the wattage ratings on power supplies, you should track how much electricity your servers actually consume, and how that metric changes before and after you consolidate servers.
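The energy comparison described above is simple arithmetic. A minimal sketch, with hypothetical wattage figures standing in for real power-supply ratings or measured draw:

```python
# Rough estimate of annual energy savings from consolidating servers.
# The wattages are hypothetical examples, not measurements; consolidated
# servers typically draw more per box because they run at higher utilization.

def annual_kwh(watts: float, hours_per_year: float = 24 * 365) -> float:
    """Convert a sustained power draw in watts to kWh per year."""
    return watts * hours_per_year / 1000

# Before: five underutilized servers drawing a nominal 400 W each.
before = sum(annual_kwh(400) for _ in range(5))

# After: two busier servers drawing 550 W each.
after = sum(annual_kwh(550) for _ in range(2))

print(f"Before: {before:.0f} kWh/yr, after: {after:.0f} kWh/yr")
print(f"Estimated savings: {before - after:.0f} kWh/yr")
```

As the article notes, nameplate wattage is only an upper bound; replacing the constants with metered consumption before and after the migration gives the real answer.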


How a DDoS Botnet Is Used to Infect Your Network

The threat posed by DDoS botnets remains significant and complex. As these malicious networks grow more sophisticated, understanding their mechanisms and potential impacts is crucial for organizations. DDoS botnets not only facilitate financial theft and data breaches but also enable large-scale spam and phishing campaigns that can undermine trust and security. To effectively defend against these threats, organizations must prioritize proactive measures, including regular updates, robust security protocols, and vigilant monitoring of network activity. By implementing strategies to identify and mitigate botnet attacks, businesses can safeguard their systems and data from potential harm. Ultimately, a comprehensive understanding of how DDoS botnets operate—and the strategies to combat them—will empower organizations to navigate the challenges of cybersecurity and maintain a secure digital environment. As a CERT-In empanelled organization, Kratikal is equipped to enhance your understanding of potential risks. Our manual and automated Vulnerability Assessment and Penetration Testing (VAPT) services proficiently discover, detect, and assess vulnerabilities within your IT infrastructure. 


Banks Must Try the Flip Side of Embedded Finance: Embedded Fintech

With a one-way-street perspective on embedded finance, the idea is that if payment volume is moving to tech companies then banks should power the back end of the tech experience. This is a good start but the threat from fintech companies to retail banks will only continue to deepen in the future. Customer adoption is higher than ever for some fintechs like Chime and Nubank, for example. A better approach would be for banks to use embedded fintech to improve customer experience by upgrading banks’ tech offerings to retain customers and grow within their customer base. Embedded fintech can help these organizations stay competitive technologically. ... There are many opportunities for innovation with embedded payroll. Banks are uniquely positioned to offer tailored payroll solutions that map to what small businesses today want. Payroll is complex and needs to be compliant to avoid hefty penalties. Embedded payroll lets banks offload costs, burdens and risks associated with payroll. Banks can offer faster payroll with less risk when they hold the accounts for employers and payees. They can also give business customers a fuller picture of their cash flow, offering them peace of mind. 



Quote for the day:

"Pull the string and it will follow wherever you wish. Push it and it will go nowhere at all." -- Dwight D. Eisenhower

Daily Tech Digest - August 29, 2024

The human factor in the industrial metaverse

The virtualisation of factories might ensure additional efficiencies, but it has the potential to fundamentally alter the human dynamics within an organisation. With rising reliance on digital tools, it becomes challenging to maintain the human aspects of work. ... Just like evolving innovation is crucial, so is organisational culture. Leaders must promote a culture that supports agility, innovation, and continuous learning to ensure success in a virtual factory environment. This can be achieved by being transparent, encouraging experimentation, and recognising and rewarding an employee’s creativity and adaptability. With the rapid evolution of virtual factories, employees must undergo comprehensive training that covers both technical and soft skills to adapt to the virtual environment. While practical, hands-on exercises are crucial for real-world application, it’s also important to have continuous learning with ongoing workshops, online training, and cross-training opportunities. To further enhance knowledge sharing, establishing mentorship and peer-learning programs can ensure a smooth transition, fostering a cohesive and productive workforce.


Challenging The Myths of Generative AI

The productivity myth suggests that anything we spend time on is up for automation — that any time we spend can and should be freed up for the sake of having even more time for other activities or pursuits — which can also be automated. The importance and value of thinking about our work and why we do it is waved away as a distraction. The goal of writing, this myth suggests, is filling a page rather than the process of thought that a completed page represents. ... The prompt myth is a technical myth at the heart of the LLM boom. It was a simple but brilliant design stroke: rather than a window where people paste text and allow the LLM to extend it, ChatGPT framed it as a chat window. We’re used to chat boxes, a window that waits for our messages and gets a (previously human) response in return. In truth, users provide words that dictate what we get back. ... Intelligence myths arise from the reliance on metaphors of thinking in building automated systems. These metaphors – learning, understanding, and dreaming – are helpful shorthand. But intelligence myths rely on hazy connections to human psychology. They often conflate AI systems inspired by models of human thought for a capacity to think.


The New Frontiers of Cyber-Warfare: Insights From Black Hat 2024

Corporate sanctions against nations are just one aspect of the broader issue. Moss also spoke about a new kind of trade war, where nation-states are pushing back against big tech companies and their political and economic agendas – along with the agendas of countries where these companies are based. Moss noted that countries are now using digital protectionist policies to wage what he called "a new way to escalate." He cited India's 2020 ban on TikTok, which resulted in China’s ByteDance reportedly facing up to $6 billion in losses. Moss also discussed the phenomenon of “app diplomacy,” where governments dictate to big tech companies like Apple and Google which apps are permitted in their markets. He mentioned the practice of “tech sorting,” where countries try to maintain strict control over foreign tech through redirection, throttling, or direct censorship. ... Shifting from concerns over AI to the emerging weapons of cyber espionage and warfare, Moss, moderating Black Hat’s wrap-up discussion, brought up the growing threat of hardware attacks. He asked Jos Wetzels, partner at Midnight Blue, to discuss the increasing accessibility of electromagnetic (EM) and laser weapons.


5 best practices for running a successful threat-informed defense in cybersecurity

Assuming organizations are doing vulnerability scanning across systems, applications, attack surfaces, cloud infrastructure, etc., they will come up with lists of tens of thousands of vulnerabilities. Even big, well-resourced enterprises can’t remediate this volume of vulnerabilities in a timely fashion, so leading firms depend upon threat intelligence to guide them into fixing those vulnerabilities most likely to be exploited presently or in the near future. ... As previously mentioned, a threat-informed defense involves understanding adversary TTPs, comparing these TTPs to existing defenses, identifying gaps, and then implementing compensating controls. These last steps equate to reviewing existing detection rules, writing new ones, and then testing them all to make sure they detect what they are supposed to. Rather than depending on security tool vendors to develop the right detection rules, leading organizations invest in detection engineering across multiple toolsets such as XDR, email/web security tools, SIEM, cloud security tools, etc. CISOs I spoke with admit that this can be difficult and expensive to implement. 
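The triage described above, ranking findings by likelihood of exploitation rather than raw severity alone, can be sketched in a few lines. The records and scores below are hypothetical (an "exploit likelihood" field stands in for an EPSS-style probability from threat intelligence):

```python
# Sketch of threat-informed vulnerability triage: surface the bugs most
# likely to be exploited first, instead of working top-down by CVSS score.
# All CVE IDs and scores here are made-up illustrations.

vulns = [
    {"id": "CVE-2024-0001", "cvss": 9.8, "exploit_likelihood": 0.02},
    {"id": "CVE-2024-0002", "cvss": 7.5, "exploit_likelihood": 0.91},
    {"id": "CVE-2024-0003", "cvss": 8.1, "exploit_likelihood": 0.40},
]

def priority(v: dict) -> float:
    # Weight exploitability well above severity so actively exploited
    # bugs jump the queue even when their CVSS score is lower.
    return v["exploit_likelihood"] * 10 + v["cvss"] / 10

for v in sorted(vulns, key=priority, reverse=True):
    print(v["id"], round(priority(v), 2))
```

Under this (deliberately simple) weighting, the 7.5-severity bug with a 91% exploitation likelihood outranks the 9.8 critical that nobody is exploiting, which is the essence of the threat-informed approach.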


Let’s Bring H-A-R-M-O-N-Y Back Into Our Tech Tools

The focus of a platform approach is on harmonized experiences: a state of balance, agreement and even pleasant interaction among the various elements and stakeholders involved in development. There needs to be a way to make it easy and enjoyable to build, test and release at the pace of today’s business without the annoying dependencies that bog down developers along the way — on both the application and infrastructure sides. I believe tool stacks and platforms that use a harmony-focused method can even bring the fun back into development. ... Resilience refers to the ability to withstand and recover from failures and disruptions, and you can’t follow a harmonized approach without it. A resilient architecture is designed to handle unexpected challenges — be they spikes in traffic, hardware malfunctions or software bugs — without compromising core functionality. How do you create resiliency? Through running, testing and debugging your code to catch errors early and often. Building a robust testing foundation can look like having a dedicated testing environment and ephemeral testing features. 


Cybersecurity Maturity: A Must-Have on the CISO’s Agenda

The process of maturation in personnel is often reflected in the way these teams are measured. Less mature teams tend to be measured on activity metrics and KPIs around how many tickets are handled and closed, for example. In more mature organisations the focus has shifted towards metrics like team satisfaction and staff retention. This has come through strongly in our research. Last year 61% of cybersecurity professionals surveyed said that the key metric they used to assess the ROI of cybersecurity automation was how well they were managing the team in terms of employee satisfaction and retention – another indication that it is reaching a more mature adoption stage. Organizations with mature cybersecurity approaches understand that tools and processes need to be guided through the maturity path, but that the reason for doing so is to serve the people working with them. The maturity and skillsets of teams should also be reviewed, and members should be given the opportunity to add their own input. What is their experience of the tools and processes in place? Do they trust the outcomes they are getting from AI- and machine learning-powered tools and processes? 


What can my organisation do about DDoS threats?

"Businesses can prevent attacks using managed DDoS protection services or through implementing robust firewalls to filter malicious traffic and deploying load balancers to distribute traffic evenly when under heavy load,” advises James Taylor, associate director, offensive security practice, at S-RM. “Other defences include rate limiting, network segmentation, anomaly detection systems and implementing responsive incident management plans.” But while firewalls and load balancers may stop some of the more basic DDoS attack types, such as SYN floods or fragmented packet attacks, they are unlikely to handle more sophisticated DDoS attacks which mimic legitimate traffic, warns Donny Chong, product and marketing director at DDoS specialist Nexusguard. “Businesses should adopt a more comprehensive approach to DDoS mitigation such as managed services,” he says. “In this setup, the most effective approach is a hybrid one, combining cloud-based mitigation with on-premises hardware which can be managed externally by the DDoS specialist provider. It also combines robust DDoS mitigation with the ability to offload traffic to the designated cloud provider as and when needed.”
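Rate limiting, one of the basic defences Taylor lists, is commonly implemented with a token-bucket algorithm. In production this runs in firewalls, load balancers or edge proxies, but the algorithm itself is short enough to sketch; the rate and capacity below are arbitrary example values:

```python
# Illustrative token-bucket rate limiter. Each request spends one token;
# tokens refill at a fixed rate, so sustained floods are throttled while
# short legitimate bursts (up to `capacity`) pass through.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller drops or queues the request

bucket = TokenBucket(rate=10, capacity=20)  # 10 req/s sustained, bursts of 20
results = [bucket.allow() for _ in range(25)]
print(results.count(True))  # roughly the burst capacity
```

As the article warns, this stops crude floods but not attacks that mimic legitimate traffic at legitimate rates, which is where managed mitigation services come in.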


How Aspiring Software Developers Can Stand Out in a Tight Job Market: 5 FAQs

While technical skills are critical, the ability to listen to clients, understand their problems and translate technical information into simple language is also important. Without reliable soft skills, clients may doubt your ability to address their needs. Employers also want candidates who can collaborate and work effectively in a team setting. This involves taking initiative, having strong written and verbal communication skills and being proactive about sharing status updates. Demonstrate these skills by discussing how you applied them in college extracurriculars or in the classroom as part of group project work, and how you plan to apply them in the workplace. In a highly competitive job market, doing so may set you apart from other candidates who offer similar technical backgrounds. ... Research the company before applying for a role so you're prepared with thoughtful questions for your interview. For example, you might want to ask about the new hire onboarding process, professional development opportunities, company culture or specific questions regarding a project the interviewer has recently worked on.


Bridging the AI Gap: The Crucial Role of Vectors in Advancing Artificial Intelligence

Vector databases have recently emerged into the spotlight as the go-to method for capturing the semantic essence of various entities, including text, images, audio, and video content. Encoding this diverse range of data types into a uniform mathematical representation means that we can now quantify semantic similarities by calculating the mathematical distance between these representations. This breakthrough enables “fuzzy” semantic similarity searches across a wide array of content types. While vector databases aren’t new and won’t resolve all current data challenges, their ability to perform these semantic searches across vast datasets and feed that information to LLMs unlocks previously unattainable functionality. ... We are in the early stages of leveraging vectors, both in the emerging generative AI space and the classical ML domain. It’s important to recognise that vectors don’t come as an out-of-the-box solution and can’t simply be bolted onto existing AI or ML programs. However, as they become more prevalent and universally adopted, we can expect the development of software layers that will make it easier for less technical teams to apply vector technology effectively.
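The "mathematical distance" between representations mentioned above is most often cosine similarity. A minimal sketch, using toy 3-dimensional vectors in place of the hundreds-of-dimensions embeddings a real model would produce (the snippets and numbers are invented for illustration):

```python
# Minimal sketch of the semantic-distance idea behind vector databases:
# encode items as vectors, then rank by cosine similarity to a query vector.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings for three text snippets.
docs = {
    "cat on a mat": [0.9, 0.1, 0.0],
    "kitten on a rug": [0.8, 0.2, 0.1],
    "quarterly earnings": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # hypothetical embedding of a cat-related query

ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked)  # semantically closest snippets first
```

This is the "fuzzy" search the article describes: "kitten on a rug" ranks near "cat on a mat" despite sharing no words, while the finance snippet falls to the bottom. Production vector databases add approximate-nearest-neighbour indexes so the ranking scales to millions of items.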


AI Can Reshape Insight Delivery and Decision-making

Moving on to risk, Tubbs shares that AI plays a pivotal role in the organizational risk mitigation strategy. With AI, the organization can identify potential risks and propose countermeasures that can significantly contribute to business stability. Therefore, Visa can be proactive in fighting fraud and risks, specifically in the payment landscape. Another usage of AI at Visa is in making real-time decisions with real-time analytics. Given the billions of transactions a month, real-time analytics enable the organization to comprehend what the transactions mean and how to make prompt decisions around anomalous behavior. AI also fosters collaboration in the ecosystem and organization by encouraging different teams to work towards a shared objective. Summing up, she refers to the cost-saving aspect of AI and maintains that Visa is driven to automate processes that have taken a significant amount of time historically. Shifting to the other side of good AI, Tubbs affirms that AI can also be used by fraudsters for nefarious reasons. To avoid that, Visa constantly evaluates its models and algorithms. She notes that Visa has a dedicated team to look into the dark web to understand the actions of fraudsters.



Quote for the day:

"Successful and unsuccessful people do not vary greatly in their abilities. They vary in their desires to reach their potential." -- John Maxwell

Daily Tech Digest - July 04, 2024

Understanding collective defense as a route to better cybersecurity

Organizations invoking collective defense to protect their IT and data assets will usually focus on sharing threat intelligence and coordinating threat response actions to counter malicious threat actors. Success depends on defining and implementing a collaborative cybersecurity strategy where organizations, both internally and externally, work together across industries to defend against targeted cyber threats. ... Putting this into practice requires organizations to commit to coordinating their cybersecurity strategies to identify, mitigate and recover from threats and breaches. This should begin with a process that defines the stakeholders who will participate in the collective defense initiative. These can include anything from private companies and government agencies to non-profits and Information Sharing and Analysis Centers (ISACs), among others. The approach will only work if it is based on mutual trust, so there is an important role for the use of mechanisms such as non-disclosure agreements, clearly defined roles and responsibilities and a commitment to operational transparency. 


Meaningful Ways to Reward Your IT Team and Its Achievements

With technology rapidly advancing, it's more important than ever to invest in personalized IT team skill development and employee well-being programs, which are a win-win for employees and the companies they work for, says Carrie Rasmussen, CIO at human resources software provider Dayforce, in an email interview. ... Synchronize rewards to project workflows, Felker recommends. If it's a particularly difficult time for the team -- tight deadlines, major changes, and other pressing issues -- he suggests scheduling rewards prior to the work's completion to boost motivation. "Having the team get a boost mid-stream on a project is likely to create an additional reservoir of mental energy they can draw from as the project continues," Felker says. ... It's also important to celebrate success whenever possible and to acknowledge that the outcome was the direct result of great teamwork. "Five minutes of recognition from the CEO in a company update or other forum motivates not only the IT team but the rest of the organization to strive for recognition," Nguyen says. He also advises promoting significant team achievements on LinkedIn and other major social platforms. "This will aid recruiting and retention efforts."


Deepfake research is growing and so is investment in companies that fight it

Manipulating human likeness, such as creating deepfake images, video and audio of people, has become the most common tactic for misusing generative AI, a new study from Google reveals. The most common reason to misuse the technology is to influence public opinion – including swaying political opinion – but it is also finding its way into scams, frauds and other means of generating profit. ... Impersonations of celebrities or public figures, for instance, are often used in investment scams while AI-generated media can also be generated to bypass identity verification and conduct blackmail, sextortion and phishing scams. As the primary data is media reports, the researchers warn that the perception of AI-generated misuse may be skewed to the ones that attract headlines. But despite concerns that sophisticated or state-sponsored actors will use generative AI, many of the cases of misuse were found to rely on popular tools that require minimal technical skills. ... With the threat of deepfakes becoming widespread, some companies are coming up with novel solutions that protect images online.


Building Finance Apps: Best Practices and Unique Challenges

By making compliance a central focus from day one of the development process, you maximize your ability to meet compliance needs, while also avoiding the inefficient process of retrofitting compliance features into the app later. For example, implementing transaction reporting after the rest of the app has been built is likely to be a much heavier lift than designing the app from the start to support that feature. ... The tech stack (meaning the set of frameworks and tools you use to build and run your app) can have major implications for how easy it is to build the app, how secure and reliable it is, and how well it integrates with other systems or platforms. For that reason, you'll want to consider your stack carefully, and avoid the temptation to go with whichever frameworks or tools you know best or like the most. ... Given the plethora of finance apps available today, it can be tempting to want to build fancy interfaces or extravagant features in a bid to set your app apart. In general, however, it's better to adopt a minimalist approach. Build the features your users actually want — no more, no less. Otherwise, you waste time and development resources, while also potentially exposing your app to more security risks.


OVHcloud blames record-breaking DDoS attack on MikroTik botnet

Earlier this year, OVHcloud had to mitigate a massive packet rate attack that reached 840 Mpps, surpassing the previous record holder, an 809 Mpps DDoS attack targeting a European bank, which Akamai mitigated in June 2020. ... OVHcloud says many of the high packet rate attacks it recorded, including the record-breaking attack from April, originate from compromised MikroTik Cloud Core Router (CCR) devices designed for high-performance networking. The firm identified, specifically, compromised models CCR1036-8G-2S+ and CCR1072-1G-8S+, which are used as small- to medium-sized network cores. Many of these devices exposed their interface online, running outdated firmware and making them susceptible to attacks leveraging exploits for known vulnerabilities. The cloud firm hypothesizes that attackers might use MikroTik's RouterOS's "Bandwidth Test" feature, designed for network throughput stress testing, to generate high packet rates. OVHcloud found nearly 100,000 MikroTik devices that are reachable/exploitable over the internet, making for many potential targets for DDoS actors.
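A back-of-the-envelope calculation puts the reported scale in perspective. Assuming, purely for illustration, that all ~100,000 reachable routers took part (which the article does not claim), the per-device packet rate is modest, which is what makes capable routers such effective botnet nodes:

```python
# Per-device packet rate implied by the reported 840 Mpps peak, under the
# illustrative assumption that all ~100,000 exposed routers participated.
attack_mpps = 840
devices = 100_000

pps_per_device = attack_mpps * 1_000_000 / devices
print(f"{pps_per_device:,.0f} packets/second per device")  # 8,400
```

A few thousand packets per second is trivial for a CCR-class router, so even a fraction of the exposed fleet could sustain record-breaking aggregate rates.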


Set Goals and Measure Progress for Effective AI Deployment

Combining human expertise and AI capabilities to augment decision-making is an essential tenet in responsible AI principles. The current age of AI adoption should be considered a “coming together of humans and technology.” Humans will continue to be the custodians and stewards of data, which ties into Key Factor 2 about the need for high-quality data, as humans can help curate the relevant data sets to train an LLM. This is critical, and the “human-in-the-loop” facet should be embedded in all AI implementations to avoid completely autonomous implementations. Apart from data curation, this allows humans to take more meaningful actions when equipped with relevant insights, thus achieving better business outcomes. ... Addressing bias, privacy, and transparency in AI development and deployment is the pivotal metric in measuring its success. Like any technology, laying out guardrails and rules of engagement are core to this factor. Enterprises such as Accenture implement measures to detect and prevent bias in their AI recruitment tools, helping to ensure fair hiring practices. 


Site Reliability Engineering State of the Union for 2024

Automation remains at the core of SRE, with tools for container orchestration and infrastructure management playing a critical role. The adoption of containerization technologies such as Docker and Kubernetes has facilitated more efficient deployment and scaling of applications. In 2024, we can expect further advancements in automation tools that streamline the orchestration of complex microservices architectures, thereby reducing the operational burden on SRE teams. Infrastructure automation and orchestration are pivotal in the realm of SRE, enabling teams to manage complex systems with enhanced efficiency and reliability. The evolution of these technologies, particularly with the advent of containerization and microservices, has significantly transformed how applications are deployed, managed and scaled. ... With the increasing prevalence of cyberthreats and the tightening of regulatory requirements, security and compliance have become integral aspects of SRE. Automated tools for compliance monitoring and enforcement will become indispensable, enabling organizations to adhere to industry standards while minimizing the risk of data breaches and other security incidents.


5 Steps to Refocus Your Digital Transformation Strategy for Strategic Advancement

A strategy built around customer value provides measurable outcomes and drives deeper engagement and loyalty. The digital landscape is riddled with risks and opportunities due to rapid technological advancements, especially in data-centric AI. Businesses must stay agile, continually evaluating the risks and rewards of new technologies while maintaining a sharp focus on how these enhancements serve their customer base. ... Organizations with a customer advisory board should leverage it to gain insights directly from those who use their services or products. Engaging customers from the early stages of planning ensures that their feedback and needs directly influence the transformation strategy, leading to more accurate and beneficial implementations. ... One significant mistake IT leaders make is prioritizing technology over customer needs. While technology is a crucial enabler, it should not dictate the strategy. Instead, it should support and enhance the strategy’s core aim — serving the customer. IT leaders must ensure that digital initiatives align with broader business objectives and directly contribute to customer satisfaction and business efficiency.


OpenSSH Vulnerability “regreSSHion” Grants RCE Access Without User Interaction, Most Dangerous Bug in Two Decades

The good news about the OpenSSH vulnerability is that exploitation attempts have not yet been spotted in the wild. Successfully taking advantage of the exploit required about 10,000 tries to win a race condition using 100 concurrent connections under the researcher’s test conditions, or about six to eight hours to achieve RCE because ASLR obscures glibc’s address. The attack will thus likely be limited to those wielding botnets when it is uncovered by threat actors. Given the large amount of simultaneous connections needed to induce the race condition, the RCE is also very open to being detected and blocked by firewalls and networking monitoring tools. Qualys’ immediate advice for mitigation also includes updating network-based access controls and segmenting networks where possible. ... “While there is currently no proof of concept demonstrating this vulnerability, and it has only been shown to be exploitable under controlled lab conditions, it is plausible that a public exploit for this vulnerability could emerge in the near future. Hence it’s strongly advised to patch this vulnerability before this becomes the case”.


New paper: AI agents that matter

So are AI agents all hype? It’s too early to tell. We think there are research challenges to be solved before we can expect agents such as the ones above to work well enough to be widely adopted. The only way to find out is through more research, so we do think research on AI agents is worthwhile. One major research challenge is reliability — LLMs are already capable enough to do many tasks that people want an assistant to handle, but not reliable enough that they can be successful products. To appreciate why, think of a flight-booking agent that needs to make dozens of calls to LLMs. If each of those went wrong independently with a probability of, say, just 2%, the overall system would be so unreliable as to be completely useless (this partly explains some of the product failures we’ve seen). ... Right now, however, research is itself contributing to hype and overoptimism because evaluation practices are not rigorous enough, much like the early days of machine learning research before the common task method took hold. That brings us to our paper.
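The compounding-failure argument in the flight-booking example is worth making concrete: with independent failures, per-call success probabilities multiply, so even a 98%-reliable step collapses over dozens of calls.

```python
# The reliability argument from the article: if an agent makes n independent
# LLM calls, each succeeding with probability p, the whole task succeeds
# with probability p**n. A 2% per-call failure rate matches the example.

def task_success_rate(per_call_success: float, num_calls: int) -> float:
    return per_call_success ** num_calls

for n in (1, 10, 30, 50):
    print(f"{n:>2} calls at 98% each -> {task_success_rate(0.98, n):.1%} overall")
```

At 50 calls the agent completes the task barely a third of the time, which is why per-call reliability, not capability, is framed above as the central research challenge.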



Quote for the day:

"You can’t fall if you don’t climb. But there’s no joy in living your whole life on the ground." -- Unknown