
Daily Tech Digest - May 22, 2025


Quote for the day:

"Knowledge is being aware of what you can do. Wisdom is knowing when not to do it." -- Anonymous


Consumer rights group: Why a 10-year ban on AI regulation will harm Americans

AI is a tool that can be used for significant good, but it can and already has been used for fraud and abuse, as well as in ways that can cause real harm, both intentional and unintentional — as was thoroughly discussed in the House’s own bipartisan AI Task Force Report. These harms can range from impacting employment opportunities and workers’ rights to threatening accuracy in medical diagnoses or criminal sentencing, and many current laws have gaps and loopholes that leave AI uses in gray areas. Refusing to enact reasonable regulations places AI developers and deployers into a lawless and unaccountable zone, which will ultimately undermine the trust of the public in their continued development and use. ... Proponents of the 10-year moratorium have argued that it would prevent a patchwork of regulations that could hinder the development of these technologies, and that Congress is the proper body to put rules in place. But Congress thus far has refused to establish such a framework, and instead it’s proposing to prevent any protections at any level of government, completely abdicating its responsibility to address the serious harms we know AI can cause. It is a gift to the largest technology companies at the expense of users — small or large — who increasingly rely on their services, as well as the American public who will be subject to unaccountable and inscrutable systems.


Putting agentic AI to work in Firebase Studio

An AI assistant is like power steering. The programmer, the driver, remains in control, and the tool magnifies that control. The developer types some code, and the assistant completes the function, speeding up the process. The next logical step is to empower the assistant to take action—to run tests, debug code, mock up a UI, or perform some other task on its own. In Firebase Studio, we get a seat in a hosted environment that lets us enter prompts that direct the agent to take meaningful action. ... Obviously, we are a long way off from a non-programmer frolicking around in Firebase Studio, or any similar AI-powered development environment, and building complex applications. Google Cloud Platform, Gemini, and Firebase Studio are best-in-class tools. These kinds of limits apply to all agentic AI development systems. Still, I would in no wise want to give up my Gemini assistant when coding. It takes a huge amount of busy work off my shoulders and brings much more possibility into scope by letting me focus on the larger picture. I wonder how the path will look, how long it will take for Firebase Studio and similar tools to mature. It seems clear that something along these lines, where the AI is framed in a tool that lets it take action, is part of the future. It may take longer than AI enthusiasts predict. It may never really, fully come to fruition in the way we envision.


Edge AI + Intelligence Hub: A Match in the Making

The shop floor looks nothing like a data lake. There is telemetry data from machines, historical data, MES data in SQL, some random CSV files, and most of it lacks context. Companies that realize this—or already have an Industrial DataOps strategy—move quickly beyond these issues. Companies that don’t end up creating a solution that works with only telemetry data (for example) and then find out they need other data. Or worse, when they get something working in the first factory, they find out factories 2, 3, and 4 have different technology stacks. ... In comes DataOps (again). Cloud AI and Edge AI have the same problems with industrial data. They need access to contextualized information across many systems. The only difference is there is no data lake in the factory—but that’s OK. DataOps can leave the data in the source systems and expose it over APIs, allowing edge AI to access the data needed for specific tasks. But just like IT, what happens if OT doesn’t use DataOps? It’s the same set of issues. If you try to integrate AI directly with data from your SCADA, historian, or even UNS/MQTT, you’ll limit the data and context to which the agent has access. SCADA/Historians only have telemetry data. UNS/MQTT is report by exception, and AI is request/response based (i.e., it can’t integrate). But again, I digress. Use DataOps.
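To make the "leave the data in the source systems and expose it over APIs" idea concrete, here is a minimal sketch of a DataOps-style endpoint an edge AI agent could query in request/response fashion. The system names, fields, and fetch helpers are hypothetical placeholders, not any vendor's product.

from flask import Flask, jsonify

app = Flask(__name__)

def fetch_telemetry(machine_id: str) -> dict:
    # Placeholder for a read from the historian or SCADA system.
    return {"machine_id": machine_id, "temperature_c": 71.4, "vibration_mm_s": 2.9}

def fetch_mes_context(machine_id: str) -> dict:
    # Placeholder for a SQL query against the MES database.
    return {"work_order": "WO-1042", "product": "bracket-A", "line": "Line 3"}

@app.route("/machines/<machine_id>")
def machine_snapshot(machine_id: str):
    # The edge AI agent asks one question and receives telemetry plus its MES
    # context together, instead of integrating separately with each OT system.
    return jsonify({**fetch_telemetry(machine_id), **fetch_mes_context(machine_id)})

if __name__ == "__main__":
    app.run(port=8080)

The data stays in the historian and the MES; only the answer to a specific question travels to the agent.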


AI-driven threats prompt IT leaders to rethink hybrid cloud security

Public cloud security risks are also undergoing renewed assessment. While the public cloud was widely adopted during the post-pandemic shift to digital operations, it is increasingly seen as a source of risk. According to the survey, 70 percent of Security and IT leaders now see the public cloud as a greater risk than any other environment. As a result, an equivalent proportion are actively considering moving data back from public to private cloud due to security concerns, and 54 percent are reluctant to use AI solutions in the public cloud, citing apprehensions about intellectual property protection. The need for improved visibility is emphasised in the findings. Rising sophistication in cyberattacks has exposed the limitations of existing security tools—more than half (55 percent) of Security and IT leaders reported lacking confidence in their current toolsets' ability to detect breaches, mainly due to insufficient visibility. Accordingly, 64 percent say their primary objective for the next year is to achieve real-time threat monitoring through comprehensive real-time visibility into all data in motion. David Land, Vice President, APAC at Gigamon, commented: "Security teams are struggling to keep pace with the speed of AI adoption and the growing complexity and vulnerability of public cloud environments.


Taming the Hacker Storm: Why Millions in Cybersecurity Spending Isn’t Enough

The key to taming the hacker storm is founded on the core principle of trust: that the individual or company you are dealing with is who or what they claim to be and behaves accordingly. Establishing a high-trust environment can largely hinder hacker success. ... For a pervasive selective trusted ecosystem, an organization requires something beyond trusted user IDs. A hacker can compromise a user’s device and steal the trusted user ID, making identity-based trust inadequate. A trust-verified device assures that the device is secure and can be trusted. But then again, a hacker stealing a user’s identity and password can also fake the user’s device. Confirming the device’s identity—whether it is or it isn’t the same device—hence becomes necessary. The best way to ensure the device is secure and trustworthy is to employ the device identity that is designed by its manufacturer and programmed into its TPM or Secure Enclave chip. ... Trusted actions are critical in ensuring a secure and pervasive trust environment. Different actions require different levels of authentication, generating different levels of trust, which the application vendor or the service provider has already defined. An action considered high risk would require stronger authentication, also known as dynamic authentication.
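As a rough illustration of why a hardware-held device identity is stronger than a stolen password, the sketch below verifies a signed challenge against a manufacturer-provisioned public key. It is a simplified stand-in for real TPM or Secure Enclave attestation; the key handling is collapsed into one script only so the example is self-contained.

import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# In reality the private key is generated inside the device's secure hardware
# and never leaves it; it is created here only to make the sketch runnable.
device_key = ed25519.Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()   # known to the service at enrollment

challenge = os.urandom(32)              # service issues a fresh challenge per action
signature = device_key.sign(challenge)  # device's secure element signs it

try:
    registered_public_key.verify(signature, challenge)
    print("Device identity confirmed - allow the requested action")
except InvalidSignature:
    print("Device cannot prove its identity - step up authentication")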


AWS clamping down on cloud capacity swapping; here’s what IT buyers need to know

For enterprises that sourced discounted cloud resources through a broker or value-added reseller (VAR), the arbitrage window shuts, Brunkard noted. Enterprises should expect a “modest price bump” on steady‑state workloads and a “brief scramble” to unwind pooled commitments. ... On the other hand, companies that buy their own RIs or SPs, or negotiate volume deals through AWS’s Enterprise Discount Program (EDP), shouldn’t be impacted, he said. Nothing changes except that pricing is now baselined. To get ahead of the change, organizations should audit their exposure and ask their managed service providers (MSPs) what commitments are pooled and when they renew, Brunkard advised. ... Ultimately, enterprises that have relied on vendor flexibility to manage overcommitment could face hits to gross margins, budget overruns, and a spike in “finance-engineering misalignment,” Barrow said. Those whose vendor models are based on RI and SP reallocation tactics will see their risk profile “changed overnight,” he said. New commitments will now essentially be non-cancellable financial obligations, and if cloud usage dips or pivots, they will be exposed. Many vendors won’t be able to offer protection as they have in the past.
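A first step in the "audit your exposure" advice can be scripted. The sketch below lists an account's active Reserved Instances and Savings Plans with boto3 so renewal dates are visible; it assumes AWS credentials are configured, and commitments pooled by an MSP will not appear in your own account and must be confirmed with the provider.

import boto3

ec2 = boto3.client("ec2")
reservations = ec2.describe_reserved_instances(
    Filters=[{"Name": "state", "Values": ["active"]}]
)
for ri in reservations["ReservedInstances"]:
    print(f"RI {ri['ReservedInstancesId']}: {ri['InstanceType']} "
          f"x{ri['InstanceCount']}, ends {ri['End']}")

sp = boto3.client("savingsplans")
for plan in sp.describe_savings_plans(states=["active"])["savingsPlans"]:
    print(f"Savings Plan {plan['savingsPlanId']}: "
          f"{plan['commitment']}/hr committed, ends {plan['end']}")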


The new C-Suite ally: Generative AI

While traditional GenAI applications focus on structured datasets, a significant frontier remains largely untapped — the vast swathes of unstructured "dark data" sitting in contracts, credit memos, regulatory reports, and risk assessments. Aashish Mehta, Founder and CEO of nRoad, emphasizes this critical gap.
"Most strategic decisions rely on data, but the reality is that a lot of that data sits in unstructured formats," he explained. nRoad’s platform, CONVUS, addresses this by transforming unstructured content into structured, contextual insights. ... Beyond risk management, OpsGPT automates time-intensive compliance tasks, offers multilingual capabilities, and eliminates the need for coding through intuitive design. Importantly, Broadridge has embedded a robust governance framework around all AI initiatives, ensuring security, regulatory compliance, and transparency. Trustworthiness is central to Broadridge’s approach. "We adopt a multi-layered governance framework grounded in data protection, informed consent, model accuracy, and regulatory compliance," Seshagiri explained. ... Despite the enthusiasm, CxOs remain cautious about overreliance on GenAI outputs. Concerns around model bias, data hallucination, and explainability persist. Many leaders are putting guardrails in place: enforcing human-in-the-loop systems, regular model audits, and ethical AI use policies.


Building a Proactive Defence Through Industry Collaboration

Trusted collaboration, whether through Information Sharing and Analysis Centres (ISACs), government agencies, or private-sector partnerships, is a highly effective way to enhance the defensive posture of all participating organisations. For this to work, however, organisations will need to establish operationally secure real-time communication channels that support the rapid sharing of threat and defence intelligence. In parallel, the community will also need to establish processes to enable them to efficiently disseminate indicators of compromise (IoCs) and tactics, techniques and procedures (TTPs), backed up with best practice information and incident reports. These collective defence communities can also leverage the centralised cyber fusion centre model that brings together all relevant security functions – threat intelligence, security automation, threat response, security orchestration and incident response – in a truly cohesive way. Providing an integrated sharing platform for exchanging information among multiple security functions, today’s next-generation cyber fusion centres enable organisations to leverage threat intelligence, identify threats in real-time, and take advantage of automated intelligence sharing within and beyond organisational boundaries. 


3 Powerful Ways AI is Supercharging Cloud Threat Detection

AI’s strength lies in pattern recognition across vast datasets. By analysing historical and real-time data, AI can differentiate between benign anomalies and true threats, improving the signal-to-noise ratio for security teams. This means fewer false positives and more confidence when an alert does sound. ... When a security incident strikes, every second counts. Historically, responding to an incident involves significant human effort – analysts must comb through alerts, correlate logs, identify the root cause, and manually contain the threat. This approach is slow, prone to errors, and doesn’t scale well. It’s not uncommon for incident investigations to stretch into hours or days when done manually. Meanwhile, the damage (data theft, service disruption) continues to accrue. Human responders also face cognitive overload during crises, juggling tasks like notifying stakeholders, documenting events, and actually fixing the problem. ... It’s important to note that AI isn’t about eliminating the need for human experts but rather augmenting their capabilities. By taking over initial investigation steps and mundane tasks, AI frees up human analysts to focus on strategic decision-making and complex threats. Security teams can then spend time on thorough analysis of significant incidents, threat hunting, and improving security posture, instead of constant firefighting.
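As a toy illustration of that pattern-recognition claim, the sketch below trains an unsupervised model on a mostly benign historical baseline and then scores new activity windows, so only genuinely unusual events surface as alerts. The features and numbers are invented; real deployments use far richer cloud telemetry.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Historical baseline per window: [logins/hour, GB egress/hour, distinct API calls]
baseline = rng.normal(loc=[20, 2, 40], scale=[5, 0.5, 8], size=(5000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New activity to triage: the second row simulates a data-exfiltration burst.
new_events = np.array([[22, 2.1, 38], [300, 45.0, 400]])
scores = model.decision_function(new_events)   # lower score = more anomalous
labels = model.predict(new_events)             # -1 = anomaly, 1 = normal
for event, score, label in zip(new_events, scores, labels):
    print(f"{'ALERT' if label == -1 else 'ok'}: {event}, score={score:.3f}")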


The hidden gaps in your asset inventory, and how to close them

The biggest blind spot isn’t a specific asset. It is trusting that what’s on paper is actually live and in production. Many organizations focus solely on known assets within their documented environments, but this can create a false sense of security. Blind spots are not always the result of malicious intent, but rather of decentralized decision-making, forgotten infrastructure, or evolving technology that hasn’t been brought under central control. External applications, legacy technologies and abandoned cloud infrastructure, such as temporary test environments, may remain vulnerable long after their intended use. These assets pose a risk, particularly when they are unintentionally exposed due to misconfiguration or overly broad permissions. Third-party and supply chain integrations present another layer of complexity. ... Traditional discovery often misses anything that doesn’t leave a clear, traceable footprint inside the network perimeter. That includes subdomains spun up during campaigns or product launches; public-facing APIs without formal registration or change control; third-party login portals or assets tied to your brand and code repositories, or misconfigured services exposed via DNS. These assets live on the edge, connected to the organization but not owned in a traditional sense.

Daily Tech Digest - March 27, 2025


Quote for the day:

"Leadership has a harder job to do than just choose sides. It must bring sides together." -- Jesse Jackson


Can AI Fix Digital Banking Service Woes?

For banks in India, an AI-driven system for handling customer complaints can be a game changer by enhancing operational efficiency, boosting customer trust and ensuring strict regulatory compliance. The success of this system hinges on addressing data security, integrating with legacy systems, and multi-lingual challenges while fostering a culture of continuous improvement. "By following this detailed road map, banks can build a resilient AI system that not only improves customer service but also supports broader financial risk management and compliance objectives," said Abhay Johorey, managing director, Protiviti Member Firm for India. An AI chatbot could drive operational efficiency, perform enhanced data analytics and risk management, increase customer trust and have compliance benefits if designed well. A badly executed one could run the risk of providing inaccurate financial information to customers or infringe on their privacy and data. ... "We are entering a transformative era where AI can significantly improve the speed, accuracy and fairness of complaint resolution. AI can categorize complaints based on urgency, complexity or subject matter, ensuring faster escalation to the appropriate teams. AI optimizes complaint routing and assists in decision-making, reducing processing times," the RBI said.


Ethernet roadmap: AI drives high-speed, efficient Ethernet networks

The Ethernet Alliance’s 10th anniversary roadmap references the consortium’s 2024 Technology Exploration Forum (TEF), which highlighted the critical need for collaboration across the Ethernet ecosystem: “Industry experts emphasized the importance of uniting different sectors to tackle the engineering challenges posed by the rapid advancement of AI. This collective effort is ensuring that Ethernet will continue to evolve to provide the network functionality required for next-generation AI networks.” Some of those engineering challenges include congestion management, latency, power consumption, signaling, and the ever-increasing speed of the network. ... “One of the outcomes of [the TEF] event was the realization the development of 400Gb/sec signaling would be an industry-wide problem. It wasn’t solely an application, network, component, or interconnect problem,” stated D’Ambrosia, who is a distinguished engineer with the Datacom Standards Research team at Futurewei Technologies, a U.S. subsidiary of Huawei, and the chair of the IEEE P802.3dj 200Gb/sec, 400Gb/sec, 800Gb/sec and 1.6Tb/sec Task Force. “Overcoming the challenges to support 400 Gb/s signaling will likely require all the tools available for each of the various layers and components.”


Dealing With Data Overload: How to Take Control of Your Security Analytics

Organizations face several challenges when it comes to security analytics. They need to find a better way to optimize high volumes of data, ensure they are getting maximum bang for the buck, and bring balance between cost and visibility. This allows more of the "right" or optimized data to be brought in for advanced analytics, filtering out the noise or useless data that isn't needed for analytics/machine learning. ... If you're a SOC manager, and your team is triaging alerts all day, perhaps you've got one full-time staffer who does nothing but look at Microsoft O365 alerts, and another person who just looks at Proofpoint alerts. The goal is to think about the bigger operational picture. When searching for a solution, it's easy to focus only on your immediate challenges and overlook future ones. As a result, you invest in a fix that solves today's problems but leaves you unprepared for the next ones that arise. You've shot yourself in the foot. ... Organizations tend to buy different tools to solve different problems, when what they need is a data analytics platform that can apply analytics, machine learning, and data science to their data sets. That will provide the intelligence to make business decisions, whether that's to reduce risk or something else. Look for a tool, regardless of what it's called, that can solve the most problems for the least amount of money.


Cyber insurance isn’t always what it seems

Still, insurance is no silver bullet. Policies often come with limitations, high premiums, and strict requirements around security posture. “Insurers scrutinize security postures, enforce stringent requirements, and may deny claims if proper controls are not in place,” he said. Many policies also include exclusions and coverage gaps that add complexity to the decision. When used appropriately, cyber insurance plays a supporting role, not a leading one. “They should complement the defensive capabilities that focus on avoiding and minimizing loss,” Rosenquist said, serving as a safety net rather than a frontline defense. “Cyber insurance can provide important financial relief, but it should never be the first or only line of defense.” ... “Many businesses still believe they’re too small to be targeted, that cyber insurance is only for large companies, or that it’s too expensive. However, the reality is that over 60% of small businesses have been victims of cyberattacks, privacy breaches affect organizations of all sizes, and the cyber insurance market offers competitive, tailored options. Working with a skilled broker brings real value. They offer broad expertise and help build tailored solutions. With the proper guidance, organizations can create programs that address their specific risks and needs,“ explained Tijana Dusper, a licensed broker for insurance and reinsurance at InterOmnia.


RFID Hacking: Exploring Vulnerabilities, Testing Methods, and Protection Strategies

When an RFID reader scans an object, it emits a radio frequency (RF) signal that interacts with nearby RFID tags, potentially up to 1.14 million tags in a single area. The antenna on each tag absorbs this energy, powering the embedded microchip. The chip then encodes its stored data into a binary format (0s and 1s) and transmits it back to the RFID reader using reverse signal modulation. The collected data is then stored and processed, either for human interpretation or automated system operations. ... As with many wireless technologies, RFID technology adheres to certain standards and communication protocols. ... As RFID technology becomes increasingly embedded in everyday operations, from access control and inventory tracking to cashless payments, the risks associated with RFID hacking cannot be ignored. The same features that make RFID efficient and convenient, wireless communication and automatic identification, also make it vulnerable to cyber threats. RFID hacking techniques, such as cloning, skimming, eavesdropping, and relay attacks, allow cybercriminals to intercept sensitive information, manipulate access controls, or even exploit entire systems. Without proper security measures, businesses and individuals risk unauthorized data breaches, financial fraud, and identity theft.
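The encode-to-binary step described above is easy to picture with a few lines of code. This is a deliberately simplified sketch: the EPC value is made up, and real tags add the protocol framing, CRCs, and RF modulation that the standards define.

def epc_to_bits(epc_hex: str) -> str:
    """Encode a 96-bit EPC, given as hex, into the bit string the tag transmits."""
    return format(int(epc_hex, 16), "096b")

def bits_to_epc(bits: str) -> str:
    """Reader side: decode the received bit stream back into the 24-hex-digit EPC."""
    return format(int(bits, 2), "024X")

tag_epc = "30395DDA1C90C8C000000001"        # hypothetical 96-bit EPC
bits = epc_to_bits(tag_epc)
print(bits[:32], "...")                      # first bits modulated back to the reader
assert bits_to_epc(bits) == tag_epc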


How Organizational Rewiring Can Capture Value from Your AI Strategy

McKinsey’s research indicates that while AI use is accelerating dramatically (78% of organizations now use AI in at least one function, up from 55% a year ago), most organizations are still in early implementation stages. Only 1% of company executives describe their generative AI rollouts as "mature." For retail banking leaders, this reality check suggests both opportunity and urgency. The potential for competitive advantage remains substantial for early transformation leaders, but the window for gaining this advantage is narrowing as adoption accelerates. As McKinsey senior partner Alex Singla observes: "The organizations that are building a genuine and lasting competitive advantage from their AI efforts are the ones that are thinking in terms of wholesale transformative change that stands to alter their business models, cost structures, and revenue streams — rather than proceeding incrementally." For retail banking executives, this means embracing AI as a strategic imperative that requires rethinking fundamental business models, not merely implementing new technology tools. The most successful banking institutions will be those that undertake comprehensive organizational rewiring, driven by active C-suite leadership, clear strategic roadmaps, and a willingness to fundamentally redesign how they operate.


Securing AI at the Edge: Why Trusted Model Updates Are the Next Big Challenge

Edge AI is no longer experimental. It is running live in environments where failure is not an option. Environmental monitoring systems track air quality in real time across urban areas. Predictive maintenance tools keep industrial equipment running smoothly. Smart traffic networks optimize vehicle flow in congested cities. Autonomous vehicles assist drivers with advanced safety features. Factory automation systems use AI to detect product defects on high-speed production lines. In all these scenarios, AI models must continuously evolve to meet changing demands. But every update carries risks, whether through technical failure, security breaches, or operational disruption. ... These challenges cannot be solved with isolated patches or last-minute fixes. Securing AI updates at the edge requires a fundamental rethink of the entire lifecycle. The update process from cloud-to-edge must be secure from start to finish. Models need protection from the moment they leave development until they are safely deployed. Authenticity must be guaranteed so that no malicious code can slip in. Access control must ensure that only authorized systems handle updates. And because no system is immune to failure, updates need built-in recovery mechanisms that minimize disruption.
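One concrete way to meet the "authenticity must be guaranteed" requirement is to sign model artifacts in the build pipeline and verify them on the device before activation. The sketch below shows that idea with an Ed25519 signature; the payload, key distribution, and rollback step are assumptions kept minimal for illustration.

from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Publisher side (cloud build pipeline): sign the packaged model bytes.
signing_key = ed25519.Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()                  # pre-provisioned on each edge device
model_bytes = b"...serialized model_v2 artifact..."    # placeholder payload
signature = signing_key.sign(model_bytes)

# Edge side: verify before activation; keep the current model if anything fails.
def install_update(candidate: bytes, sig: bytes) -> str:
    try:
        verify_key.verify(sig, candidate)
    except InvalidSignature:
        return "update rejected - current model stays active"
    # An atomic swap (write to a temp path, then rename over the old model) goes here.
    return "update verified - new model activated"

print(install_update(model_bytes, signature))
print(install_update(model_bytes + b"tampered", signature))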


Beyond the Black Box: Rethinking Data Centers for Sustainable Growth

To thrive under the growing pressure, the data center sector must rethink its relationship with the communities it enters. Instead of treating public engagement as an afterthought, what if the planning process started with people? Now, reimagine the development timeline. What if the public-facing engagement was prioritized from the very start? Imagine a data center operator purchasing a parcel of land for a new data center campus near a mid-sized city. Instead of presenting a fully formed plan months later, the client begins the conversation by asking the community: “How can we improve things while becoming your neighbor?” While commercial viability is essential, early engagement and collaboration can deliver positive outcomes without substantially increasing costs.  ... For data centers in urban environments where space is limited, the listen-first ethos still holds value. In these cases, the focus might shift to educational initiatives, such as training programs or partnerships with local schools and universities. Early public engagement ensures that urban projects align with the needs and priorities of residents while addressing their concerns. This inclusive approach benefits all stakeholders: for local authorities, it supports broader sustainability and net zero goals, and for communities, it delivers tangible benefits that clarify the data centre’s impact and value to the area.


Generative AI In Business: Managing Risks in The Race for Innovation

The issue is that businesses lack the appropriate processes, guidelines, or formal governance structures needed to regulate AI use, which, at the end of the day, makes them prone to accidental security breaches. In many instances, the culprits are employees who introduce GenAI systems on corporate devices with no understanding of the risks involved or of whether such use is even permitted under the company’s existing data security and privacy guidelines. ... Never underestimate the power of employee education, which is essential in times when new innovations are far ahead of education. Put in place an educational program that delves into the risks of AI systems. Include training sessions that give people the tools they need to recognize red flags, such as suspicious AI-generated outputs or unusual system behaviors. In a world of AI-enabled threats, empowering employees to act as the first line of defense is essential. ... A preemptive approach that leverages tools such as Automated Moving Target Defense (AMTD) can help organizations stay ahead of attackers. By anticipating potential threats and implementing measures to address them before they occur, companies can reduce their vulnerability to AI-enabled exploits. This proactive stance is particularly important given the speed and adaptability of modern cyber threats.


How to Get a Delayed IT Project Back on Track

The best way to launch a project revival is to look backward. "Conduct a thorough project reassessment to identify the root causes of delays, then re-prioritize deliverables using a phased, agile-based approach," suggests Karan Kumar Ratra, an engineering leader at Walmart specializing in e-commerce technology, leadership, and innovation. "Start with high-impact, manageable milestones to restore momentum and stakeholder confidence," he advises in an online interview. "Clear communication, accountability, and aligning leadership with revised goals are critical." ... Recall past team members, yet supplement them with new members with similar skills and project experience, recommends Pundalika Shenoy, automation and modernization project manager at business consulting firm Smartbridge, via email. "Outside perspectives and expertise will help the team." While new team members should be welcomed, try to retain at least some past contributors to ensure project continuity, Rahming advises. Fresh ideas and insights may be what the legacy project needs to succeed. "The new team members may well bring a sense of urgency, enthusiasm and skills ... that weren't present in the previous team at the time of the delay."


Daily Tech Digest - January 31, 2025


Quote for the day:

“If you genuinely want something, don’t wait for it–teach yourself to be impatient.” -- Gurbaksh Chahal


GenAI fueling employee impersonation with biometric spoofs and counterfeit ID fraud

The annual AuthenticID report underlines the surging wave of AI-powered identity fraud, with rising biometric spoofs and counterfeit ID fraud attempts. The 2025 State of Identity Fraud Report also looks at how identity verification tactics and technology innovations are tackling the problem. “In 2024, we saw just how sophisticated fraud has now become: from deepfakes to sophisticated counterfeit IDs, generative AI has changed the identity fraud game,” said Blair Cohen, AuthenticID founder and president. ... “In 2025, businesses should embrace the mentality to ‘think like a hacker’ to combat new cyber threats,” said Chris Borkenhagen, chief digital officer and information security officer at AuthenticID. “Staying ahead of evolving strategies such as AI deepfake-generated documents and biometrics, emerging technologies, and bad actor account takeover tactics are crucial in protecting your business, safeguarding data, and building trust with customers.” ... Face biometric verification company iProov has identified the Philippines as a particular hotspot for digital identity fraud, with corresponding need for financial institutions and consumers to be vigilant. “There is a massive increase at the moment in terms of identity fraud against systems using generative AI in particular and deepfakes,” said iProov chief technology officer Dominic Forrest.


Cyber experts urge proactive data protection strategies

"Every organisation must take proactive measures to protect the critical data it holds," Montel stated. Emphasising foundational security practices, he advised organisations to identify their most valuable information and protect potential attack paths. He noted that simple steps can drastically contribute to overall security. On the consumer front, Montel highlighted the pervasive nature of data collection, reminding individuals of the importance of being discerning about the personal information they share online. "Think before you click," he advised, underscoring the potential of openly shared public information to be exploited by cybercriminals. Adding to the discussion on data resilience, Darren Thomson, Field CTO at Commvault, emphasised the changing landscape of cyber defence and recovery strategies needed by organisations. Thompson pointed out that mere defensive measures are not sufficient; rapid recovery processes are crucial to maintain business resilience in the event of a cyberattack. The concept of a "minimum viable company" is pivotal, where businesses ensure continuity of essential operations even when under attack. With cybercriminal tactics becoming increasingly sophisticated, doing away with reliance solely on traditional backups is necessary. 


Trump Administration Faces Security Balancing Act in Borderless Cyber Landscape

The borderless nature of cyber threats and AI, the scale of worldwide commerce, and the globally interconnected digital ecosystem pose significant challenges that transcend partisanship. As recent experience makes us all too aware, an attack originating in one country, state, sector, or company can spread almost instantaneously, and with devastating impact. Consequently, whatever the ideological preferences of the Administration, from a pragmatic perspective cybersecurity must be a collaborative national (and international) activity, supported by regulations where appropriate. It’s an approach taken in the European Union, whose member states are now subject to the Second Network Information Security Directive (NIS2)—focused on critical national infrastructure and other important sectors—and the financial sector-focused Digital Operational Resilience Act (DORA). Both regulations seek to create a rising tide of cyber resilience that lifts all ships and one of the core elements of both is a focus on reporting and threat intelligence sharing. In-scope organizations are required to implement robust measures to detect cyber attacks, report breaches in a timely way, and, wherever possible, share the information they accumulate on threats, attack vectors, and techniques with the EU’s central cybersecurity agency (ENISA).


Infrastructure as Code: From Imperative to Declarative and Back Again

Today, tools like Terraform CDK (TFCDK) and Pulumi have become popular choices among engineers. These tools allow developers to write IaC using familiar programming languages like Python, TypeScript, or Go. At first glance, this is a return to imperative IaC. However, under the hood, they still generate declarative configurations — such as Terraform plans or CloudFormation templates — that define the desired state of the infrastructure. Why the resurgence of imperative-style interfaces? The answer lies in a broader trend toward improving developer experience (DX), enabling self-service, and enhancing accessibility. Much like the shifts we’re seeing in fields such as platform engineering, these tools are designed to streamline workflows and empower developers to work more effectively. ... The current landscape represents a blending of philosophies. While IaC tools remain fundamentally declarative in managing state and resources, they increasingly incorporate imperative-like interfaces to enhance usability. The move toward imperative-style interfaces isn’t a step backward. Instead, it highlights a broader movement to prioritize developer accessibility and productivity, aligning with the emphasis on streamlined workflows and self-service capabilities.
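The point about imperative surfaces over a declarative core is easiest to see in code. Below is a small Pulumi-style Python sketch (the same shape applies to Terraform CDK): it reads like an ordinary loop, but executing it under pulumi up only builds a desired-state graph that the engine then diffs against the real cloud. It assumes the pulumi and pulumi_aws packages and a configured AWS provider.

import pulumi
import pulumi_aws as aws

# A plain Python loop - an imperative convenience for the developer...
buckets = []
for env in ["dev", "staging", "prod"]:
    buckets.append(aws.s3.Bucket(f"app-logs-{env}", tags={"environment": env}))

# ...but the outcome is still declarative: three bucket resources whose desired
# state the engine reconciles on every run, exactly like a hand-written template.
pulumi.export("bucket_names", [b.id for b in buckets])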


How to Train AI Dragons to Solve Network Security Problems

We all know AI’s mantra: More data, faster processing, large models and you’re off to the races. But what if a problem is so specific — like network or DDoS security — that it doesn’t have a lot of publicly or privately available data you can use to solve it? As with other AI applications, the quality of the data you feed an AI-based DDoS defense system determines the accuracy and effectiveness of its solutions. To train your AI dragon to defend against DDoS attacks, you need detailed, real-world DDoS traffic data. Since this data is not widely and publicly available, your best option is to work with experts who have access to this data or, even better, have analyzed and used it to train their own AI dragons. To ensure effective DDoS detection, look at real-world, network-specific data and global trends as they apply to the network you want to protect. This global perspective adds valuable context that makes it easier to detect emerging or worldwide threats. ... Predictive AI models shine when it comes to detecting DDoS patterns in real-time. By using machine learning techniques such as time-series analysis, classification and regression, they can recognize patterns of attacks that might be invisible to human analysts. 
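To make the classification idea tangible, the sketch below trains a classifier on labeled traffic windows (normal vs. attack) and scores a new window. The flow features and numbers are synthetic stand-ins for the real-world DDoS data the article says you need.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Features per 10-second window: [packets/sec, unique source IPs, SYN ratio]
normal = np.column_stack([rng.normal(4_000, 800, 4000),
                          rng.normal(300, 60, 4000),
                          rng.normal(0.1, 0.03, 4000)])
attack = np.column_stack([rng.normal(90_000, 15_000, 400),
                          rng.normal(25_000, 5_000, 400),
                          rng.normal(0.8, 0.1, 400)])
X = np.vstack([normal, attack])
y = np.concatenate([np.zeros(len(normal)), np.ones(len(attack))])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
print(f"holdout accuracy: {clf.score(X_test, y_test):.3f}")

# A live window with a huge packet rate, many sources, and SYN-heavy traffic.
print("DDoS" if clf.predict([[85_000, 22_000, 0.75]])[0] == 1 else "normal")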


How law enforcement agents gain access to encrypted devices

When a mobile device is seized, law enforcement can request the PIN, password, or biometric data from the suspect to access the phone if they believe it contains evidence relevant to an investigation. In England and Wales, if the suspect refuses, the police can give a notice for compliance, and a further refusal is in itself a criminal offence under the Regulation of Investigatory Powers Act (RIPA). “If access is not gained, law enforcement use forensic tools and software to unlock, decrypt, and extract critical digital evidence from a mobile phone or computer,” says James Farrell, an associate at cyber security consultancy CyXcel. “However, there are challenges on newer devices and success can depend on the version of operating system being used.” ... Law enforcement agencies have pressured companies to create “lawful access” solutions, particularly on smartphones, to take Apple as an example. “You also have the co-operation of cloud companies, which if backups are held can sidestep the need to break the encryption of a device all together,” Closed Door Security’s Agnew explains. The security community has long argued against law enforcement backdoors, not least because they create security weaknesses that criminal hackers might exploit. “Despite protests from law enforcement and national security organizations, creating a skeleton key to access encrypted data is never a sensible solution,” CreateFuture’s Watkins argues.


The quantum computing reality check

Major cloud providers have made quantum computing accessible through their platforms, which creates an illusion of readiness for enterprise adoption. However, this accessibility masks a fatal flaw: Most quantum computing applications remain experimental. Indeed, most require deep expertise in quantum physics and specialized programming knowledge. Real-world applications are severely limited, and the costs are astronomical compared to the actual value delivered. ... The timeline to practical quantum computing applications is another sobering reality. Industry experts suggest we’re still 7 to 15 years away from quantum systems capable of handling production workloads. This extended horizon makes it difficult to justify significant investments. Until then, more immediate returns could be realized through existing technologies. ... The industry’s fascination with quantum computing has made companies fear being left behind or, worse, not being part of the “cool kids club”; they want to deliver extraordinary presentations to investors and customers. We tend to jump into new trends too fast because the allure of being part of something exciting and new is just too compelling. I’ve fallen into this trap myself. ... Organizations must balance their excitement for quantum computing with practical considerations about immediate business value and return on investment. I’m optimistic about the potential value in QaaS. 


Digital transformation in banking: Redefining the role of IT-BPM services

IT-BPM services are the engine of digital transformation in banking. They streamline operations through automation technologies like RPA, enhancing efficiency in processes such as customer onboarding and loan approvals. This automation reduces errors and frees up staff for strategic tasks like personalised customer support. By harnessing big data analytics, IT-BPM empowers banks to personalise services, detect fraud, and make informed decisions, ultimately improving both profitability and customer satisfaction. Robust security measures and compliance monitoring are also integral, ensuring the protection of sensitive customer data in the increasingly complex digital landscape. ... IT-BPM services are crucial for creating seamless, multi-channel customer experiences. They enable the development of intuitive platforms, including AI-driven chatbots and mobile apps, providing instant support and convenient financial management. This focus extends to personalised services tailored to individual customer needs and preferences, and a truly integrated omnichannel experience across all banking platforms. Furthermore, IT-BPM fosters agility and innovation by enabling rapid development of new digital products and services and facilitating collaboration with fintech companies.


Revolutionizing data management: Trends driving security, scalability, and governance in 2025

Artificial Intelligence and Machine Learning transform traditional data management paradigms by automating labour-intensive processes and enabling smarter decision-making. In the upcoming years, augmented data management solutions will drive efficiency and accuracy across multiple domains, from data cataloguing to anomaly detection. AI-driven platforms process vast datasets to identify patterns, automating tasks like metadata tagging, schema creation and data lineage mapping. ... In 2025, data masking will not be merely a compliance tool for GDPR, HIPAA, or CCPA; it will be a strategic enabler. With the rise in hybrid and multi-cloud environments, businesses will increasingly need to secure sensitive data across diverse systems. Specific solutions like IBM, K2view, Oracle and Informatica will revolutionize data masking by offering scale-based, real-time, context-aware masking. ... Real-time integration enhances customer experiences through dynamic pricing, instant fraud detection, and personalized recommendations. These capabilities rely on distributed architectures designed to handle diverse data streams efficiently. The focus on real-time integration extends beyond operational improvements. 
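As a small illustration of what "context-aware masking" can mean, the sketch below masks the same record differently depending on the caller's role and environment. The field names, roles, and rules are invented for the example rather than taken from any of the vendors named above.

import re

RULES = {              # which roles may see each field in the clear
    "pan": {"fraud_analyst"},
    "national_id": {"compliance"},
    "email": {"support", "compliance", "fraud_analyst"},
}

def mask_value(field: str, value: str) -> str:
    if field == "pan":                       # keep only the last 4 digits
        return re.sub(r"\d(?=\d{4})", "*", value)
    if field == "email":                     # keep the domain for routing
        local, _, domain = value.partition("@")
        return f"{local[:1]}***@{domain}"
    return "***"

def apply_masking(record: dict, role: str, environment: str) -> dict:
    return {
        field: value if (role in RULES.get(field, set()) and environment == "production")
        else mask_value(field, str(value))
        for field, value in record.items()
    }

record = {"pan": "4111111111111111", "national_id": "AB123456", "email": "ana@example.com"}
print(apply_masking(record, role="support", environment="production"))
print(apply_masking(record, role="fraud_analyst", environment="test"))   # non-prod: mask all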


Deploying AI at the edge: The security trade-offs and how to manage them

The moment you bring compute nodes into the far edge, you’re automatically exposing a lot of security challenges in your network. Even if you expect them to be “disconnected devices,” they could intermittently connect to transmit data. So, your security footprint is expanded. You must ensure that every piece of the stack you’re deploying at the edge is secure and trustworthy, including the edge device itself. When considering security for edge AI, you have to think about transmitting the trained model, runtime engine, and application from a central location to the edge, opening up the opportunity for a person-in-the-middle attack. ... In military operations, continuous data streams from millions of global sensors generate an overwhelming volume of information. Cloud-based solutions are often inadequate due to storage limitations, processing capacity constraints, and unacceptable latency. Therefore, edge computing is crucial for military applications, enabling immediate responses and real-time decision-making. In commercial settings, many environments lack reliable or affordable connectivity. Edge AI addresses this by enabling local data processing, minimizing the need for constant communication with the cloud. This localized approach enhances security. Instead of transmitting large volumes of raw data, only essential information is sent to the cloud. 


Daily Tech Digest - November 17, 2024

Why Are User Acceptance Tests Such a Hassle?

In the reality of many projects, UAT often becomes irreplaceable and needs to be extensive, covering a larger part of the testing pyramid than recommended ... Automated end-to-end tests often fail to cover third-party integrations due to limited access and support, requiring UAT. For instance, if a system integrates with an analytics tool, any changes to the system may require stakeholders to verify the results on the tool as well. ... In industries such as finance, healthcare, or aviation, where regulatory compliance is critical, UATs must ensure that the software meets all legal and regulatory requirements. ... In projects involving intricate business workflows, many UATs may be necessary to cover all possible scenarios and edge cases. ... This process can quickly become complex when dealing with numerous test cases, engineering teams, and stakeholder groups. This complexity often results in significant manual effort in both testing and collaboration. Even though UATs are cumbersome, most companies do not automate them because they focus on validating business requirements and user experiences, which require subjective assessment. However, automating UAT can save testing hours and the effort to coordinate testing sessions.
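Where acceptance criteria can be pinned down precisely, even a slice of UAT can be captured as table-driven tests instead of a coordinated manual session. The sketch below uses pytest; the loan-approval rule, function, and figures are hypothetical placeholders for criteria stakeholders would actually sign off on.

import pytest

def approve_loan(application: dict) -> dict:
    """Placeholder for the system under test (normally exercised via its API or UI)."""
    rate = 0.06 if application["credit_score"] >= 720 else 0.09
    return {"approved": application["income"] * 5 >= application["amount"], "rate": rate}

CASES = [  # (application, expected approval, expected rate) agreed with stakeholders
    ({"credit_score": 750, "income": 60_000, "amount": 250_000}, True, 0.06),
    ({"credit_score": 680, "income": 40_000, "amount": 250_000}, False, 0.09),
]

@pytest.mark.parametrize("application,expected_approved,expected_rate", CASES)
def test_loan_approval_matches_business_rules(application, expected_approved, expected_rate):
    result = approve_loan(application)
    assert result["approved"] is expected_approved
    assert result["rate"] == pytest.approx(expected_rate)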


The full-stack architect: A new lead role for crystalizing EA value

First, the full-stack architect could ensure the function’s other architects are indeed aligned, not only among themselves, but with stakeholders from both the business and engineering. That last bit shouldn’t be overlooked, Ma says. While much attention gets paid to the notion that architects should be able to work fluently with the business, they should, in fact, work just as fluently with Engineering, meaning that whoever steps into the role should wield deep technical expertise, an attribute vital to earning the respect of engineers, and one that more traditional enterprise architects lack. For both types of stakeholders, then, the full-stack architect could serve as a single point of contact. Less “telephone,” as it were. And it could clarify the value proposition of EA as a singular function — and with respect to the business it serves. Finally, the role would probably make a few other architects unnecessary, or at least allow them to concentrate more fully on their respective principal responsibilities. No longer would they have to coordinate their peers. Ma’s inspiration for the role finds its origin in the full-stack engineer, as Ma sees EA today evolving similarly to how software engineering evolved about 15 years ago. 


Groundbreaking 8-Photon Qubit Chip Accelerates Quantum Computing

Quantum circuits based on photonic qubits are among the most promising technologies currently under active research for building a universal quantum computer. Several photonic qubits can be integrated into a tiny silicon chip as small as a fingernail, and a large number of these tiny chips can be connected via optical fibers to form a vast network of qubits, enabling the realization of a universal quantum computer. Photonic quantum computers offer advantages in terms of scalability through optical networking, room-temperature operation, and the low energy consumption. ... The research team measured the Hong-Ou-Mandel effect, a fascinating quantum phenomenon in which two different photons entering from different directions can interfere and travel together along the same path. In another notable quantum experiment, they demonstrated a 4-qubit entangled state on a 4-qubit integrated circuit (5mm x 5mm). Recently, they have expanded their research to 8 photon experiments using an 8-qubit integrated circuit (10mm x 5mm). The researchers plan to fabricate 16-qubit chips within this year, followed by scaling up to 32-qubits as part of their ongoing research toward quantum computation.
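As a reminder of the physics involved (a textbook sketch, not taken from the research itself): at a balanced beam splitter, the Hong-Ou-Mandel effect makes two indistinguishable photons bunch,

\lvert 1,1\rangle \;\longrightarrow\; \tfrac{1}{\sqrt{2}}\bigl(\lvert 2,0\rangle - \lvert 0,2\rangle\bigr),

so the "one photon in each output" coincidence term cancels and both photons exit through the same port. A 4-qubit entangled state of the GHZ type (assuming that is the family demonstrated; the article does not specify) has the form

\lvert \mathrm{GHZ}_4\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(\lvert 0000\rangle + \lvert 1111\rangle\bigr).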


Mastering The Role Of CISO: What The Job Really Entails

A big part of a CISO’s job is working effectively with other senior executives. Success isn’t just about technical prowess; it’s about building relationships and navigating the politics of the C-suite. Whether you’re collaborating with the CEO, CFO, CIO, or CLO, you must be able to work within a broader leadership context to align security goals with business objectives. One of the most important lessons I’ve learned is to involve key stakeholders early and often. Don’t wait until you have a finalized proposal to present; get input and feedback from the relevant parties—especially the CTO, CIO, CLO, and CFO—at every stage. This collaborative approach helps you refine your security plans, ensures they are aligned with the company’s broader strategy, and reduces the likelihood of pushback when it’s time to present your final recommendations. ... While technical expertise forms the foundation of the CISO role, much of the work comes down to creative problem-solving. Being a CISO is like being a puzzle solver—you need to look at your organization’s specific challenges, risks, and goals, and figure out how to put the pieces together in a way that addresses both current and future needs.


Why Future-proofing Cybersecurity Regulatory Frameworks Is Essential

As regulations evolve, ensuring the security and privacy of the personal information used in AI training looks set to become increasingly difficult, which could lead to severe consequences for both individuals and organizations. The same survey went on to reveal that 30% of developers believe that there is a general lack of understanding among regulators who are not equipped with the right set of skills to comprehend the technology they're tasked with regulating. With skills and knowledge in question, alongside rapidly advancing AI and cybersecurity threats, what exactly should regulators keep in mind when creating regulatory frameworks that are both adaptable and effective? It's my view that, firstly, regulators should know all the options on the table when it comes to possible privacy-enhancing technologies (PETs). ... Incorporating continuous learning within the organization is also crucial, as well as allowing employees to participate in industry events and conferences to stay up to speed on the latest developments and to meet with experts. Where possible, we should be creating collaborations with the industry — for example, inviting representatives of tech companies to give internal seminars or demonstrations.


AI could alter data science as we know it - here's why

Davenport and Barkin note that generative AI will take citizen development to a whole new level. "First is through conversational user interfaces," they write. "Virtually every vendor of software today has announced or is soon to introduce a generative AI interface." "Now or in the very near future, someone interested in programming or accessing/analyzing data need only make a request to an AI system in regular language for a program containing a set of particular functions, an automation workflow with key steps and decisions, or a machine-learning analysis involving particular variables or features." ... Looking beyond these early starts, with the growth of AI, RPA, and other tools, "some citizen developers are likely to no longer be necessary, and every citizen will need to change how they do their work," Davenport and Barkin speculate. ... "The rise of AI-driven tools capable of handling data analysis, modeling, and insight generation could force a shift in how we view the role and future of data science itself," said Ligot. "Tasks like data preparation, cleansing, and even basic qualitative analysis -- activities that consume much of a data scientist's time -- are now easily automated by AI systems."


Scaling Small Language Models (SLMs) For Edge Devices: A New Frontier In AI

Small language models (SLMs) are lightweight neural network models designed to perform specialized natural language processing tasks with fewer computational resources and parameters, typically ranging from a few million to several billion parameters. Unlike large language models (LLMs), which aim for general-purpose capabilities across a wide range of applications, SLMs are optimized for efficiency, making them ideal for deployment in resource-constrained environments such as mobile devices, wearables and edge computing systems. ... One way to make SLMs work on edge devices is through model compression. This reduces the model’s size without losing much performance. Quantization is a key technique that simplifies the model’s data, like turning 32-bit numbers into 8-bit, making the model faster and lighter while maintaining accuracy. Think of a smart speaker—quantization helps it respond quickly to voice commands without needing cloud processing. ... The growing prominence of SLMs is reshaping the AI world, placing a greater emphasis on efficiency, privacy and real-time functionality. For everyone from AI experts to product developers and everyday users, this shift opens up exciting possibilities where powerful AI can operate directly on the devices we use daily—no cloud required.
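For a concrete feel of the quantization technique mentioned above, the sketch below applies PyTorch post-training dynamic quantization, which stores the Linear layers' weights as 8-bit integers instead of 32-bit floats (roughly a 4x reduction for those layers). The tiny network is only a stand-in for a real small language model.

import torch
import torch.nn as nn

model_fp32 = nn.Sequential(
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 512),
)

# Quantize the Linear layers' weights to int8; activations are quantized on the fly.
model_int8 = torch.quantization.quantize_dynamic(model_fp32, {nn.Linear}, dtype=torch.qint8)

# Same interface, smaller weights: the quantized model is a drop-in replacement.
x = torch.randn(1, 512)
with torch.no_grad():
    diff = (model_fp32(x) - model_int8(x)).abs().max().item()
print(f"max output difference after quantization: {diff:.4f}")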


How To Ensure Your Cloud Project Doesn’t Fail

To get the best out of your team requires striking a delicate balance between discipline and freedom. A bunch of “computer nerds” might not produce much value if left completely to their own devices. But they also won’t be innovative if not given freedom to explore and mess around with ideas. When building your Cloud team, look beyond technical skills. Seek individuals who are curious, adaptable, and collaborative. These traits are crucial for navigating the ever-changing landscape of Cloud technology and fostering an environment of continuous innovation. ... Culture plays a pivotal role in successful Cloud adoption. To develop the right culture for Cloud innovation, start by clearly defining and communicating your company's values and goals. You should also work to foster an environment that encourages calculated risk-taking and learning from failures as well as promotes collaboration and knowledge sharing across teams. Finally, make sure to incentivise your culture by recognising and rewarding innovation, not just successful outcomes. ... Having a well-defined culture is just the first step. To truly harness the power of your talent, you need to embed your definition of talent into every aspect of your company's processes.


2025 Tech Predictions – A Year of Realisation, Regulations and Resilience

A number of businesses are expected to move workloads from the public cloud back to on-premises data centres to manage costs and improve efficiencies. This is the essence of data freedom – the ability to move and store data wherever you need it, with no vendor lock-in. Organisations that previously shifted to the public cloud now realise that a hybrid approach is more advantageous for achieving cloud economics. While the public cloud has its benefits, local infrastructure can offer superior control and performance in certain instances, such as for resource-intensive applications that need to remain closer to the edge. ... As these threats become more commonplace, businesses are expected to adopt more proactive cybersecurity strategies and advanced identity validation methods, such as voice authentication. The uptake of AI-powered solutions to prevent and prepare for cyberattacks is also expected to increase. ... Unsurprisingly, the continuous proliferation of data into 2025 will see the introduction of new AI-focused roles. Chief AI Officers (CAIOs) are responsible for overseeing the ethical, responsible and effective use of AI across organisations and bridging the gap between technical teams and key stakeholders.


In an Age of AI, Cloud Security Skills Remain in Demand

While identifying and recruiting the right tech and security talent is crucial, cybersecurity experts note that organizations must make a conscientious choice to invest in cloud security, especially as more data is uploaded and stored within SaaS apps and third-party, infrastructure-as-a-service (IaaS) providers such as Amazon Web Services and Microsoft Azure. “To close the cloud security skills gap, organizations should prioritize cloud-specific security training and certifications for their IT staff,” Stephen Kowski, field CTO at security firm SlashNext, told Dice. “Implementing cloud-native security tools that provide comprehensive visibility and protection across multi-cloud environments can help mitigate risks. Engaging managed security service providers with cloud expertise can also supplement in-house capabilities and provide valuable guidance.” Jason Soroko, a senior Fellow at Sectigo, expressed similar sentiments when it comes to organizations assisting in building out their cloud security capabilities and developing the talent needed to fulfill this mission. “To close the cloud security skills gap, organizations should offer targeted training programs, support certification efforts and consider hiring experts to mentor existing teams,” Soroko told Dice. 



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson

Daily Tech Digest - September 18, 2024

Putting Threat Modeling Into Practice: A Guide for Business Leaders

One of the primary benefits of threat modeling is its ability to reduce the number of defects that make it to production. By identifying potential threats and vulnerabilities during the design phase, companies can implement security measures that prevent these issues from ever reaching the production environment. This proactive approach not only improves the quality of products but also reduces the costs associated with post-production fixes and patches. ... Along similar lines, threat modeling can help meet obligations defined in contracts if those contracts include terms related to risk identification and management. ... Beyond obligations linked to compliance and contracts, many businesses also establish internal IT security goals. For example, they might seek to configure access controls based on the principle of least privilege or enforce zero-trust policies on their networks. Threat modeling can help to put these policies into practice by allowing organizations to identify where their risks actually lie. From this perspective, threat modeling is a practice that the IT organization can embrace because it helps achieve larger goals – namely, those related to internal governance and security strategy.


How Cloud Custodian conquered cloud resource management

Everybody knows the cloud bill is basically rate multiplied by usage. But while most enterprises have a handle on rate, usage is the hard part. You have different application teams provisioning infrastructure. You go through code reviews. Then when you get to five to 10 applications, you get past the point where anyone can possibly know all the components. Now you have containerized workloads on top of more complex microservices architectures. And you want to be able to allow a combination of cathedral (control) and bazaar (freedom of technology choice) governance, especially today with AI and all of the new frameworks and LLMs [large language models]. At a certain point you lose the script to be able to follow all of this in your head. There are a lot of tools to enable that understanding — architectural views, network service maps, monitoring tools — all feeling out different parts of the elephant versus giving an organization a holistic view. They need to know not only what’s in their cloud environment, but what’s being used, what’s conforming to policy, and what needs to be fixed, and how. That’s what Cloud Custodian is for — so you can define the organizational requirements of your applications and map those up against cloud resources as policy.
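
To make the idea of mapping organizational requirements against cloud resources concrete, here is a minimal sketch of the kind of check a single Custodian policy encodes. Custodian itself expresses this declaratively in YAML and adds filters, actions, and scheduling; the boto3 version below, the region, and the "every instance must carry an Owner tag" rule are illustrative assumptions, not the tool's actual mechanics.

```python
# Illustrative sketch (not Cloud Custodian itself): find running EC2 instances
# that violate an example organizational rule, "every instance must have an
# Owner tag". Assumes AWS credentials are already configured for boto3.
import boto3

def instances_missing_owner_tag(region: str = "us-east-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if "Owner" not in tags:
                    offenders.append(instance["InstanceId"])
    return offenders

if __name__ == "__main__":
    print("Non-conforming instances:", instances_missing_owner_tag())
```

A Custodian policy wraps this same logic in a few lines of declarative configuration and can attach remediation actions (stop, tag, notify) rather than just reporting.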


5 Steps to Identify and Address Incident Response Gaps

To compress the time it takes to address an incident, it’s not enough to stick to the eyes-on-glass model that network operations centers (NOCs) have traditionally privileged. It’s too human-intensive and error-prone to effectively triage an increasingly overwhelming volume of data. To go from event to resolution with minimal toil and increased speed, teams can leverage AI and automation to deflect noise, surface only the most critical alerts and automate diagnostics and remediations. Generative AI can amplify that effect: For teams collaborating in ChatOps tools, common diagnostic questions can be used as prompts to get context and accelerate action. ... When an incident hits, teams spend too much time gathering information and looping in numerous people to tackle it. Generative AI can be used to quickly summarize key data about the incident and provide actionable insights at every step of the incident life cycle. It can also supercharge the ability of teams, even non-technical ones, to develop and deploy automation jobs faster: Operators can translate conversational prompts into proposed runbook automation or leverage pre-engineered prompts based on common categories.
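
As a rough illustration of using generative AI to summarize key incident data, the sketch below collects a handful of alerts into a prompt and asks a chat-completion model for a briefing. The OpenAI client, the model name, and the alert field names are assumptions; any LLM endpoint wired into a ChatOps workflow could play the same role.

```python
# Hedged sketch: turn a batch of alerts into an incident briefing with an LLM.
# The SDK, model name, and alert fields are placeholders, not a prescription.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def summarize_incident(alerts: list[dict]) -> str:
    alert_lines = "\n".join(
        f"- {a['timestamp']} {a['service']}: {a['message']}" for a in alerts
    )
    prompt = (
        "Summarize these alerts as one incident: likely cause, affected "
        "services, and the first three diagnostic steps to run.\n" + alert_lines
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```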


DevOps with OpenShift Pipelines and OpenShift GitOps

Unlike some other CI solutions, such as the legacy tool Jenkins, Pipelines is built on native Kubernetes technologies and thus is resource-efficient, since pipelines and tasks are only actively running when needed. Once the pipeline has completed, no resources are consumed by the pipeline itself. Pipelines and tasks are constructed using a declarative approach following standard Kubernetes practices. However, OpenShift Pipelines includes a user-friendly interface built into the OpenShift console that enables users to easily monitor the execution of the pipelines and view task logs as needed. The user interface also shows metrics for individual task execution, enabling users to better optimize pipeline performance. In addition, the user interface enables users to quickly create and modify pipelines visually. While users are encouraged to store tasks and Pipeline resources in Git, the ability to visually create and modify pipelines greatly reduces the learning curve and makes the technology approachable for new users. You can leverage pipelines-as-code to provide an experience that is tightly integrated with your backend Git provider, such as GitHub or GitLab.
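
To show what "constructed using a declarative approach following standard Kubernetes practices" looks like, the sketch below builds a minimal Tekton Task manifest and submits it through the Kubernetes API. In practice such a Task would normally be written as YAML and stored in Git; the namespace, container image, and test command here are assumptions for illustration.

```python
# Hedged sketch: an OpenShift Pipelines (Tekton) Task is an ordinary Kubernetes
# custom resource. It usually lives as YAML in Git; here the equivalent manifest
# is built as a dict and created via the Kubernetes Python client. Assumes a
# reachable cluster, a local kubeconfig, and an existing "demo" namespace.
from kubernetes import client, config

unit_test_task = {
    "apiVersion": "tekton.dev/v1",
    "kind": "Task",
    "metadata": {"name": "run-unit-tests"},
    "spec": {
        "steps": [
            {
                "name": "pytest",
                "image": "python:3.12",
                "script": "#!/bin/sh\npip install -r requirements.txt && pytest",
            }
        ]
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="tekton.dev",
    version="v1",
    namespace="demo",
    plural="tasks",
    body=unit_test_task,
)
```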


Rethinking enterprise architects’ roles for agile transformation

Mounting technical debt and extending the life of legacy systems are key risks CIOs should be paranoid about. The question is, how should CIOs assign ownership to this problem, require that technical debt’s risks are categorized, and ensure there’s a roadmap for implementing remediations? One solution is to assign the responsibility to enterprise architects in a product management capacity. Product managers must define a vision statement that aligns with strategic and end-user needs, propose prioritized roadmaps, and oversee an agile backlog for agile delivery teams. ... Enterprise architects who have a software development background are ideal candidates to assume the delivery leader role and can steer teams toward developing platforms with baked-in security, performance, usability, and other best practices. ... Enterprise architects assuming a sponsorship role in these initiatives can help steer them toward force-multiplying transformations that reduce risks and provide additional benefits in improved experiences and better decision-making. CIOs who want enterprise architects to act as sponsors should provide them with a budget and oversee the development of a charter for managing investment priorities.


The best way to regulate AI might be not to specifically regulate AI. This is why

Most of the potential uses of AI are already covered by existing rules and regulations designed to do things such as protect consumers, protect privacy and outlaw discrimination. These laws are far from perfect, but where they fall short the best approach is to fix or extend them rather than introduce special extra rules for AI. AI can certainly raise challenges for the laws we have – for example, by making it easier to mislead consumers or to apply algorithms that help businesses to collude on prices. ... Finally, there’s a lot to be said for becoming an international “regulation taker”. Other jurisdictions such as the European Union are leading the way in designing AI-specific regulations. Product developers worldwide, including those in Australia, will need to meet those new rules if they want to access the EU and those other big markets. If Australia developed its own idiosyncratic AI-specific rules, developers might ignore our relatively small market and go elsewhere. This means that, in those limited situations where AI-specific regulation is needed, the starting point ought to be the overseas rules that already exist. There’s an advantage in being a late or last mover.


How LLMs on the Edge Could Help Solve the AI Data Center Problem

Anyone interacting with an LLM in the cloud potentially exposes the organization to privacy questions and the risk of a cybersecurity breach. As more queries and prompts are being done outside the enterprise, there are going to be questions about who has access to that data. After all, users are asking AI systems all sorts of questions about their health, finances, and businesses. To do so, these users often enter personally identifiable information (PII), sensitive healthcare data, customer information, or even corporate secrets. The move toward smaller LLMs that can either be contained within the enterprise data center – and thus not run in the cloud – or run on local devices is a way to bypass many of the ongoing security and privacy concerns posed by broad usage of LLMs such as ChatGPT. ... Pruning the models to reach a more manageable number of parameters is one obvious way to make them more feasible on the edge. Further, developers are shifting the GenAI model from the GPU to the CPU, reducing the processing footprint, and building standards for compiling. As well as the smartphone applications noted above, the use cases that lead the way will be those that are achievable despite limited connectivity and bandwidth, according to Goetz.
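
As a concrete, hedged example of shrinking a model for CPU-bound edge inference, the sketch below applies post-training dynamic quantization with PyTorch to a small open model. The model name is a stand-in, and production edge deployments typically combine pruning, quantization, and a compiled runtime rather than relying on this single step.

```python
# Hedged sketch: post-training dynamic quantization to int8 so a small model can
# run on CPU. "facebook/opt-125m" is only a stand-in for whatever small,
# edge-friendly model an organization actually deploys.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

# Replace nn.Linear layers with int8 dynamically quantized versions (CPU only).
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Edge inference keeps this prompt on-device.", return_tensors="pt")
with torch.no_grad():
    logits = quantized(**inputs).logits
print("Output logits shape:", tuple(logits.shape))
```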


'Good complexity' can make hospital networks more cybersecure

Because complicated systems have structures, Tanriverdi says, it's difficult but feasible to predict and control what they'll do. That's not feasible for complex systems, with their unstructured connections. Tanriverdi found that as health care systems got more complex, they became more vulnerable. ... The problem, he says, is that such systems offer more data transfer points for hackers to attack, and more opportunities for human users to make security errors. He found similar vulnerabilities with other forms of complexity, including many different types of medical services handling health data, and decentralizing strategic decisions to member hospitals instead of making them at the corporate center. The researchers also proposed a solution: building enterprise-wide data governance platforms, such as centralized data warehouses, to manage data sharing among diverse systems. Such platforms would convert dissimilar data types into common ones, structure data flows, and standardize security configurations. "They would transform a complex system into a complicated system," he says. By simplifying the system, they would further lower its level of complication.
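
As a loose illustration of "converting dissimilar data types into common ones", the sketch below normalizes two invented hospital admission feeds into one shared record type; every field name and format here is hypothetical.

```python
# Hedged sketch: normalize two dissimilar admission feeds into one common
# schema, the core move of a centralized data governance platform. All field
# names and formats are invented for illustration.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Admission:
    patient_id: str
    admitted_at: datetime
    department: str

def from_hospital_a(row: dict) -> Admission:
    # Hospital A (hypothetical) exports ISO timestamps and short column names.
    return Admission(row["pt_id"], datetime.fromisoformat(row["adm_ts"]), row["dept"])

def from_hospital_b(row: dict) -> Admission:
    # Hospital B (hypothetical) exports US-style timestamps and different names.
    return Admission(
        row["PatientID"],
        datetime.strptime(row["AdmitDate"], "%m/%d/%Y %H:%M"),
        row["Unit"],
    )
```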


Threats by Remote Execution and Activating Sleeper Devices in the Context of IoT and Connected Devices

As the Internet of Things proliferates, the number of connected devices in both civilian and military contexts is increasing exponentially. From smart homes to military-grade equipment, the IoT ecosystem connects billions of devices, all of which can potentially be exploited by adversaries. The pagers in the Hezbollah case, though low-tech compared to modern IoT devices, represent the vulnerability of a system where devices are remotely controllable. In the IoT realm, the stakes are even higher, as everyday devices like smart thermostats, security cameras, and industrial equipment are interconnected and potentially exploitable. In a modern context, this vulnerability could be magnified when applied to smart cities, critical infrastructure, and defense systems. If devices such as power grids, water systems, or transportation networks are connected to the internet, they could be subjected to remote control by malicious actors. ... One of the most alarming aspects of this situation is the suspected infiltration of the supply chain. The pagers used by Hezbollah were reportedly tampered with before being delivered to the group, likely with explosives embedded within the devices.


Detecting vulnerable code in software dependencies is more complex than it seems

A “phantom dependency” refers to a package used in your code that isn’t declared in the manifest. This concept is not unique to any one language (it’s common in JavaScript, NodeJS, and Python). This is problematic because you can’t secure what you can’t see. Traditional SCA solutions focus on manifest files to identify all application dependencies, but those can both be under- or over-representative of the dependencies actually used by the application. They can be under-representative if the analysis starts from a manifest file that only contains a subset of dependencies, e.g., when additional dependencies are installed in a manual, scripted or dynamic fashion. This can happen in Python ML/AI applications, for example, where the choice of packages and versions often depends on operating systems or hardware architectures, which cannot be fully expressed by dependency constraints in manifest files. And they are over-representative if they contain dependencies not actually used. This happens, for example, if you dump the names of all the components contained in a bloated runtime environment into a manifest file.
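
As a rough sketch of why phantom dependencies are only visible when you look past the manifest, the code below compares the top-level modules actually imported in a source tree against what requirements.txt declares. The "src" directory and requirements path are placeholders, and the mapping from distribution names to import names (e.g. scikit-learn vs. sklearn) that real SCA tools must handle is deliberately left out.

```python
# Hedged sketch: flag modules imported in code but absent from requirements.txt.
# Paths are placeholders; a real tool would also map package names to module
# names and inspect installed distributions, not just the manifest.
import ast
import pathlib
import sys

def imported_modules(src_dir: str) -> set[str]:
    found = set()
    for path in pathlib.Path(src_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                found.add(node.module.split(".")[0])
    return found - set(sys.stdlib_module_names)

def declared_packages(requirements: str) -> set[str]:
    names = set()
    for line in pathlib.Path(requirements).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            names.add(line.split("==")[0].split(">=")[0].split("[")[0].lower())
    return names

phantoms = imported_modules("src") - declared_packages("requirements.txt")
print("Imported but not declared (possible phantom dependencies):", sorted(phantoms))
```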



Quote for the day:

"An accountant makes you aware but a leader makes you accountable." -- Henry Cloud