
Daily Tech Digest - April 25, 2025


Quote for the day:

"Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it." -- Johann Wolfgang von Goethe


Revolutionizing Application Security: The Plea for Unified Platforms

“Shift left” is a practice that focuses on addressing security risks earlier in the development cycle, before deployment. While effective in theory, this approach has proven problematic in practice as developers and security teams have conflicting priorities. ... Cloud native applications are dynamic: they are constantly deployed, updated and scaled, so robust real-time protection measures are absolutely necessary. Every time an application is updated or deployed, new code, configurations or dependencies appear, all of which can introduce new vulnerabilities. The problem is that it is difficult to implement real-time cloud security with a traditional, compartmentalized approach. Organizations need real-time security measures that provide continuous monitoring across the entire infrastructure, detect threats as they emerge and automatically respond to them. As Tager explained, implementing real-time prevention is necessary “to stay ahead of the pace of attackers.” ... Cloud native applications tend to rely heavily on open source libraries and third-party components. In 2021, Log4j’s Log4Shell vulnerability demonstrated how a single compromised component could affect millions of devices worldwide, exposing countless enterprises to risk. Effective application security now extends far beyond the traditional scope of code scanning and must reflect the modern engineering environment.


AI-Powered Polymorphic Phishing Is Changing the Threat Landscape

Polymorphic phishing is an advanced form of phishing campaign that randomizes the components of emails, such as their content, subject lines, and senders’ display names, to create several almost identical emails that only differ by a minor detail. In combination with AI, polymorphic phishing emails have become highly sophisticated, creating more personalized and evasive messages that result in higher attack success rates. ... Traditional detection systems group phishing emails together to enhance their detection efficacy based on commonalities in phishing emails, such as payloads or senders’ domain names. The use of AI by cybercriminals has allowed them to conduct polymorphic phishing campaigns with subtle but deceptive variations that can evade security measures like blocklists, static signatures, secure email gateways (SEGs), and native security tools. For example, cybercriminals modify the subject line by adding extra characters and symbols, or they can alter the length and pattern of the text. ... The standard way of grouping individual attacks into campaigns to improve detection efficacy will become irrelevant by 2027. Organizations need to find alternative measures to detect polymorphic phishing campaigns that don’t rely on blocklists and that can identify the most advanced attacks.
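
As a rough illustration of why grouping by exact commonalities breaks down, the sketch below (plain Python, with made-up subject lines) shows how exact matching treats each polymorphic variant as a new campaign, while a simple fuzzy-similarity threshold can still cluster them together.

```python
# Illustrative sketch: exact-match grouping misses polymorphic variants,
# while fuzzy similarity can still cluster near-identical subject lines.
from difflib import SequenceMatcher

subjects = [
    "Invoice #4821 overdue - action required",
    "Invoice #4821 overdue -- action required!",
    "Invoice  #4821 0verdue - action required",
    "Your quarterly newsletter is here",
]

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Exact matching treats every variant as a distinct campaign.
print(len(set(subjects)), "distinct subjects by exact match")

# Fuzzy matching groups variants that differ only by minor edits.
threshold = 0.9
clusters: list[list[str]] = []
for subj in subjects:
    for cluster in clusters:
        if similarity(subj, cluster[0]) >= threshold:
            cluster.append(subj)
            break
    else:
        clusters.append([subj])

for cluster in clusters:
    print(cluster)
```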


Does AI Deserve Worker Rights?

Chalmers et al. declare that there are three things that AI-adopting institutions can do to prepare for the coming consciousness of AI: “They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern.” What would “an appropriate level of moral concern” actually look like? According to Kyle Fish, Anthropic’s AI welfare researcher, it could take the form of allowing an AI model to stop a conversation with a human if the conversation turned abusive. “If a user is persistently requesting harmful content despite the model’s refusals and attempts at redirection, could we allow the model simply to end that interaction?” Fish told the New York Times in an interview. What exactly would model welfare entail? The Times cites a comment made last week by podcaster Dwarkesh Patel, who compared model welfare to animal welfare, stating it was important to make sure we don’t reach “the digital equivalent of factory farming” with AI. Considering Nvidia CEO Jensen Huang’s desire to create giant “AI factories” filled with millions of his company’s GPUs cranking through GenAI and agentic AI workflows, perhaps the factory analogy is apropos.


Cybercriminals switch up their top initial access vectors of choice

“Organizations must leverage a risk-based approach and prioritize vulnerability scanning and patching for internet-facing systems,” wrote Saeed Abbasi, threat research manager at cloud security firm Qualys, in a blog post. “The data clearly shows that attackers follow the path of least resistance, targeting vulnerable edge devices that provide direct access to internal networks.” Greg Linares, principal threat intelligence analyst at managed detection and response vendor Huntress, said, “We’re seeing a distinct shift in how modern attackers breach enterprise environments, and one of the most consistent trends right now is the exploitation of edge devices.” Edge devices, ranging from firewalls and VPN appliances to load balancers and IoT gateways, serve as the gateway between internal networks and the broader internet. “Because they operate at this critical boundary, they often hold elevated privileges and have broad visibility into internal systems,” Linares noted, adding that edge devices are often poorly maintained and not integrated into standard patching cycles. Linares explained: “Many edge devices come with default credentials, exposed management ports, secret superuser accounts, or weakly configured services that still rely on legacy protocols — these are all conditions that invite intrusion.”


5 tips for transforming company data into new revenue streams

Data monetization can be risky, particularly for organizations that aren’t accustomed to handling financial transactions. There’s an increased threat of security breaches as other parties become aware that you’re in possession of valuable information, ISG’s Rudy says. Another risk is unintentionally using data you don’t have a right to use or discovering that the data you want to monetize is of poor quality or doesn’t integrate across data sets. Ultimately, the biggest risk is that no one wants to buy what you’re selling. Strong security is essential, Agility Writer’s Yong says. “If you’re not careful, you could end up facing big fines for mishandling data or not getting the right consent from users,” he cautions. If a data breach occurs, it can deeply damage an enterprise’s reputation. “Keeping your data safe and being transparent with users about how you use their info can go a long way in avoiding these costly mistakes.” ... “Data-as-a-service, where companies compile and package valuable datasets, is the base model for monetizing data,” he notes. However, insights-as-a-service, where customers are provided with prescriptive/predictive modeling capabilities, can demand a higher valuation. Another consideration is offering an insights platform-as-a-service, where subscribers can securely integrate their data into the provider’s insights platform.


Are AI Startups Faking It Till They Make It?

"A lot of VC funds are just kind of saying, 'Hey, this can only go up.' And that's usually a recipe for failure - when that starts to happen, you're becoming detached from reality," Nnamdi Okike, co-founder and managing partner at 645 Ventures, told Tradingview. Companies are branding themselves as AI-driven, even when their core technologies lack substantive AI components. A 2019 study by MMC Ventures found 40% of surveyed "AI startups" in Europe showed no evidence of AI integration in their products or services. And this was before OpenAI further raised the stakes with the launch of ChatGPT in 2022. It's a slippery slope. Even industry behemoths have had to clarify the extent of their AI involvement. Last year, tech giant and the fourth-most richest company in the world Amazon pushed back on allegations that its AI-powered "Just Walk Out" technology installed at its physical grocery stores for a cashierless checkout was largely being driven by around 1,000 workers in India who manually checked almost three quarters of the transactions. Amazon termed these reports "erroneous" and "untrue," adding that the staff in India were not reviewing live footage from the stores but simply reviewing the system. The incentive to brand as AI-native has only intensified. 


From deployment to optimisation: Why cloud management needs a smarter approach

As companies grow, so does their cloud footprint. Managing multiple cloud environments—across AWS, Azure, and GCP—often results in fragmented policies, security gaps, and operational inefficiencies. A Multi-Cloud Maturity Research Report by Vanson Bourne states that nearly 70% of organisations struggle with multi-cloud complexity, despite 95% agreeing that multi-cloud architectures are critical for success. Companies are shifting away from monolithic architecture to microservices, but managing distributed services at scale remains challenging. ... Regulatory requirements like SOC 2, HIPAA, and GDPR demand continuous monitoring and updates. The challenge is not just staying compliant but ensuring that security configurations remain airtight. IBM’s Cost of a Data Breach Report reveals that the average cost of a data breach in India reached ₹195 million in 2024, with cloud misconfiguration accounting for 12% of breaches. The risk is twofold: businesses either overprovision resources—wasting money—or leave environments under-secured, exposing them to breaches. Cyber threats are also evolving, with attackers increasingly targeting cloud environments. Phishing and credential theft accounted for 18% of incidents each, according to the IBM report. 


Inside a Cyberattack: How Hackers Steal Data

Once a hacker breaches the perimeter, the standard practice is to establish a beachhead and then move laterally to find the organisation’s crown jewels: its most valuable data. Within a financial or banking organisation it is likely there is a database on a server that contains sensitive customer information. A database is essentially a complicated spreadsheet, from which a hacker can simply run a SELECT query and copy everything. In this instance data security is essential; however, many organisations confuse data security with cybersecurity. Organisations often rely on encryption to protect sensitive data, but encryption alone isn’t enough if the decryption keys are poorly managed. If an attacker gains access to the decryption key, they can instantly decrypt the data, rendering the encryption useless. ... To truly safeguard data, businesses must combine strong encryption with secure key management, access controls, and techniques like tokenisation or format-preserving encryption to minimise the impact of a breach. A database protected by Privacy Enhancing Technologies (PETs), such as tokenisation, becomes unreadable to hackers if the decryption key is stored offsite. Without breaching the organisation’s data protection vendor to access the key, an attacker cannot decrypt the data – making the process significantly more complicated. This can be a major deterrent to hackers.
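
A minimal sketch of the tokenisation idea is shown below, with an in-memory dictionary standing in for the offsite vault held by a data protection vendor; the point is that a stolen database row of tokens is worthless without access to that separate mapping.

```python
# Minimal tokenisation sketch: sensitive values are swapped for random tokens,
# and the token-to-value mapping lives in a separate "vault" (here just a dict
# standing in for an offsite key/secrets service).
import secrets

vault: dict[str, str] = {}          # held by the data-protection provider, not the app database

def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_hex(8)
    vault[token] = value
    return token

def detokenize(token: str) -> str:
    return vault[token]             # only possible with access to the vault

# The application database stores tokens instead of raw card numbers.
customer_row = {"name": "A. Customer", "card": tokenize("4111 1111 1111 1111")}
print(customer_row)                 # a stolen copy of this row reveals no card number
print(detokenize(customer_row["card"]))
```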


Why Testing is a Long-Term Investment for Software Engineers

At its core, a test is a contract. It tells the system—and anyone reading the code—what should happen when given specific inputs. This contract helps ensure that as the software evolves, its expected behavior remains intact. A system without tests is like a building without smoke detectors. Sure, it might stand fine for now, but the moment something catches fire, there’s no safety mechanism to contain the damage. ... Over time, all code becomes legacy. Business requirements shift, architectures evolve, and what once worked becomes outdated. That’s why refactoring is not a luxury—it’s a necessity. But refactoring without tests? That’s walking blindfolded through a minefield. With a reliable test suite, engineers can reshape and improve their code with confidence. Tests confirm that behavior hasn’t changed—even as the internal structure is optimized. This is why tests are essential not just for correctness, but for sustainable growth. ... There’s a common myth: tests slow you down. But seasoned engineers know the opposite is true. Tests speed up development by reducing time spent debugging, catching regressions early, and removing the need for manual verification after every change. They also allow teams to work independently, since tests define and validate interfaces between components.
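
To make the “test as a contract” idea concrete, here is a small sketch around a hypothetical apply_discount function: the test spells out the expected behaviour for specific inputs, so the internals can later be refactored freely as long as the contract still passes.

```python
# A test as a contract: it pins down what apply_discount must do for given
# inputs, so later refactors can change the internals without changing behaviour.
# apply_discount is a hypothetical function used only for illustration.

def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_contract():
    assert apply_discount(100.0, 20) == 80.0      # normal case
    assert apply_discount(100.0, 0) == 100.0      # no discount leaves the price intact
    try:
        apply_discount(100.0, 150)                # out-of-range input must be rejected
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for invalid percent")

test_apply_discount_contract()
print("contract holds")
```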


Why the road from passwords to passkeys is long, bumpy, and worth it - probably

While the current plan rests on a solid technical foundation, many important details are barriers to short-term adoption. For example, setting up a passkey for a particular website should be a rather seamless process; however, fully deactivating that passkey still relies on a manual multistep process that has yet to be automated. Further complicating matters, some current user-facing implementations of passkeys are so different from one another that they're likely to confuse end-users looking for a common, recognizable, and easily repeated user experience. ... Passkey proponents talk about how passkeys will be the death of the password. However, the truth is that the password died long ago -- just in a different way. We've all used passwords without considering what is happening behind the scenes. A password is a special kind of secret -- a shared or symmetric secret. For most online services and applications, setting a password requires us to first share that password with the relying party, the website or app operator. While history has proven how shared secrets can work well in very secure and often temporary contexts, if the HaveIBeenPwned.com website teaches us anything, it's that site and app authentication isn't one of those contexts. Passwords are too easily compromised.

Daily Tech Digest - February 28, 2025


Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel


Microservice Integration Testing a Pain? Try Shadow Testing

Shadow testing is especially useful for microservices with frequent deployments, helping services evolve without breaking dependencies. It validates schema and API changes early, reducing risk before consumer impact. It also assesses performance under real conditions and ensures proper compatibility with third-party services. ... Shadow testing doesn’t replace traditional testing but rather complements it by reducing reliance on fragile integration tests. While unit tests remain essential for validating logic and end-to-end tests catch high-level failures, shadow testing fills the gap of real-world validation without disrupting users. Shadow testing follows a common pattern regardless of environment and has been implemented by tools like Diffy from Twitter/X, which introduced automated-response comparisons to detect discrepancies effectively. ... The environment where shadow testing is performed may vary, providing different benefits. More realistic environments are obviously better:
Staging shadow testing — Easier to set up, avoids compliance and data isolation issues, and can use synthetic or anonymized production traffic to validate changes safely.
Production shadow testing — Provides the most accurate validation using live traffic but requires safeguards for data handling, compliance and test workload isolation.
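
The sketch below illustrates the Diffy-style comparison pattern in plain Python: the same request is mirrored to a primary and a shadow deployment and the responses are diffed, with the shadow result never returned to the user. It assumes the requests package is available and uses placeholder internal URLs, not real endpoints.

```python
# Sketch of Diffy-style shadow comparison: the same request is sent to the
# current (primary) and candidate (shadow) versions of a service, the shadow
# response is discarded from the user path, and any differences are logged.
# The URLs below are placeholders, not real endpoints.
import requests

PRIMARY = "https://primary.internal/api/orders/42"
SHADOW = "https://shadow.internal/api/orders/42"

def mirror_and_compare(path_suffix: str = "") -> None:
    primary_resp = requests.get(PRIMARY + path_suffix, timeout=5)
    try:
        shadow_resp = requests.get(SHADOW + path_suffix, timeout=5)
    except requests.RequestException as exc:
        print(f"shadow call failed (user unaffected): {exc}")
        return

    # Compare status codes and bodies; only report, never block the user.
    if primary_resp.status_code != shadow_resp.status_code:
        print("status mismatch:", primary_resp.status_code, shadow_resp.status_code)
    elif primary_resp.json() != shadow_resp.json():
        print("body mismatch for", path_suffix or "/")
    else:
        print("responses match")

mirror_and_compare()
```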


The rising threat of shadow AI

Creating an Office of Responsible AI can play a vital role in a governance model. This office should include representatives from IT, security, legal, compliance, and human resources to ensure that all facets of the organization have input in decision-making regarding AI tools. This collaborative approach can help mitigate the risks associated with shadow AI applications. You want to ensure that employees have secure and sanctioned tools. Don’t forbid AI—teach people how to use it safely. Indeed, the “ban all tools” approach never works; it lowers morale, causes turnover, and may even create legal or HR issues. The call to action is clear: Cloud security administrators must proactively address the shadow AI challenge. This involves auditing current AI usage within the organization and continuously monitoring network traffic and data flows for any signs of unauthorized tool deployment. Yes, we’re creating AI cops. However, don’t think they get to run around and point fingers at people or let your cloud providers point fingers at you. This is one of those problems that can only be solved with a proactive education program aimed at making employees more productive and not afraid of getting fired. Shadow AI is yet another buzzword to track, but it’s also undeniably a growing problem for cloud computing security administrators.


Can AI live up to its promise?

The debate about truly transformative AI may not be about whether it can think or be conscious like a human, but rather about its ability to perform complex tasks across different domains autonomously and effectively. It is important to recognize that the value and usefulness of machines do not depend on their ability to exactly replicate human thought and cognitive abilities, but rather on their ability to achieve similar or better results through different methods. Although the human brain has inspired much of the development of contemporary AI, it need not be the definitive model for the design of superior AI. Perhaps by freeing the development of AI from strict neural emulation, researchers can explore novel architectures and approaches that optimize different objectives, constraints, and capabilities, potentially overcoming the limitations of human cognition in certain contexts. ... Some human factors that could be stumbling blocks on the road to transformative AI include: the information overload we receive, the possible misalignment with our human values, the possible negative perception we may be acquiring, the view of AI as our competitor, the excessive dependence on human experience, the possible perception of futility of ethics in AI, the loss of trust, overregulation, diluted efforts in research and application, the idea of human obsolescence, or the possibility of an “AI-cracy”, for example.


The end of net neutrality: A wake-up call for a decentralized internet

We live in a time when the true ideals of a free and open internet are under attack. The most recent repeal of net neutrality regulations is taking us toward a more centralized, controlled version of the internet. In this scenario, a decentralized, permissionless internet offers a powerful alternative to today’s reality. Decentralized systems can address the threat of censorship by distributing content across a network of nodes, ensuring that no single entity can block or suppress information. Decentralized physical infrastructure networks (DePIN) demonstrate how decentralized storage can keep data accessible even when network parts are disrupted or taken offline. This censorship resistance is crucial in regions where governments or corporations try to limit free expression online. Decentralization can also cultivate economic democracy by eliminating intermediaries like ISPs and related fees. Blockchain-based platforms allow smaller, newer players to compete with incumbent services and content companies on a level playing field. The Helium network, for example, uses a decentralized model to challenge traditional telecom monopolies with community-driven wireless infrastructure. In a decentralized system, developers don’t need approval from ISPs to launch new services.


Steering by insights: A C-Suite guide to make data work for everyone

With massive volumes of data to make sense of, having reliable and scalable modern data architectures that can organise and store data in a structured, secure, and governed manner while ensuring data reliability and integrity is critical. This is especially true in the hybrid, multi-cloud environment in which companies operate today. Furthermore, as we face a new “AI summer”, executives are experiencing increased pressure to respond to the tsunami of hype around AI and its promise to enhance efficiency and competitive differentiation. This means companies will need to rely on high-quality, verifiable data to implement AI-powered technologies such as Generative AI and Large Language Models (LLMs) at an enterprise scale. ... Beyond infrastructure, companies in India need to look at ways to create a culture of data. In today’s digital-first organisations, many businesses require real-time analytics to operate efficiently. To enable this, organisations need to create data platforms that are easy to use and equipped with the latest tools and controls so that employees at every level can get their hands on the right data to unlock productivity, saving them valuable time for other strategic priorities. Building a data culture also needs to come from the top; it is imperative to ensure that data is valued and used strategically and consistently to drive decision-making.


The Hidden Cost of Compliance: When Regulations Weaken Security

What might be a bit surprising, however, is one particular pain point that customers in this vertical bring up repeatedly. What is this mysterious pain point? I’m not sure if it has an official name or not, but many people I meet with share with me that they are spending so much time responding to regulatory findings that they hardly have time for anything else. This is troubling to say the least. It may be an uncomfortable discussion to have, but I’d argue that it is long past time that we as a security community had this discussion. ... The threats enterprises face change and evolve quickly – even rapidly I might say. Regulations often have trouble keeping up with the pace of that change. This means that enterprises are often forced to solve last year’s or even last decade’s problems, rather than the problems that might actually pose a far greater threat to the enterprise. In my opinion, regulatory agencies need to move more quickly to keep pace with the changing threat landscape. ... Regulations are often produced by large, bureaucratic bodies that do not move particularly quickly. This means that if some part of the regulation is ineffective, overly burdensome, impractical, or otherwise needs adjusting, it may take some time before this change happens. In the interim, enterprises have no choice but to comply with something that the regulatory body has already acknowledged needs adjusting.


Why the future of privileged access must include IoT – securing the unseen

The application of PAM to IoT devices brings unique complexities. The vast variety of IoT devices, many of which have been operational for years, often lack built-in security, user interfaces, or associated users. Unlike traditional identity management, which revolves around human credentials, IoT devices rely on keys and certificates, with each device undergoing a complex identity lifecycle over its operational lifespan. Managing these identities across thousands of devices is a resource-intensive task, exacerbated by constrained IT budgets and staff shortages. ... Implementing a PAM solution for IoT involves several steps. Before anything else, organisations need to achieve visibility of their network. Many currently lack this crucial insight, making it difficult to identify vulnerabilities or manage device access effectively. Once this visibility is achieved, organisations must then identify and secure high-risk privileged accounts to prevent them from becoming entry points for attackers. Automated credential management is essential to replace manual password processes, ensuring consistency and reducing oversight. Policies must be enforced to authorise access based on pre-defined rules, guaranteeing secure connections from the outset. Default credentials – a common exploit for attackers – should be updated regularly, and automation can handle this efficiently. 
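
As a rough sketch of what automating that last step might look like, the snippet below walks a device inventory and rotates any remaining default credentials; set_device_password and the inventory format are hypothetical stand-ins for whatever management interface and asset data a real PAM deployment would use.

```python
# Hedged sketch of automated credential rotation for an IoT device inventory.
# set_device_password is a stand-in for whatever management API or protocol a
# given device actually exposes; the inventory itself would normally come from
# the network-visibility step described above.
import secrets

inventory = [
    {"device_id": "cam-001", "default_credentials": True},
    {"device_id": "plc-007", "default_credentials": False},
]

def set_device_password(device_id: str, new_password: str) -> None:
    # Placeholder: push the credential via the device's real management interface.
    print(f"rotated credential for {device_id}")

def rotate_defaults(devices: list[dict]) -> None:
    for device in devices:
        if device["default_credentials"]:
            new_password = secrets.token_urlsafe(24)
            set_device_password(device["device_id"], new_password)
            device["default_credentials"] = False
            # The new secret would be written to the PAM vault, not printed or stored locally.

rotate_defaults(inventory)
```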


Understanding the AI Act and its compliance challenges

There is a clear tension between the transparency obligations imposed on providers of certain AI systems under the AI Act and some of their rights and business interests, such as the protection of trade secrets and intellectual property. The EU legislator has expressly recognized this tension, as multiple provisions of the AI Act state that transparency obligations are without prejudice to intellectual property rights. For example, Article 53 of the AI Act, which requires providers of general-purpose AI models to provide certain information to organizations that wish to integrate the model downstream, explicitly calls out the need to observe and protect intellectual property rights and confidential business information or trade secrets. In practice, a good faith effort from all parties will be required to find the appropriate balance between the need for transparency to ensure safe, reliable and trustworthy AI, while protecting the interests of providers that invest significant resources in AI development. ... The AI Act imposes a number of obligations on AI system vendors that will help in-house lawyers in carrying out this diligence. Under Article 13 of the AI Act, vendors of high-risk AI systems are, for example, required to provide sufficient information to (business) deployers to allow them to understand the high-risk AI system’s operation and interpret its output.


Why fast-learning robots are wearing Meta glasses

The technology acts as a sophisticated translator between human and robotic movement. Using mathematical techniques called Gaussian normalization, the system maps the rotations of a human wrist to the precise joint angles of a robot arm, ensuring natural motions get converted into mechanical actions without dangerous exaggerations. This movement translation works alongside a shared visual understanding — both the human demonstrator’s smartglasses and the robot’s cameras feed into the same artificial intelligence program, creating common ground for interpreting objects and environments. ... The EgoMimic researchers didn’t invent the concept of using consumer electronics to train robots. One pioneer in the field, a former healthcare-robot researcher named Dr. Sarah Zhang, has demonstrated 40% improvements in the speed of training healthcare robots using smartphones and digital cameras; they enable nurses to teach robots through gestures, voice commands, and real-time demonstrations instead of complicated programming. This improved robot training is made possible by AI that can learn from fewer examples. A nurse might show a robot how to deliver medications twice, and the robot generalizes the task to handle variations like avoiding obstacles or adjusting schedules. 


Targeted by Ransomware, Middle East Banks Shore Up Security

The financial services industry in UAE — and the Middle East at large — sees cyber wargaming as an important way to identify weaknesses and develop defenses to the latest threats, Jamal Saleh, director general of the UAE Banks Federation, said in a statement announcing the completion of the event. "The rapid adoption and deployment of advanced technologies in the banking and financial sector have increased risks related to transaction security and digital infrastructure," he said in the statement, adding that the sector is increasingly aware "of the importance of such initiatives to enhance cybersecurity systems and ensure a secure and advanced environment for customers, especially with the rapid developments in modern technology and the rise of cybersecurity threats using advanced artificial intelligence (AI) techniques." ... Ransomware remains a major threat to the financial industry, but attackers have shifted from distributed denial-of-service (DDoS) attacks to phishing, data breaches, and identity-focused attacks, according to Shilpi Handa, associate research director for the Middle East, Turkey, and Africa at business intelligence firm IDC. "We see trends such as increased investment in identity and data security, the adoption of integrated security platforms, and a focus on operational technology security in the finance sector," she says. 

Daily Tech Digest - September 17, 2024

Dedicated Cloud: What It’s For and How It’s Different From Public Cloud

While dedicated cloud services give you a level of architectural control you will not get from public clouds, using them comes with trade-offs, the biggest one being the amount of infrastructure engineering ability needed. But if your team has concluded that a public cloud isn’t a good fit, you probably know that already and have at least some of that ability on hand. ... Ultimately, dedicated cloud is about keeping control and giving yourself options. You can quickly deploy different combinations of resources, interconnecting dedicated infrastructure with public cloud services, and keep fine-tuning and refining as you go. You get full control of your data and your architecture with the freedom to change your mind. The trade-off is that you must be ready to roll up your sleeves and manage operating systems, deploy storage servers, tinker with traffic routing and do whatever else you need to do to get your architecture just right. But again, if you already know that you need more knobs than you can turn using a typical public cloud provider, you are probably ready anyway.


Building a More Sustainable Data Center: Challenges and Opportunities in the AI Era

Sustainability is not just a compliance exercise in reducing the negative impact on the environment; it can also bring financial benefits to an organization. According to Gartner’s Unlock the Business Benefits of Sustainable IT Infrastructure report, “[Infrastructure and operations’] contribution to sustainability strategies tends to focus on environmental impact, but sustainability also can have a significant positive impact on non-environmental factors, such as brand, innovation, resilience and attracting talent.” As a result, boards should embrace the financial opportunities of companies’ Environmental, Social, and Governance (ESG) compliance rather than consider it just another unavoidable compliance expense without a discernable return on investment (ROI). ... To improve data center resilience, Gartner recommends that organizations expand use of renewable energy using a long-term power purchase agreement to contain costs, generate their own power where feasible, and reuse and redeploy equipment as much as possible to maximize the value of the resource.


Data Business Evaluation

Why data businesses? Because they can be phenomenal businesses with extremely high gross margins — as good or better than software-as-a-service (SaaS). Often data businesses can be the best businesses within the industries that they serve. ... Data aggregation can be a valuable way to assemble a data asset as well, but the value typically hinges on the difficulty of assembling the data…if it is too easy to do, others will do it as well and create price competition. Often the value comes in aggregating a long tail of data that is costly to do more than once either for the suppliers or a competitive aggregator. ... The most stable data businesses tend to employ a subscription business model in which customers subscribe to a data set for an extended period of time. Subscription models are clearly better when the subscriptions are long term or, at least, auto-renewing. Not surprisingly, the best data businesses are generally syndicated subscription models. On the other end, custom data businesses that produce data for clients in a one-off or project-based manner generally struggle to attain high margins and predictability, but can be solid businesses if the data manufacturing processes are optimized.


Leveraging AI for water management

AI is reshaping the landscape of water management by providing predictive insights, optimising operations, and enabling real-time decision-making. One of AI’s key contributions is its ability to forecast water usage patterns. AI models can accurately predict water demand by analysing historical data and considering variables like weather conditions, population trends, and industrial activities. This helps water utilities allocate resources more effectively, minimising waste while ensuring consistent supply to communities. Water utilities can also integrate AI systems to monitor and optimise their supply networks. ... One of the most critical applications of AI is in water quality monitoring. Traditional methods of detecting water contaminants are labour-intensive and involve periodic testing, which can result in delayed responses to contamination events. AI, on the other hand, can process continuous data streams from IoT-enabled sensors installed in water distribution systems. These sensors monitor variables like pH levels, temperature, and turbidity, detecting changes in water quality in real time. AI algorithms analyse the data, triggering immediate alerts when contaminants or irregularities are detected.
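
A heavily simplified sketch of that real-time loop is shown below: each incoming pH reading is checked against a rolling baseline, and an alert fires when it deviates sharply. Production systems would use richer models and multiple variables (turbidity, temperature), but the structure of the loop is similar.

```python
# Simplified sketch of real-time anomaly flagging on a sensor stream: each new
# pH reading is compared against a rolling mean, and readings far outside the
# recent range trigger an alert.
from collections import deque
from statistics import mean, stdev

window: deque[float] = deque(maxlen=20)

def ingest(reading: float, threshold: float = 3.0) -> None:
    if len(window) >= 5:
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(reading - mu) > threshold * sigma:
            print(f"ALERT: pH {reading} deviates from recent baseline {mu:.2f}")
    window.append(reading)

for value in [7.1, 7.0, 7.2, 7.1, 7.0, 7.1, 7.2, 9.4, 7.1]:
    ingest(value)
```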


History of Cybersecurity: Key Changes Since the 1990s and Lessons for Today

Most cyber attackers hadn’t considered using the internet to pursue financial gain or cause serious harm to organizations. To be sure, financial crimes based on computer hacking took place in the '90s and early 2000s. But they didn't dominate the news in an endless stream of cautionary tales, and most people thought the 1995 movie Hackers was a realistic depiction of how hacking worked. ... By the mid-2000s, however, internet-based attacks became more harmful and frequent. This was the era when threat actors realized they could build massive botnets and then use them to distribute spam or send scam emails. These attacks could have caused real financial harm, but they weren't exactly original types of criminal activity. They merely conducted traditional criminal activity, like scams, using a new medium: the internet. ... The 2010s were also a time of massive technological change. The advent of cloud computing, widespread adoption of mobile devices, and rollout of Internet of Things (IoT) hardware meant businesses could no longer define clear network perimeters or ensure that sensitive data always remained in their data centers. 


Gateways to havoc: Overprivileged dormant service accounts

Dormant accounts go unnoticed, leaving organizations unaware of their access privileges, the systems they connect to, how to access them, and even of their purpose of existence. Their elevated privileges, lax security measures, and invisibility make dormant service accounts prime targets for infiltration. By compromising such an account, attackers can gain significant access to systems and sensitive data, often without raising immediate suspicion for extended periods of time. During that time, cyber criminals can elevate privileges, exfiltrate data, disrupt operations, and install malware and backdoors, causing total mayhem completely undetected until it’s too late. The weaknesses that plague dormant accounts make them open doors into an organization’s system. If compromised, an overprivileged dormant account can give way to sensitive data such as customer PII, PHI, intellectual property, and financial records, leading to costly and damaging data breaches. Even without being breached, dormant accounts are significant liabilities, potentially causing operational disruptions and regulatory compliance violations.


Overcoming AI hallucinations with RAG and knowledge graphs

One challenge that has come up in deploying RAG into production environments is that it does not handle searches across lots of documents that contain similar or identical information. When these files are chunked and turned into vector embeddings, each one will have its data available for searching. When each of those files has very similar chunks, finding the right data to match that request is harder. RAG can also struggle when the answer to a query exists across a number of documents that cross reference each other. RAG is not aware of the relationships between these documents. ... Rather than storing data in rows and columns for traditional searches, or as embeddings for vector search, a knowledge graph represents data points as nodes and edges. A node will be a distinct fact or characteristic, and edges will connect all the nodes that have relevant relationships to that fact. In the example of a product catalog, the nodes may be the individual products while the edges will be similar characteristics that each of those products possess, like size or color.
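
The product-catalog example can be sketched in a few lines of plain Python: products are nodes, shared characteristics are edges, and "related items" fall out of walking those relationships rather than comparing vector embeddings. The products and relations below are invented for illustration.

```python
# Toy knowledge graph for the product-catalog example: products are nodes and
# shared characteristics are edges, so related items can be found by walking
# relationships rather than by comparing embeddings.
from collections import defaultdict

edges: dict[str, set[tuple[str, str]]] = defaultdict(set)

def add_edge(node_a: str, node_b: str, relation: str) -> None:
    edges[node_a].add((relation, node_b))
    edges[node_b].add((relation, node_a))

add_edge("red t-shirt", "red hoodie", "same colour")
add_edge("red t-shirt", "blue t-shirt", "same size range")
add_edge("red hoodie", "blue hoodie", "same size range")

def related(node: str) -> list[tuple[str, str]]:
    """Return (relation, neighbour) pairs for a node."""
    return sorted(edges[node])

print(related("red t-shirt"))
# [('same colour', 'red hoodie'), ('same size range', 'blue t-shirt')]
```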


Preparing for the next big cyber threat

In addressing emerging threats, CISOs will have to incorporate controls to counter adversarial AI tactics and foster synergies with data and AI governance teams. Controls to ensure quantum-resistant cryptography in the symmetric space to future-proof encrypted data and transmissions will also be put in place if they are not already. Many organizations — including banks — are already enforcing the use of quantum-resistant cryptography, for instance, with the use of the Advanced Encryption Standard (AES)-256 algorithm because data encrypted by it is not vulnerable to cracking by quantum computers. Zero trust as a mindset and approach will be very important, especially in addressing insecure design components of OT environments used in Industry 4.0. Therefore, one of the key areas of strengthening protection would also be identity and access management (IAM). ... As part of strong cyber resilience, we need sound IR playbooks to effectively draw bridges, we need plan Bs and plan Cs, business continuities as well as table-tops and red teams that involve our supply chain vendors. And finally, response to the ever-evolving threat landscape will entail greater adaptability and agility.
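
As a small, hedged example of the symmetric piece, the snippet below encrypts and decrypts a message with AES-256 in GCM mode using the Python cryptography package (assumed to be installed); real deployments would pair this with proper key management rather than generating keys inline.

```python
# Minimal AES-256-GCM example. AES-256 is the symmetric algorithm the article
# cites as resistant to cracking by quantum computers; key storage and rotation
# still have to be handled by a proper key-management system.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit symmetric key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # unique per message, never reused with the same key

ciphertext = aesgcm.encrypt(nonce, b"wire transfer instructions", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"wire transfer instructions"
```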


The Impact of AI on The Ethernet Switch Market

Enterprises investing in new infrastructure to support AI will have to choose which technology is best for their particular needs. InfiniBand and Ethernet will likely continue to coexist for the foreseeable future. It’s highly likely that Ethernet will remain dominant in most network environments while InfiniBand will retain its foothold in high-performance computing and specialized AI workloads. ... While InfiniBand has several very strong advantages, advances in Ethernet are quickly closing the gap, making its ubiquity likely to continue. There are multiple other reasons that enterprises are likely to stick with Ethernet, too, such as lower cost, existing in-house talent, prolific integrations with existing infrastructures, and compatibility with legacy applications, among others. ... The Ultra Ethernet Consortium is proactively working to extend Ethernet's life to ensure it remains useful and cost-effective for both current and future technologies. The aim is primarily to reduce the need for drastic shifts to alternative solutions that may constitute heavy lifts and costs in adapting existing networks. 


Making the Complex Simple: Authorization for the Modern Enterprise

Modernizing legacy authorization systems is essential for organizations to enhance security and support their growth and innovation. Modernizing and automating operations allows organizations to overcome the limitations of legacy systems, enhance the protection of sensitive information and stay competitive in today’s digital landscape. Simplifying access control and automating workflows to modernize and optimize operations greatly increases productivity and lowers administrative burdens. Organizations can direct important resources toward more strategic endeavors by automating repetitive operations, which increases output and promotes an agile corporate environment. This change improves operational efficiency and puts businesses in a better position to adapt to changing market demands. Enhancing security is another critical benefit of modernizing authorization systems. Centralized management coupled with advanced role-based access control (RBAC) strengthens an organization’s security posture by preventing unauthorized access. Centralized systems allow for efficient user permissions management, ensuring that only authorized individuals can access sensitive information. 
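
A toy sketch of centralized RBAC is shown below: permissions attach to roles, users map to roles, and every access decision funnels through a single check. The role and permission names are made up for the example.

```python
# Minimal illustration of centralized role-based access control: permissions
# hang off roles, users map to roles, and every access decision goes through
# one check.
ROLE_PERMISSIONS = {
    "analyst": {"reports:read"},
    "admin": {"reports:read", "reports:write", "users:manage"},
}

USER_ROLES = {
    "dana": {"analyst"},
    "omar": {"admin"},
}

def is_authorized(user: str, permission: str) -> bool:
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("dana", "reports:read"))    # True
print(is_authorized("dana", "users:manage"))    # False, analysts cannot manage users
```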



Quote for the day:

"Motivation will almost always beat mere talent." -- Ralph Augustine Norman

Daily Tech Digest - July 01, 2024

The dangers of voice fraud: We can’t detect what we can’t see

The inherent imperfections in audio offer a veil of anonymity to voice manipulations. A slightly robotic tone or a static-laden voice message can easily be dismissed as a technical glitch rather than an attempt at fraud. This makes voice fraud not only effective but also remarkably insidious. Imagine receiving a phone call from a loved one’s number telling you they are in trouble and asking for help. The voice might sound a bit off, but you attribute this to the wind or a bad line. The emotional urgency of the call might compel you to act before you think to verify its authenticity. Herein lies the danger: Voice fraud preys on our readiness to ignore minor audio discrepancies, which are commonplace in everyday phone use. Video, on the other hand, provides visual cues. There are clear giveaways in small details like hairlines or facial expressions that even the most sophisticated fraudsters have not been able to get past the human eye. On a voice call, those warnings are not available. That’s one reason most mobile operators, including T-Mobile, Verizon and others, make free services available to block — or at least identify and warn of — suspected scam calls.


Provider or partner? IT leaders rethink vendor relationships for value

Vendors achieve partner status in McDaniel’s eyes by consistently demonstrating accountability and integrity; getting ahead of potential issues to ensure there’s no interruptions or problems with the provided products or services; and understanding his operations and objectives. ... McDaniel, other CIOs, and CIO consultants agree that IT leaders don’t need to cultivate partnerships with every vendor; many, if not most, can remain as straight-out suppliers, where the relationship is strictly transactional, fixed-fee, or fee-for-service based. That’s not to suggest those relationships can’t be chummy, but a good personal rapport between the IT team and the supplier’s team is not what partnership is about. A provider-turned-partner is one that gets to know the CIO’s vision and brings to the table ways to get there together, Bouryng says. ... As such, a true partner is also willing to say no to proposed work that could take the pair down an unproductive path. It’s a sign, Bouryng says, that the vendor is more interested in reaching a successful outcome than merely scheduling work to do.


In the AI era, data is gold. And these companies are striking it rich

AI vendors have, sometimes controversially, made deals with organizations like news publishers, social media companies, and photo banks to license data for building general-purpose AI models. But businesses can also benefit from using their own data to train and enhance AI to assist employees and customers. Examples of source material can include sales email threads, historical financial reports, geographic data, product images, legal documents, company web forum posts, and recordings of customer service calls. “The amount of knowledge—actionable information and content—that those sources contain, and the applications you can build on top of them, is really just mindboggling,” says Edo Liberty, founder and CEO of Pinecone, which builds vector database software. Vector databases store documents or other files as numeric representations that can be readily mathematically compared to one another. That’s used to quickly surface relevant material in searches, group together similar files, and feed recommendations of content or products based on past interests. 
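
The core mechanic Pinecone is describing can be sketched in a few lines: documents are stored as numeric vectors and a query is answered by ranking them with cosine similarity. The tiny 3-dimensional vectors below are toy values standing in for real model embeddings.

```python
# Sketch of what a vector database does under the hood: documents become
# numeric embeddings, and queries return the nearest vectors by cosine
# similarity.
import numpy as np

documents = {
    "Q3 sales email thread": np.array([0.9, 0.1, 0.0]),
    "customer service call notes": np.array([0.2, 0.8, 0.1]),
    "legal contract template": np.array([0.0, 0.1, 0.9]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.85, 0.2, 0.05])   # toy embedding of "recent revenue discussions"
ranked = sorted(documents.items(),
                key=lambda item: cosine_similarity(query, item[1]),
                reverse=True)
for name, _ in ranked:
    print(name)
```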


Machine Vision: The Key To Unleashing Automation's Full Potential

Machine vision is a class of technologies that process information from visual inputs such as images, documents, computer screens, videos and more. Its value in automation lies in its ability to capture and process documents, images and video quickly and efficiently, at quantities and speeds far in excess of human capability. ... Machine vision-based technologies are even becoming central to the creation of automations themselves. For example, instead of relying on human workers to describe the processes being automated, recordings of those processes are captured, and machine vision software, combined with other technologies, analyzes the process end-to-end and provides the input for automating much of the work needed to program the digital workers (bots). ... Machine vision is integral to maximizing the impact of advanced automation technologies on business operations and paving the way for increased capabilities in the automation space.


Put away your credit cards — soon you might be paying with your face

Biometric purchases using facial recognition are beginning to gain some traction. The restaurant CaliExpress by Flippy, a fully automated fast-food restaurant, is an early adopter. Whole Foods stores offer pay-by-palm, an alternative biometric to facial recognition. Given that they are already using biometrics, facial recognition is likely to be available in their stores at some point in the future. ... Just as credit and debit cards have overtaken cash as the dominant means to make purchases, biometrics like facial recognition could eventually become the dominant way to make purchases. There will, however, be real costs during such a transition, which will largely be absorbed by consumers in higher prices. The technology software and hardware required to implement such systems will be costly, pushing it out of reach for many small- and medium-size businesses. However, as facial recognition systems become more efficient and reliable, and losses from theft are reduced, an equilibrium will be achieved that will make such additional costs more modest and manageable to absorb.


Technologists must be ready to seize new opportunities

For technologists, this new dynamic represents a profound (and daunting) change. They’re being asked to report on application performance in a more business-focussed, strategic way and to engage in conversations around experience at a business level. They’re operating outside their comfort zone, far beyond the technical reporting and discussions they’ve previously encountered. Of course, technologists are used to rising to a challenge and pivoting to meet the changing needs of their organisations and their senior leaders. We saw this during the pandemic, many will (rightly) be excited about the opportunity to expand their skills and knowledge, and to elevate their standing within their organisations. The challenge that many technologists face, however, is that they currently don’t have the tools and insights they need to operate in a strategic manner. Many don’t have full visibility across their hybrid environments and they’re struggling to manage and optimise application availability, performance and security in an effective and sustainable manner. They can’t easily detect issues, and even when they do, it is incredibly difficult to quickly understand root causes and dependencies in order to fix issues before they impact end user experience. 


Vulnerability management empowered by AI

Using AI will take vulnerability management to the next level. AI not only reduces analysis time but also effectively identifies threats. ... AI-driven systems can identify patterns and anomalies that signify potential vulnerabilities or attacks. Converting the logs into data and charts will make analysis simpler and quicker. Incidents should be identified based on the security risk, and notification should take place for immediate action. Self-learning is another area where AI can be trained with data. This will enable AI to be up-to-date on the changing environment and capable of addressing new and emerging threats. AI will identify high-risk threats and previously unseen threats. Implementing AI requires iterations to train the model, which may be time-consuming. But over time, it becomes easier to identify threats and flaws. AI-driven platforms constantly gather insights from data, adjusting to shifting landscapes and emerging risks. As they progress, they enhance their precision and efficacy in pinpointing weaknesses and offering practical guidance.
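
One hedged illustration of this kind of AI-assisted triage: reduce each log window to a few numeric features and let an unsupervised model flag the windows that look unlike the rest. The sketch below uses scikit-learn's IsolationForest (assumed available) with invented feature values.

```python
# Hedged sketch of AI-assisted log triage: log lines are reduced to numeric
# features (here just request rate and failed-login count per window) and an
# unsupervised model flags windows that look unlike the rest. Thresholds and
# features are illustrative only.
from sklearn.ensemble import IsolationForest

# Each row: [requests_per_minute, failed_logins_per_minute]
windows = [
    [120, 1], [115, 0], [130, 2], [125, 1], [118, 1],
    [122, 0], [119, 2], [640, 55],   # the last window looks like a brute-force burst
]

model = IsolationForest(contamination=0.15, random_state=0)
labels = model.fit_predict(windows)   # -1 marks anomalous windows

for features, label in zip(windows, labels):
    if label == -1:
        print("flag for analyst review:", features)
```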


Why every company needs a DDoS response plan

Given the rising number of DDoS attacks each year and the reality that DDoS attacks are frequently used in more sophisticated hacking attempts to apply maximum pressure on victims, a DDoS response plan should be included in every company’s cybersecurity tool kit. After all, it’s not just a temporary lack of access to a website or application that is at risk. A business’s failure to withstand a DDoS attack and rapidly recover can result in loss of revenue, compliance failures, and impacts on brand reputation and public perception. Successful handling of a DDoS attack depends entirely on a company’s preparedness and execution of existing plans. Like any business continuity strategy, a DDoS response plan should be a living document that is tested and refined over the years. It should, at the highest level, consist of five stages, including preparation, detection, classification, reaction, and postmortem reflection. Each phase informs the next, and the cycle improves with each iteration.


Reduce security risk with 3 edge-securing steps

Over the past several years web-based SSL VPNs have been targeted and used to gain remote access. You may even want to consider evaluating how your firm allows remote access and how often your VPN solution has been attacked or at risk. ... “The severity of the vulnerabilities and the repeated exploitation of this type of vulnerability by actors means that NCSC recommends replacing solutions for secure remote access that use SSL/TLS with more secure alternatives,” the authority says. “The NCSC recommends internet protocol security (IPsec) with internet key exchange (IKEv2). Other countries’ authorities have recommended the same.” ... Pay extra attention to how credentials that need to be accessed are protected from unauthorized access. Ensure that you use best practice processes to secure passwords and ensure that each user has appropriate passwords and access accordingly. ... When using cloud services, you need to ensure that only those vendors you trust or that you have thoroughly vetted have access to your cloud services. 

The real key to machine learning success is something that is mostly missing from genAI: the constant tuning of the model. “In ML and AI engineering,” Shankar writes, “teams often expect too high of accuracy or alignment with their expectations from an AI application right after it’s launched, and often don’t build out the infrastructure to continually inspect data, incorporate new tests, and improve the end-to-end system.” It’s all the work that happens before and after the prompt, in other words, that delivers success. For genAI applications, partly because of how fast it is to get started, much of this discipline is lost. ... As with software development, where the hardest work isn’t coding but rather figuring out which code to write, the hardest thing in AI is figuring out how or if to apply AI. When simple rules need to yield to more complicated rules, Valdarrama suggests switching to a simple model. Note the continued stress on “simple.” As he says, “simplicity always wins” and should dictate decisions until more complicated models are absolutely necessary.



Quote for the day:

“The vision must be followed by the venture. It is not enough to stare up the steps - we must step up the stairs.” -- Vance Havner

Daily Tech Digest - April 02, 2024

A double-edged sword: GenAI vs GenAI

Every technology indeed presents new avenues for vulnerabilities, and the key lies in maintaining strict discipline in identifying and addressing these vulnerabilities. This calls for the strict application of IT ethos in organisational setups to ensure no misuse of technologies, especially intelligent ones. “It is crucial to continuously test your APIs and applications, relentlessly seeking out any potential vulnerabilities and ensuring they are addressed promptly. This proactive approach is vital in safeguarding your platform against potential threats,” says Sunil Sapra, Co-founder & Chief Growth Officer, Eventus Security. The Government of India has proactively addressed the grave importance of cybersecurity and recently rolled out the much-awaited Digital Personal Data Protection Act 2023. Though the Act takes data protection and data privacy into consideration, laying emphasis on the ‘consent of the owner’, it does not draw the spotlight on GenAI, which can make or break existing cyber fortifications. Hence, there is a dire need for strong regulations and control measures guarding the application of GenAI models.


There's more to cloud architecture than GPUs

GPUs require a host chip to orchestrate operations. Although this simplifies the complexity and capability of modern GPU architectures, it’s also less efficient than it could be. GPUs operate in conjunction with CPUs (the host chip), which offload specific tasks to GPUs. Also, these host chips manage the overall operation of software programs. Adding to this question of efficiency is the necessity for inter-process communications; challenges with disassembling models, processing them in parts, and then reassembling the outputs for comprehensive analysis or inference; and the complexities inherent in using GPUs for deep learning and AI. This segmentation and reintegration process is part of distributing computing tasks to optimize performance, but it comes with its own efficiency questions. Software libraries and frameworks designed to abstract and manage these operations are required. Technologies like Nvidia’s CUDA (Compute Unified Device Architecture) provide the programming model and toolkit needed to develop software that can harness GPU acceleration capabilities.
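
The host/device split described here can be seen in a few lines of PyTorch (assumed installed): the CPU-side program prepares the data and decides what to run, explicitly moves tensors to the GPU when one is available, and copies results back to host memory.

```python
# Small PyTorch sketch of the host/device split: the CPU-side Python program
# (the "host") builds the data and decides what to run, then offloads the heavy
# matrix math to the GPU if one is present. Falls back to the CPU when no CUDA
# device is available.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Host code prepares the work...
a = torch.randn(2048, 2048)
b = torch.randn(2048, 2048)

# ...then moves the tensors to the accelerator and dispatches the kernel.
result = a.to(device) @ b.to(device)

# Bringing the result back to host memory is another explicit transfer.
print(device, result.cpu().shape)
```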


How to Evaluate the Best Data Observability Tools

Some key areas to evaluate for enterprise readiness include:
Security – Do they have SOC 2 certification? Robust role-based access controls?
Architecture – Do they have multiple deployment options for the level of control over the connection? How does it impact data warehouse/lakehouse performance?
Usability – This can be subjective and superficial during a committee POC, so it’s important to balance this with the perspective from actual users. Otherwise you might over-prioritize how pretty an alert appears versus aspects that will save you time, such as the ability to bulk update incidents or to deploy monitors-as-code.
Scalability – This is important for small organizations and essential for larger ones. We all know the nature of data and data-driven organizations lends itself to fast, and at times unexpected, growth. What are the largest deployments? Has this organization proven its ability to grow alongside its customer base?
Other key features here include things like the ability to support domains, reporting, change logging, and more. These typically aren’t flashy features, so many vendors don’t prioritize them.


CISA releases draft rule for cyber incident reporting

According to the proposed rules, CISA plans to use the data it receives to carry out trend and threat analysis, incident response and mitigation, and to inform future strategies to improve resilience. While the rule is not expected to be finalized until 18 months from now or potentially later next year, comments are due 60 days after the proposal is officially published on April 4. One can be sure that the 16 different critical infrastructure sectors and their armies of lawyers will have much to say. The 447-page NOPR details a dizzying array of nuances for specific sectors and cyber incidents. ... The list of exceptions to the cyber incidents that critical infrastructure operators will need to report is around twice as long as the conditions that require reporting an incident, and the final shape of the rule may change as CISA considers comments from industry. The companies affected by the proposed rules include all critical infrastructure entities that exceed the federal government’s threshold for what is a small business. The rules provide a series of different criteria for whether other critical infrastructure sectors will be required to report incidents.


Digital transformation’s fundamental change management mistake

The bigger challenge is often downstream and occurs when digital trailblazers, the people assigned to lead digital transformation initiatives, must work with end-users on process changes and technology adoption. When devops teams release changes to applications, dashboards, and other technology capabilities, end-users experience a productivity dip before they can effectively leverage the new capabilities. This dip delays when the business can start realizing the value delivered. While there are a number of change management frameworks and certifications, many treat change as a discipline separate from the product management, agile, and devops methodologies CIOs use to plan and deliver digital transformation initiatives. ... Reducing productivity dips and easing end-user adoption are therefore practices that must fit the digital transformation operating model. Let’s consider three areas where CIOs and digital trailblazers can inject change management into their digital transformation initiatives in a way that brings greater effectiveness than if change management were addressed as a separate add-on.


6 keys to navigating security and app development team tensions

Unfortunately, many organizations don’t take the proper steps, leading the development team to view security teams as a “roadblock” — a hurdle to overcome. Likewise, the security team’s animosity toward development teams grows as they view developers as not “taking security seriously enough.” ... When an AppSec team is built solely from security people who have never worked in development, friction between the two groups is likely, because they will effectively speak two different languages and neither group understands the problems and challenges the other faces. When an AppSec team includes former developers, the relationship between the teams looks very different. ... Sometimes there are unreasonable requests, because the security team asks for things that aren’t actual issues to be fixed. This happens when they run an application vulnerability scanner, the scanner reports a vulnerability that doesn’t exist or doesn’t expose an actual risk, and the security team blindly passes it on to developers to remedy.


Enhancing Business Security and Compliance with Service Mesh

When implementing a service mesh, there are several important factors to consider for a secure and compliant deployment. First, carefully evaluate the security features and capabilities of the chosen service mesh framework. Look for strong authentication methods like mutual TLS and support for role-based access control (RBAC) to ensure secure communication between services. Second, establish clear policies and configurations for traffic management, such as circuit breaking and request timeouts, to mitigate the risk of cascading failures and improve overall system resilience. Third, consider the observability aspects of the service mesh. Ensure that metrics, logging, and distributed tracing are properly configured to gain insight into service mesh behavior and detect potential security incidents. For example, leverage tools like Prometheus for metrics collection and Grafana for visualization to monitor key security metrics such as error rates and latency (a minimal sketch follows below). Finally, maintain regular updates and patches for the service mesh framework to address any security vulnerabilities promptly, and stay informed about the latest security advisories and best practices provided by the service mesh community.
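As a small illustration of the observability point, here is a minimal sketch that pulls a mesh-wide error rate from Prometheus over its HTTP query API. It assumes an Istio-style mesh exposing an `istio_requests_total` counter and a Prometheus server at a placeholder URL; adjust the metric name and threshold for your own mesh.

```python
# A minimal sketch of monitoring a service mesh error rate via Prometheus'
# HTTP query API. The endpoint URL, metric name, and threshold are assumptions.
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # placeholder Prometheus endpoint

def mesh_error_rate(window: str = "5m") -> float:
    # Ratio of 5xx responses to all requests over the window (Istio-style metric).
    query = (
        'sum(rate(istio_requests_total{response_code=~"5.."}[%s])) '
        '/ sum(rate(istio_requests_total[%s]))' % (window, window)
    )
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": query}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    rate = mesh_error_rate()
    if rate > 0.05:  # illustrative alert threshold
        print(f"Elevated mesh error rate: {rate:.2%}")
```

In practice the same query would typically live in a Grafana panel or an alerting rule rather than a script, but the query itself is the monitoring building block.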


Who should be the head of generative AI — and what they should do

Some generative AI leaders might have a creative background; others could come from tech. Gratton said background matters less than a willingness to experiment. “You want somebody who’s got an experimental mindset, who sees this as a learning opportunity and sees it as an organizational structuring issue,” she said. “The innovation part is what’s really crucial.” ... The head of AI could encourage use of the technology to help with managing employees, Gratton said. This encompasses three key areas:
Talent development - Companies can use chatbots and other tools to recruit people and help them manage their careers.
Productivity - AI can be used to create assessments, give feedback, manage collaboration, and provide skills training.
Change management - This includes both internal and external knowledge management. “We have so much knowledge in our organizations … but we don’t know how to find it,” Gratton said. “And it seems to me that this is an area that we’re really focusing on in terms of generative AI.”
... Leaders should remember that buy-in across all career stages and skill levels is essential. Generative AI isn’t just the domain of youth.


Knowledge-Centered Design for Generative AI in Enterprise Solutions

The need for a new design pattern, specifically Knowledge-Centered Design (KCD), arises from the evolution and complexity of AI and machine learning technologies. As these technologies advance, they generate an increasing volume of knowledge and insights. Traditional Human-Centered Design (HCD) focuses on understanding users, their tasks, and their environments; however, it may not be fully equipped to handle the intricate dynamics of both human-generated and AI-generated knowledge. The proposed KCD extends HCD by emphasizing the life cycle of knowledge – identifying, acquiring, categorizing, and extracting insights – and incorporating feedback loops for continuous improvement. It ensures that both human-generated and AI-generated knowledge are effectively integrated into the design process to enhance user experience and productivity. ... The knowledge life cycle process, the feedback loop process, and the integral components of the KCD pattern serve as starting baselines that each enterprise can adapt and adjust according to its specific business needs and institutional culture.


Creating a Data Monetization Strategy

Monetizing customer data involves implementing effective strategies and adhering to best practices to maximize its value. One key approach is to ensure data privacy and security, as customers are increasingly concerned about how their personal information is used. Companies must establish robust data protection measures, comply with regulations such as GDPR or CCPA, and obtain explicit consent for data collection and utilization. Another strategy is to leverage advanced analytics techniques to derive valuable insights from customer data. By employing ML algorithms, predictive modeling, and artificial intelligence, businesses can uncover patterns, preferences, and trends. ... Blockchain technology is revolutionizing how data is monetized by enhancing security and trust in the digital ecosystem. Blockchain, a decentralized and immutable ledger, provides a robust infrastructure for securely storing and transferring data, making it an ideal solution for data monetization. Additionally, every transaction recorded on the blockchain is cryptographically hashed and linked to previous transactions, further safeguarding the integrity of the data.
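To ground the hash-linking claim, here is a minimal Python sketch of a hash chain: each record stores the hash of its predecessor, so altering any earlier record invalidates every hash that follows. This illustrates the integrity mechanism only, not a full blockchain (there is no consensus, signing, or encryption here), and the sample records are hypothetical.

```python
# A minimal sketch of hash chaining: each record stores the hash of its
# predecessor, so altering any earlier record invalidates everything after it.
# This illustrates the integrity mechanism only; it is not a full blockchain.
import hashlib
import json

GENESIS = "0" * 64

def record_hash(record: dict, prev_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list) -> list:
    chain, prev = [], GENESIS
    for rec in records:
        h = record_hash(rec, prev)
        chain.append({"data": rec, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify(chain: list) -> bool:
    prev = GENESIS
    for block in chain:
        if block["prev_hash"] != prev or record_hash(block["data"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

# Illustrative data-sale records (hypothetical values).
chain = build_chain([
    {"buyer": "org-a", "dataset": "clickstream", "price": 1200},
    {"buyer": "org-b", "dataset": "purchase-history", "price": 800},
])
print(verify(chain))  # True; change any field and verification fails
```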



Quote for the day:

"It is during our darkest moments that we must focus to see the light." -- Aristotle Onassis

Daily Tech Digest - March 18, 2024

Generative AI will turn cybercriminals into better con artists. AI will help attackers craft well-written, convincing phishing emails and websites in different languages, enabling them to widen the nets of their campaigns across locales. We expect the quality of social engineering attacks to improve, making lures more difficult for targets and security teams to spot. As a result, we may see an increase in the risks and harms associated with social engineering – from fraud to network intrusions. ... AI is driving the democratisation of technology by helping less skilled users carry out more complex tasks more efficiently. But while AI improves organisations’ defensive capabilities, it also has the potential to help malicious actors carry out attacks against lower system layers, namely firmware and hardware, where attack efforts have been on the rise in recent years. Historically, such attacks required extensive technical expertise, but AI is beginning to show promise in lowering these barriers. This could lead to more efforts to exploit systems at the lower levels, giving attackers a foothold below the operating system and the industry’s best software security defences.


Get the Value Out of Your Data

A robust data strategy should have clearly defined outcomes and measurements in place to trace the value it delivers. However, it is important to acknowledge the need for flexibility during the strategic and operational phases. Consequently, defining deliverables becomes crucial to ensure transparency in the delivery process. To achieve this, adopting a data product approach focused on iteratively delivering value to your organization is recommended. The evolution of DevOps, supported by cloud platform technology, has significantly improved the software engineering delivery process by automating development and operational routines. Now we are witnessing a similar agile evolution in data management with the emergence of DataOps. DataOps aims to enhance the speed and quality of data delivery, foster collaboration between IT and business teams, and reduce the associated time and costs. By providing a unified view of data across the organization, DataOps enables faster and more confident data-driven decision-making, ensuring data accuracy, timeliness, and security. It automates and brings transparency to the measurements required for agile delivery through data product management.


Exposure to new workplace technologies linked to lower quality of life

Part of the problem is that IT workers need to stay updated with the newest tech trends and figure out how to use them at work, said Ryan Smith, founder of the tech firm QFunction, also unconnected with the study. The hard part is that new tech keeps coming in, and workers have to learn it, set it up, and help others use it quickly, he said. “With the rise of AI and machine learning and the uncertainty around it, being asked to come up to speed with it and how to best utilize it so quickly, all while having to support your other numerous IT tasks, is exhausting,” he added. “On top of this, the constant fear of layoffs in the job market forces IT workers to keep up with the latest technology trends in order to stay employable, which can negatively affect their quality of life.” ... “As IT has become the backbone of many businesses, that backbone is key to the business’s operations, and in most cases revenue,” he added. “That means it’s key to the business’s survival. IT teams now must be accessible 24 hours a day. In the face of a problem, they are expected to work 24 hours a day to resolve it. ...”


6 best operating systems for Raspberry Pi 5

Even though it has been nearly seven years since Microsoft debuted Windows on Arm, there has been a noticeable lack of ARM-powered laptops. The situation is even worse for SBCs like the Raspberry Pi, which aren’t even on Microsoft’s radar. Luckily, the talented team at the WoR project managed to find a way to install Windows 11 on Raspberry Pi boards. ... Finally, we have the Raspberry Pi OS, which has been developed specifically for the RPi boards. Since its debut in 2012, the Raspberry Pi OS (formerly Raspbian) has become the operating system of choice for many RPi board users. Since it was hand-crafted for the Raspberry Pi SBCs, it’s faster than Ubuntu and light years ahead of Windows 11 in terms of performance. Moreover, most projects tend to favor Raspberry Pi OS over the alternatives, so it’s possible to run into compatibility and stability issues if you use any other operating system when attempting to replicate projects created by the lively Raspberry Pi community. You won’t be disappointed with the Raspberry Pi OS if you prefer a more minimalist UI. That said, despite including pretty much everything you need to make the most of your RPi SBC, the Raspberry Pi OS isn’t as user-friendly as Ubuntu.


Speaking without vocal cords, thanks to a new AI-assisted wearable device

The breakthrough is the latest in Chen's efforts to help those with disabilities. His team previously developed a wearable glove capable of translating American Sign Language into English speech in real time to help users of ASL communicate with those who don't know how to sign. The tiny new patch-like device is made up of two components. One, a self-powered sensing component, detects and converts signals generated by muscle movements into high-fidelity, analyzable electrical signals; these electrical signals are then translated into speech signals using a machine-learning algorithm. The other, an actuation component, turns those speech signals into the desired voice expression. The two components each contain two layers: a layer of biocompatible silicone compound polydimethylsiloxane, or PDMS, with elastic properties, and a magnetic induction layer made of copper induction coils. Sandwiched between the two components is a fifth layer containing PDMS mixed with micromagnets, which generates a magnetic field. Utilizing a soft magnetoelastic sensing mechanism developed by Chen's team in 2021, the device is capable of detecting changes in the magnetic field when it is altered as a result of mechanical forces—in this case, the movement of laryngeal muscles.


We can’t close the digital divide alone, says Cisco HR head as she discusses growth initiatives

At Cisco, we follow a strengths-based approach to learning and development, wherein our quarterly development discussions extend beyond performance evaluations to uplifting ourselves and our teams. We understand that a one-size-fits-all approach is inadequate. To best play to our employees' strengths, we have to be flexible, adaptable, and open to what works best for each individual and team. This helps us understand individual employees' unique learning needs and tailor personalised programs that encompass options such as online courses, workshops, mentoring, and gamified experiences, catering to diverse learning styles. As a result, our employees are energized to pursue their passions, contributing their best selves to the workplace. Measuring the quality of work, internal movements, employee retention, patents, and innovation, along with engagement pulse assessments, allows us to gauge the effectiveness of our programs. When it comes to addressing the challenge of retaining talent, it's essential for HR leaders to consider a holistic approach.


Vector databases: Shiny object syndrome and the case of a missing unicorn

What’s up with vector databases, anyway? They’re all about information retrieval, but let’s be real, that’s nothing new, even though it may feel like it with all the hype around it. We’ve got SQL databases, NoSQL databases, full-text search apps and vector libraries already tackling that job. Sure, vector databases offer semantic retrieval, which is great, but SQL databases like SingleStore and Postgres (with the pgvector extension) can handle semantic retrieval too, all while providing standard DB features like ACID. Full-text search applications like Apache Solr, Elasticsearch and OpenSearch also rock the vector search scene, along with search products like Coveo, and bring some serious text-processing capabilities for hybrid searching. But here’s the thing about vector databases: They’re kind of stuck in the middle. ... It wasn’t that early either — Weaviate, Vespa and Milvus were already around with their vector DB offerings, and Elasticsearch, OpenSearch and Solr were ready around the same time. When technology isn’t your differentiator, opt for hype. Pinecone’s $100 million Series B funding was led by Andreessen Horowitz, which in many ways is living by the playbook it created for the boom times in tech.
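To show what "SQL databases can handle semantic retrieval too" looks like in practice, here is a minimal sketch against Postgres with the pgvector extension. It assumes a `documents` table with a populated `embedding vector(384)` column and uses a placeholder connection string; the `<->` operator orders rows by distance to the query embedding.

```python
# A minimal sketch of semantic retrieval in Postgres via the pgvector extension.
# Assumes a `documents` table with a populated `embedding vector(384)` column;
# the connection string is a placeholder.
import psycopg2

def semantic_search(query_embedding, top_k=5):
    # pgvector accepts vectors as bracketed literals, e.g. '[0.1,0.2,...]'
    vec_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
    conn = psycopg2.connect("dbname=search user=app")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT id, title, embedding <-> %s::vector AS distance
            FROM documents
            ORDER BY embedding <-> %s::vector
            LIMIT %s
            """,
            (vec_literal, vec_literal, top_k),
        )
        return cur.fetchall()  # rows ordered by vector distance to the query
```

The query embedding would come from whatever embedding model produced the stored vectors, and the same table can still be filtered, joined, and transacted against like any other Postgres table, which is exactly the "stuck in the middle" argument.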


The Role of Quantum Computing in Data Science

Despite its potential, the transition to quantum computing presents several significant challenges. Quantum computers are highly sensitive to their environment, with qubit states easily disturbed by external influences – a problem known as quantum decoherence. This sensitivity requires that quantum computers be kept in highly controlled conditions, which can be expensive and technologically demanding. Moreover, concerns are emerging about the future cost implications of quantum computing for software and services. Ultimately, the prices will be sky-high, and we might be forced to search for AWS alternatives, especially if providers raise their prices due to the introduction of quantum features, as is the case with Microsoft banking everything on AI. This raises the question of how quantum computing will alter the prices and features of both consumer and enterprise software and services, further highlighting the need for a careful balance between innovation and accessibility. There’s also a steep learning curve for data scientists to adapt to quantum computing.


AI-Driven API and Microservice Architecture Design for Cloud

Implementing AI-based continuous optimization for APIs and microservices in Azure involves using artificial intelligence to dynamically improve performance, efficiency, and user experience over time. Here's how you can achieve continuous optimization with AI in Azure:
Performance monitoring: Implement AI-powered monitoring tools to continuously track key performance metrics such as response times, error rates, and resource utilization for APIs and microservices in real time.
Automated tuning: Utilize machine learning algorithms to analyze performance data and automatically adjust configuration settings, such as resource allocation, caching strategies, or database queries, to optimize performance.
Dynamic scaling: Leverage AI-driven scaling mechanisms to adjust the number of instances hosting APIs and microservices based on real-time demand and predicted workload trends, ensuring efficient resource allocation and responsiveness (a platform-agnostic sketch follows this list).
Cost optimization: Use AI algorithms to analyze cost patterns and resource utilization data to identify opportunities for cost savings, such as optimizing resource allocation, implementing serverless architectures, or leveraging reserved instances.
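The dynamic-scaling item lends itself to a quick sketch. The following is a platform-agnostic, illustrative example, not tied to any Azure SDK: it fits a simple regression to recent request volume, forecasts the next interval, and converts the forecast into an instance count. The per-instance capacity and the min/max bounds are assumptions you would tune for your own workload, and in practice the recommendation would feed an autoscale rule or scaling API.

```python
# A platform-agnostic sketch of AI-assisted dynamic scaling: fit a simple model
# to recent request volume, forecast the next interval, and size the instance
# count from it. Capacity and bounds are assumed values, not Azure defaults.
import numpy as np
from sklearn.linear_model import LinearRegression

REQUESTS_PER_INSTANCE = 500   # assumed capacity of one service instance
MIN_INSTANCES, MAX_INSTANCES = 2, 20

def recommend_instances(recent_requests):
    # recent_requests: requests per minute for the last N minutes
    X = np.arange(len(recent_requests)).reshape(-1, 1)
    y = np.array(recent_requests)
    model = LinearRegression().fit(X, y)
    forecast = float(model.predict(np.array([[len(recent_requests)]]))[0])
    needed = int(np.ceil(max(forecast, 0.0) / REQUESTS_PER_INSTANCE))
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

print(recommend_instances([900, 1100, 1300, 1600, 1900, 2300]))  # growing demand
```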


4 ways AI is contributing to bias in the workplace

Generative AI tools are often used to screen and rank candidates, create resumes and cover letters, and summarize several files simultaneously. But AIs are only as good as the data they're trained on. GPT-3.5 was trained on massive amounts of widely available information online, including books, articles, and social media. This online data inevitably reflects societal inequities and historical biases, which the AI model inherits and replicates to some degree. No one using AI should assume these tools are inherently objective just because they're trained on large amounts of data from different sources. While generative AI bots can be useful, we should not underestimate the risk of bias in an automated hiring process -- and that reality is crucial for recruiters, HR professionals, and managers. Another study found racial bias present in facial-recognition technologies, which show lower accuracy rates for dark-skinned individuals. Something as simple as demographic distribution data for ZIP codes being used to train AI models, for example, can result in decisions that disproportionately affect people from certain racial backgrounds.



Quote for the day:

"The most common way people give up their power is by thinking they don't have any." -- Alice Walker