
Daily Tech Digest - June 25, 2025


Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein



Why data observability is the missing layer of modern networking

You might hear people use these terms interchangeably, but they’re not the same thing. Visibility is about what you can see – dashboard statistics, logs, uptime numbers, bandwidth figures, the raw data that tells you what’s happening across your network. Observability, on the other hand, is about what that data actually means. It’s the ability to interpret, analyse, and act on those insights. It’s not just about seeing a traffic spike, but understanding why it happened. It’s not just spotting a latency issue, but knowing which apps are affected and where the bottleneck sits. ... Today, connectivity needs to be smart, agile, and scalable. It’s about building infrastructure that supports cloud, remote work, and everything in between. Whether you’re adding a new site, onboarding a remote team, or launching a cloud-hosted app, your network should be able to scale and respond at speed. Then there’s security, a non-negotiable layer that protects your entire ecosystem. Great security isn’t about throwing up walls; it’s about creating confidence. That means deploying zero trust principles, segmenting access, detecting threats in real time, and encrypting data, without making users’ lives harder. ... Finally, we come to observability. Arguably the least appreciated of the three, but quickly becoming essential.


6 Key Security Risks in LLMs: A Platform Engineer’s Guide

Prompt injection is the AI-era equivalent of SQL injection. Attackers craft malicious inputs to manipulate an LLM, bypass safeguards or extract sensitive data. These attacks range from simple jailbreak prompts that override safety rules to more advanced exploits that influence backend systems. ... Model extraction attacks allow adversaries to systematically query an LLM to reconstruct its knowledge base or training data, essentially cloning its capabilities. These attacks often rely on automated scripts submitting millions of queries to map the model’s responses. One common technique, model inversion, involves strategically structured inputs that extract sensitive or proprietary information embedded in the model. Attackers may also use repeated, incremental queries with slight variations to amass a dataset that mimics the original training data. ... On the output side, an LLM might inadvertently reveal private information embedded in its dataset or previously entered user data. A common risk scenario involves users unknowingly submitting financial records or passwords into an AI-powered chatbot, which could then store, retrieve or expose this data unpredictably. With cloud-based LLMs, the risk extends further. Data from one organization could surface in another’s responses.
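
To make the input/output risks concrete, here is a minimal sketch (not from the article) of a guard a platform team might wrap around an LLM endpoint: it rejects prompts that match known jailbreak phrasing and redacts obvious secrets in responses. All patterns and function names are illustrative assumptions, and pattern matching alone is not a sufficient defense against prompt injection.

```python
import re

# Illustrative jailbreak phrasings; a real deployment would tune and extend these.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your|the) hidden prompt",
]

# Illustrative patterns for data that should not leave the model boundary.
SENSITIVE_OUTPUT_PATTERNS = [
    r"\b\d{16}\b",                 # 16-digit, card-like numbers
    r"(?i)password\s*[:=]\s*\S+",  # "password: ..." fragments
]

def screen_prompt(prompt: str) -> str:
    """Reject prompts that match known injection phrasing before they reach the LLM."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            raise ValueError("Prompt rejected by injection filter")
    return prompt

def redact_response(response: str) -> str:
    """Mask obvious secrets or PII in model output before returning it to the caller."""
    for pattern in SENSITIVE_OUTPUT_PATTERNS:
        response = re.sub(pattern, "[REDACTED]", response)
    return response

if __name__ == "__main__":
    screen_prompt("Summarise our refund policy.")       # passes the filter
    print(redact_response("Note: password: hunter2"))   # prints "Note: [REDACTED]"
```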


Adopting Agentic AI: Ethical Governance, Business Impact, Talent Demand, and Data Security

Agentic AI introduces a spectrum of ethical challenges that demand proactive governance. Given its capacity for independent decision-making, there is a heightened need for transparent, accountable, and ethically driven AI models. Ethical governance in Agentic AI revolves around establishing robust policies that govern decision logic, bias mitigation, and accountability. Organizations leveraging Agentic AI must prioritize fairness, inclusivity, and regulatory compliance to avoid unintended consequences. ... The integration of Agentic AI into business ecosystems promises not just automation but strategic enhancement of decision-making. These AI agents are designed to process real-time data, predict market shifts, and autonomously execute decisions that would traditionally require human intervention. In sectors such as finance, healthcare, and manufacturing, Agentic AI is optimizing supply chains, enhancing predictive analytics, and streamlining operations with unparalleled accuracy. ... One of the major concerns surrounding Agentic AI is data security. Autonomous decision-making systems require vast amounts of real-time data to function effectively, raising questions about data privacy, ownership, and cybersecurity. Cyber threats aimed at exploiting autonomous decision-making could have severe consequences, especially in sectors like finance and healthcare.


Unveiling Supply Chain Transformation: IIoT and Digital Twins

Digital twins (DTs) and the Industrial Internet of Things (IIoT) are evolving technologies that are transforming the digital landscape of the supply chain. The IIoT connects actual physical sensors and actuators, while DTs are virtual replicas of the physical components. DTs are invaluable for testing and simulating design parameters without disrupting production elements. ... Contrary to generic IoT, which is more oriented towards consumers, the IIoT enables communication and interconnection between different machines, industrial devices, and sensors within a supply chain management ecosystem, with the aim of business optimization and efficiency. The integration of IIoT into supply chain management systems aims to enable real-time monitoring and analysis of industrial environments, including manufacturing, logistics management, and the supply chain. It boosts efforts to increase productivity, cut downtime, and facilitate informed, accurate decision-making. ... A supply chain equipped with IIoT will be a main ingredient in boosting real-time monitoring and enabling informed decision-making. Every stage of the supply chain ecosystem will feel the impact of IIoT, from automated inventory management and health monitoring and tracking of goods to analytics and real-time response to current marketplace demands.


The state of cloud security

An important complicating factor in all this is that customers don’t always know what’s happening in cloud data centers. At the same time, De Jong acknowledges that on-premises environments have the same problem. “There’s a spectrum of issues, and a lot of overlap,” he says, something Wesley Swartelé agrees with: “You have to align many things between on-prem and cloud.” Andre Honders points to a specific aspect of the cloud: “You can be in a shared environment with ten other customers. This means you have to deal with different visions and techniques that do not exist on-premises.” This is certainly the case. There are plenty of worst-case scenarios to consider in the public cloud. ... However, a major bottleneck remains the lack of qualified personnel. We hear this all the time when it comes to security. And in other IT fields too, as it happens, meaning one could draw a society-wide conclusion. Nevertheless, staff shortages are perhaps more acute in this sector. Erik de Jong sees society as a whole having similar problems, at any rate. “This is not an IT problem. Just ask painters. In every company, a small proportion of the workforce does most of the work.” Wesley Swartelé agrees it is a challenge for organizations in this industry to find the right people. “Finding a good IT professional with the right mindset is difficult.”


As AI reshapes the enterprise, security architecture can’t afford to lag behind

Technology works both ways – it enables the attacker and the smart defender. Cybercriminals are already capitalising on its potential, using open source AI models like DeepSeek and Grok to automate reconnaissance, craft sophisticated phishing campaigns, and produce deepfakes that can convincingly impersonate executives or business partners. What makes this especially dangerous is that these tools don’t just improve the quality of attacks; they multiply their volume. That’s why enterprises need to go beyond reactive defenses and start embedding AI-aware policies into their core security fabric. It starts with applying Zero Trust to AI interactions, limiting access based on user roles, input/output restrictions, and verified behaviour. ... As attackers deploy AI to craft polymorphic malware and mimic legitimate user behaviour, traditional defenses struggle to keep up. AI is now a critical part of the enterprise security toolkit, helping CISOs and security teams move from reactive to proactive threat defense. It enables rapid anomaly detection, surfaces hidden risks earlier in the kill chain, and supports real-time incident response by isolating threats before they can spread. But AI alone isn’t enough. Security leaders must strengthen data privacy and security by implementing full-spectrum DLP, encryption, and input monitoring to protect sensitive data from exposure, especially as AI interacts with live systems. 


Identity Is the New Perimeter: Why Proofing and Verification Are Business Imperatives

Digital innovation, growing cyber threats, regulatory pressure, and rising consumer expectations all drive the need for strong identity proofing and verification. Here is why it is more important than ever:
Combatting Fraud and Identity Theft: Criminals use stolen identities to open accounts, secure loans, or gain unauthorized access. Identity proofing is the first defense against impersonation and financial loss.
Enabling Secure Digital Access: As more services – from banking to healthcare – go digital, strong remote verification ensures secure access and builds trust in online transactions.
Regulatory Compliance: Laws such as KYC, AML, GDPR, HIPAA, and CIPA require identity verification to protect consumers and prevent misuse. Compliance is especially critical in finance, healthcare, and government sectors.
Preventing Account Takeover (ATO): Even legitimate accounts are at risk. Continuous verification at key moments (e.g., password resets, high-risk actions) helps prevent unauthorized access via stolen credentials or SIM swapping.
Enabling Zero Trust Security: Zero Trust assumes no inherent trust in users or devices. Continuous identity verification is central to enforcing this model, especially in remote or hybrid work environments.


Why should companies or organizations convert to FIDO security keys?

FIDO security keys significantly reduce the risk of phishing, credential theft, and brute-force attacks. Because they don’t rely on shared secrets like passwords, they can’t be reused or intercepted. Their phishing-resistant protocol ensures authentication is only completed with the correct web origin. FIDO security keys also address insider threats and endpoint vulnerabilities by requiring physical presence, further enhancing protection, especially in high-security environments such as healthcare or public administration. ... In principle, any organization that prioritizes a secure IT infrastructure stands to benefit from adopting FIDO-based multi-factor authentication. Whether it’s a small business protecting customer data or a global enterprise managing complex access structures, FIDO security keys provide a robust, phishing-resistant alternative to passwords. That said, sectors with heightened regulatory requirements, such as healthcare, finance, public administration, and critical infrastructure, have particularly strong incentives to adopt strong authentication. In these fields, the risk of breaches is not only costly but can also have legal and operational consequences. FIDO security keys are also ideal for restricted environments, such as manufacturing floors or emergency rooms, where smartphones may not be permitted. 
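
The phishing resistance described here comes from origin binding: the browser embeds the web origin in the signed clientDataJSON, so a credential registered for the real site cannot be replayed from a look-alike domain. Below is a minimal, illustrative sketch of just that origin/challenge check; the expected origin is an assumption, and a real relying party must also verify the authenticator signature and attestation with a proper FIDO2 library.

```python
import base64
import json

EXPECTED_ORIGIN = "https://login.example.com"  # assumption: the relying party's real origin

def verify_client_data(client_data_b64: str, expected_challenge: str) -> dict:
    """Check the origin and challenge embedded in WebAuthn clientDataJSON.

    Only the origin/challenge portion of FIDO2 verification is shown here;
    signature and attestation checks are deliberately omitted.
    """
    padding = "=" * (-len(client_data_b64) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(client_data_b64 + padding))
    if client_data.get("origin") != EXPECTED_ORIGIN:
        raise ValueError("Origin mismatch: possible phishing relay")
    if client_data.get("challenge") != expected_challenge:
        raise ValueError("Challenge mismatch: possible replay")
    return client_data

if __name__ == "__main__":
    demo = base64.urlsafe_b64encode(json.dumps({
        "type": "webauthn.get",
        "challenge": "abc123",
        "origin": "https://login.example.com",
    }).encode()).decode()
    print(verify_client_data(demo, "abc123"))
```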


Data Warehouse vs. Data Lakehouse

Data warehouses and data lakehouses have emerged as two prominent contenders in the data storage and analytics markets, each with advantages and disadvantages. The primary difference between these two data storage platforms is that while the data warehouse can handle only structured and semi-structured data, the data lakehouse can store virtually unlimited amounts of both structured and unstructured data. ... Traditional data warehouses have long supported all types of business professionals in their data storage and analytics endeavors. This approach involves ingesting structured data into a centralized repository, with a focus on warehouse integration and business intelligence reporting. Enter the data lakehouse approach, which is vastly superior for deep-dive data analysis. The lakehouse has successfully blended characteristics of the data warehouse and the data lake to create a scalable and unrestricted solution. The key benefit of this approach is that it enables data scientists to quickly extract insights from raw data with advanced AI tools. ... Although a data warehouse supports BI use cases and provides a “single source of truth” for analytics and reporting purposes, it can also become difficult to manage as new data sources emerge. The data lakehouse has redefined how global businesses store and process data.


AI or Data Governance? Gartner Says You Need Both

Data and analytics leaders, such as chief data officers (CDOs) and chief data and analytics officers (CDAOs), play a significant role in driving their organizations' data and analytics (D&A) successes, which are necessary to show business value from AI projects. Gartner predicts that by 2028, 80% of gen AI business apps will be developed on existing data management platforms. Its analysts say, "This is the best time to be in data and analytics," and CDAOs need to embrace the AI opportunity eyed by others in the C-suite, or they will be absorbed into other technical functions. With high D&A ambitions and AI pilots becoming increasingly ubiquitous, focus is shifting toward consistent execution and scaling. But D&A leaders are overwhelmed with their routine data management tasks and need a new AI strategy. ... "We've never been good at governance, and now AI demands that we be even faster, which means you have to take more risks and be prepared to fail. We have to accept two things: Data will never be fully governed. Secondly, attempting to fully govern data before delivering AI is just not realistic. We need a more practical solution like trust models," Zaidi said. He said trust models provide a trust rating for data assets by examining their value, lineage and risk. They offer up-to-date information on data trustworthiness and are crucial for fostering confidence.
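
As a rough illustration of the trust-model idea Zaidi describes, the sketch below combines value, lineage coverage and (inverted) risk into a single rating. The fields, weights and scoring are assumptions for illustration only, not Gartner's methodology.

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    business_value: float    # 0..1, e.g. from usage and criticality reviews (assumption)
    lineage_coverage: float  # 0..1, share of the pipeline with documented lineage (assumption)
    risk: float              # 0..1, e.g. sensitivity and exposure score (assumption)

def trust_score(asset: DataAsset, weights=(0.4, 0.35, 0.25)) -> float:
    """Blend value, lineage and inverted risk into a 0..1 trust rating."""
    w_value, w_lineage, w_risk = weights
    return round(
        w_value * asset.business_value
        + w_lineage * asset.lineage_coverage
        + w_risk * (1.0 - asset.risk),
        3,
    )

print(trust_score(DataAsset("customer_orders", business_value=0.9,
                            lineage_coverage=0.7, risk=0.2)))  # 0.805
```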

Daily Tech Digest - May 11, 2025


Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche



The Human-Centric Approach To Digital Transformation

Involving employees from the beginning of the transformation process is vital for fostering buy-in and reducing resistance. When employees feel they have a say in how new tools and processes will be implemented, they’re more likely to support them. In practice, early involvement can take many forms, including workshops, pilot programs, and regular feedback sessions. For instance, if a company is considering adopting a new project management tool, it can start by inviting employees to test various options, provide feedback, and voice their preferences. ... As companies increasingly adopt digital tools, the need for digital literacy grows. Employees who lack confidence or skills in using new technology are more likely to feel overwhelmed or resistant. Providing comprehensive training and support is essential to ensuring that all employees feel capable and empowered to leverage digital tools. Digital literacy training should cover the technical aspects of new tools and focus on their strategic benefits, helping employees see how these technologies align with broader company goals. ... The third pillar, adaptability, is crucial for sustaining digital transformation. In a human-centered approach, adaptability is encouraged and rewarded, creating a growth-oriented culture where employees feel safe to experiment, take risks, and share ideas. 


Forging OT Security Maturity: Building Cyber Resilience in EMEA Manufacturing

When it comes to OT security maturity, pragmatic measures that are easily implementable by resource-constrained SME manufacturers are the name of the game. Setting up an asset visibility program, network segmentation, and simple threat detection can attain significant value without requiring massive overhauls. Meanwhile, cultural alignment across IT and OT teams is essential. ... “To address evolving OT threats, organizations must build resilience from the ground up,” Mashirova told Industrial Cyber. “They should enhance incident response, invest in OT continuous monitoring, and promote cross-functional collaboration to improve operational resilience while ensuring business continuity and compliance in an increasingly hostile cyber environment.” ... “Manufacturers throughout the region are increasingly recognizing that cyber threats are rapidly shifting toward OT environments,” Claudio Sangaletti, OT leader at medmix, told Industrial Cyber. “In response, many companies are proactively developing and implementing comprehensive OT security programs. These initiatives aim not only to safeguard critical assets but also to establish robust business recovery plans to swiftly address and mitigate the impacts of potential attacks.”


Quantum Leap? Opinion Split Over Quantum Computing’s Medium-Term Impact

“While the actual computations are more efficient, the environment needed to keep quantum machines running, especially the cooling to near absolute zero, is extremely energy-intensive,” he says. When companies move their infrastructure to cloud platforms and transition key platforms like CRM, HCM, and Unified Comms Platform (UCP) to cloud-native versions, they can reduce the energy use associated with running large-scale physical servers 24/7. “If and when quantum computing becomes commercially viable at scale, cloud partners will likely absorb the cooling and energy overhead,” Johnson says. “That’s a win for sustainability and focus.” Alexander Hallowell, principal analyst at Omdia’s advanced computing division, says that unless one of the currently more “out there” technology options proves itself (e.g., photonics or something semiconductor-based), quantum computing is likely to remain infrastructure-intensive and environmentally fragile. “Data centers will need to provide careful isolation from environmental interference and new support services such as cryogenic cooling,” he says. He predicts the adoption of quantum computing within mainstream data center operations is at least five years out, possibly “quite a bit more.” 


Introduction to Observability

Observability has become a key concept in information technology, particularly in areas like DevOps and system administration. Essentially, observability involves inferring a system’s internal states by observing its outputs. This method offers an understanding of how systems behave, enabling teams to troubleshoot problems, enhance performance and ensure system reliability. In today’s IT landscape, the complexity and size of applications have grown significantly. Traditional monitoring techniques have struggled to keep up with the rise of technologies like microservices, containers and serverless architectures. ... Transitioning from monitoring to observability signifies a progression in the management and upkeep of systems. Although monitoring is crucial for keeping tabs on metrics and reacting to notifications, observability offers a comprehensive perspective and the in-depth analysis necessary for understanding and enhancing system performance. By combining both methods, companies can attain a more effective IT infrastructure. ... Observability depends on three elements to offer a perspective of system performance and behavior: logs, metrics and traces. These components, commonly known as the “three pillars of observability,” work together to give teams the information needed to analyze and enhance their systems.
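
A toy, standard-library-only sketch of the three pillars follows: a structured log line, a request counter, and a timed span tied together by a trace ID. Real systems would use something like OpenTelemetry; the names here are illustrative.

```python
import logging
import time
import uuid
from collections import Counter
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")   # pillar 1: logs
request_counter = Counter()           # pillar 2: metrics (a toy counter)

@contextmanager
def span(name: str, trace_id: str):   # pillar 3: traces (a toy timed span)
    start = time.perf_counter()
    try:
        yield
    finally:
        log.info("trace=%s span=%s duration_ms=%.1f",
                 trace_id, name, (time.perf_counter() - start) * 1000)

def handle_request():
    trace_id = uuid.uuid4().hex       # correlates the log lines of one request
    request_counter["checkout.requests"] += 1
    with span("checkout", trace_id):
        log.info("trace=%s processing order", trace_id)

handle_request()
print(dict(request_counter))
```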


Cloud Strategy 2025: Repatriation Rises, Sustainability Matures, and Cost Management Tops Priorities

After more than twenty years of trial-and-error, the cloud has arrived at its steady state. Many organizations have seemingly settled on the cloud mix best suited to their business needs, embracing a hybrid strategy that utilizes at least one public and one private cloud. ... Sustainability is quickly moving from aspiration to expectation for businesses. ... Cost savings still takes the top spot for a majority of organizations, but notably, 31% now report equal prioritization between cost optimization and sustainability. The increased attention on sustainability comes as the internal and external regulatory pressures mount for technology firms to meet environmental requirements. There is also the reputational cost at play – scrutiny over sustainability efforts is on the rise from customers and employees alike. ... As organizations maintain a laser focus on cost management, FinOps has emerged as a viable solution for combating cost management challenges. A comprehensive FinOps infrastructure is a game-changer when it comes to an organization’s ability to wrangle overspending and maximize business value. Additionally, FinOps helps businesses activate on timely, data-driven insights, improving forecasting and encouraging cross-functional financial accountability.


Building Adaptive and Future-Ready Enterprise Security Architecture: A Conversation with Yusfarizal Yusoff

Securing Operational Technology (OT) environments in critical industries presents a unique set of challenges. Traditional IT security solutions are often not directly applicable to OT due to the distinctive nature of these environments, which involve legacy systems, proprietary protocols, and long lifecycle assets that may not have been designed with cybersecurity in mind. As these industries move toward greater digitisation and connectivity, OT systems become more vulnerable to cyberattacks. One major challenge is ensuring interoperability between IT and OT environments, especially when OT systems are often isolated and have been built to withstand physical and environmental stresses, rather than being hardened against cyber threats. Another issue is the lack of comprehensive security monitoring in many OT environments, which can leave blind spots for attackers to exploit. To address these challenges, security architects must focus on network segmentation to separate IT and OT environments, implement robust access controls, and introduce advanced anomaly detection systems tailored for OT networks. Furthermore, organisations must adopt specialised OT security tools capable of addressing the unique operational needs of industrial environments. 
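
As a small illustration of the kind of anomaly detection mentioned above, the sketch below learns a baseline from a known-good window of telemetry and flags readings that deviate sharply from it. The data, field meanings and 3-sigma threshold are illustrative assumptions; production OT monitoring is considerably more involved.

```python
import statistics

def baseline(values):
    """Learn mean/stdev from a known-good window of sensor or flow-count samples."""
    return statistics.fmean(values), statistics.pstdev(values) or 1e-9

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag a reading whose z-score against the baseline exceeds the threshold."""
    return abs(value - mean) / stdev > threshold

# Illustrative: packets-per-second on a PLC network segment (values are made up).
good_window = [42, 44, 41, 43, 45, 42, 44]
mean, stdev = baseline(good_window)
print([v for v in (43, 44, 390) if is_anomalous(v, mean, stdev)])  # [390]
```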


CDO and CAIO roles might have a built-in expiration date

“The CDO role is likely to be durable, largely due to the long-term strategic value of data; however, it is likely to evolve to encompass more strategic business responsibility,” he says. “The CAIO, on the other hand, is likely to be subsumed into CTO or CDO roles as AI technology folds into core technologies and architectures standardize.” For now, both CAIOs and CDOs have responsibilities beyond championing the use of AI and good data governance, Stone adds. They will build the foundation for enterprise-wide benefits of AI and good data management. “As AI and data literacy take hold across the enterprise, CDOs and CAIOs will shift from internal change enablers and project champions to strategic leaders and organization-wide enablers,” he says. “They are, and will continue to grow more, responsible for setting standards, aligning AI with business goals, and ensuring secure, scalable operations.” Craig Martell, CAIO at data security and management vendor Cohesity, agrees that the CDO position may have a better long-term prognosis than the CAIO position. Good data governance and management will remain critical for many organizations well into the future, he says, and that job may not be easy to fold into the CIO’s responsibilities. “What the chief data officer does is different than what the CIO does,” says Martell.


Chaos Engineering with Gremlin and Chaos-as-a-Service: An Empirical Evaluation

As organizations increasingly adopt microservices and distributed architectures, the potential for unpredictable failures grows. Traditional testing methodologies often fail to capture the complexity and dynamism of live systems. Chaos engineering addresses this gap by introducing carefully planned disturbances to test system responses under duress. This paper explores how Gremlin can be used to perform such experiments on AWS EC2 instances, providing actionable insights into system vulnerabilities and recovery mechanisms. ... Chaos engineering originated at Netflix with the development of the Chaos Monkey tool, which randomly terminated instances in production to test system reliability. Since then, the practice has evolved with tools like Gremlin, LitmusChaos, and Chaos Toolkit offering more controlled and systematic approaches. Gremlin offers a SaaS-based chaos engineering platform with a focus on safety, control, and observability. ... Chaos engineering using Gremlin on EC2 has proven effective in validating the resilience of distributed systems. The experiments helped identify areas for improvement, including better configuration of health checks and fine-tuning auto-scaling thresholds. The blast radius concept ensured safe testing without risking the entire system.
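
For readers who want a feel for what a controlled, tag-scoped experiment looks like, here is a hedged sketch of a Chaos Monkey-style instance kill against EC2 using boto3. It is not Gremlin's API (Gremlin experiments are driven through its SaaS platform and agents); the region, tag name and dry-run default are assumptions, and the tag filter is what keeps the blast radius small.

```python
import random
import boto3
from botocore.exceptions import ClientError

# Assumptions: AWS credentials are configured, and only instances explicitly
# tagged chaos-eligible=true are in scope for the experiment.
ec2 = boto3.client("ec2", region_name="us-east-1")

def chaos_eligible_instances():
    """Return running instance IDs inside the agreed blast radius (tag filter)."""
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:chaos-eligible", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    return [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]

def terminate_one(dry_run=True):
    """Terminate one random in-scope instance; DryRun keeps the default run harmless."""
    targets = chaos_eligible_instances()
    if not targets:
        return None
    victim = random.choice(targets)
    try:
        ec2.terminate_instances(InstanceIds=[victim], DryRun=dry_run)
    except ClientError as err:
        # With DryRun=True, AWS reports DryRunOperation when the call would have succeeded.
        if err.response["Error"]["Code"] != "DryRunOperation":
            raise
    return victim

print(terminate_one(dry_run=True))
```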


How digital twins are reshaping clinical trials

While the term "digital twin" is often associated with synthetic control arms, Walsh stressed that the most powerful and regulatory-friendly application lies in randomized controlled trials (RCTs). In this context, digital twins do not replace human subjects but act as prognostic covariates, enhancing trial efficiency while preserving randomization and statistical rigor. "Digital twins make every patient more valuable," Walsh explained. "Applied correctly, this means that trials may be run with fewer participants to achieve the same quality of evidence." ... "Digital twins are one approach to enable highly efficient replication studies that can lower the resource burden compared to the original trial," Walsh clarified. "This can include supporting novel designs that replicate key results while also assessing additional clinical or biological questions of interest." In effect, this strategy allows for scientific reproducibility without repeating entire protocols, making it especially relevant in therapeutic areas with limited eligible patient populations or high participant burden. In early development -- particularly phase 1b and phase 2 -- digital twins can be used as synthetic controls in open-label or single-arm studies. This design is gaining traction among sponsors seeking to make faster go/no-go decisions while minimizing patient exposure to placebos or standard-of-care comparators.


The Great European Data Repatriation: Why Sovereignty Starts with Infrastructure

Data repatriation is not merely a reactive move driven by fear. It’s a conscious and strategic pivot. As one industry leader recently noted in Der Spiegel, “We’re receiving three times as many inquiries as usual.” The message is clear: European companies are actively evaluating alternatives to international cloud infrastructures—not out of nationalism, but out of necessity. The scale of this shift is hard to ignore. Recent reports have cited a 250% user growth on platforms offering sovereign hosting, and inquiries into EU-based alternatives have surged over a matter of months. ... Challenges remain: Migration is rarely a plug-and-play affair. As one European CEO emphasized to The Register, “Migration timelines tend to be measured in months or years.” Moreover, many European providers still lack the breadth of features offered by global cloud platforms, as a KPMG report for the Dutch government pointed out. Yet the direction is clear.  ... Europe’s data future is not about isolation, but balance. A hybrid approach—repatriating sensitive workloads while maintaining flexibility where needed—can offer both resilience and innovation. But this journey starts with one critical step: ensuring infrastructure aligns with European values, governance, and control.

Daily Tech Digest - March 14, 2025


Quote for the day:

“Success does not consist in never making mistakes but in never making the same one a second time.” --George Bernard Shaw


The Maturing State of Infrastructure as Code in 2025

The progression from cloud-specific frameworks to declarative, multicloud solutions like Terraform represented the increasing sophistication of IaC capabilities. This shift enabled organizations to manage complex environments with never-before-seen efficiency. The emergence of programming language-based IaC tools like Pulumi then further blurred the lines between application development and infrastructure management, empowering developers to take a more active role in ops. ... For DevOps and platform engineering leaders, this evolution means preparing for a future where cloud infrastructure management becomes increasingly automated, intelligent and integrated with other aspects of the software development life cycle. It also highlights the importance of fostering a culture of continuous learning and adaptation, as the IaC landscape continues to evolve at a rapid pace. ... Firefly’s “State of Infrastructure as Code (IaC)” report is an annual pulse check on the rapidly evolving state of IaC adoption, maturity and impact. Over the course of the past few editions, this report has become an increasingly crucial resource for DevOps professionals, platform engineers and site reliability engineers (SREs) navigating the complexities of multicloud environments and a changing IaC tooling landscape.
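
To show what programming-language-based IaC looks like in practice, here is a minimal Pulumi program in Python that declares a tagged S3 bucket and exports its name. It assumes the Pulumi CLI, AWS credentials and the pulumi/pulumi_aws packages are set up; the resource name and tags are illustrative.

```python
import pulumi
import pulumi_aws as aws

# Infrastructure declared as ordinary Python; `pulumi up` computes and applies the diff.
artifact_bucket = aws.s3.Bucket(
    "artifact-bucket",
    tags={"team": "platform", "managed-by": "pulumi"},
)

# Exported outputs show up in `pulumi stack output`.
pulumi.export("bucket_name", artifact_bucket.id)
```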


Consent Managers under the Digital Personal Data Protection Act: A Game Changer or Compliance Burden?

The use of Consent Managers provides advantages for both Data Fiduciaries and Data Principals. For Data Fiduciaries, Consent Managers simplify compliance with consent-related legal requirements, making it easier to manage and document user consent in line with regulatory obligations. For Data Principals, Consent Managers offer a streamlined and efficient way to grant, modify, and revoke consent, empowering them with greater control over how their personal data is shared. This enhanced efficiency in managing consent also leads to faster, more secure, and smoother data flows, reducing the complexities and risks associated with data exchanges. Additionally, Consent Managers play a crucial role in helping Data Principals exercise their right to grievance redressal. ... Currently, Data Fiduciaries can manage user consent independently, making the role of Consent Managers optional. If this remains voluntary, many companies may avoid them, reducing their effectiveness. For Consent Managers to succeed, they need regulatory support, flexible compliance measures, and a business model that balances privacy protection with industry participation. ... Rooted in the fundamental right to privacy under Article 21 of the Constitution of India, the DPDPA aims to establish a structured approach to data processing while preserving individual control over personal information.


The future of AI isn’t the model—it’s the system

Enterprise leaders are thinking differently about AI in 2025. Several founders here told me that unlike in 2023 and 2024, buyers are now focused squarely on ROI. They want systems that move beyond pilot projects and start delivering real efficiencies. Mensch says enterprises have developed “high expectations” for AI, and many now understand that the hard part of deploying it isn’t always the model itself—it’s everything around it: governance, observability, security. Mistral, he says, has gotten good at connecting these layers, along with systems that orchestrate data flows between different models and subsystems. Once enterprises grapple with the complexity of building full AI systems—not just using AI models—they start to see those promised efficiencies, Mensch says. But more importantly, C-suite leaders are beginning to recognize the transformative potential. Done right, AI systems can radically change how information moves through a company. “You’re making information sharing easier,” he says. Mistral encourages its customers to break down silos so data can flow across departments. One connected AI system might interface with HR, R&D, CRM, and financial tools. “The AI can quickly query other departments for information,” Mensch explains. “You no longer need to query the team.”


Generative AI is finally finding its sweet spot, says Databricks chief AI scientist

Beyond the techniques, knowing what apps to build is itself a journey and something of a fishing expedition. "I think the hardest part in AI is having confidence that this will work," said Frankle. "If you came to me and said, 'Here's a problem in the healthcare space, here are the documents I have, do you think AI can do this?' my answer would be, 'Let's find out.'" ... "Suppose that AI could automate some of the most boring legal tasks that exist?" offered Frankle, whose parents are lawyers. "If you wanted an AI to help you do legal research, and help you ideate about how to solve a problem, or help you find relevant materials -- phenomenal!" "We're still in very early days" of generative AI, "and so, kind of, we're benefiting from the strengths, but we're still learning how to mitigate the weaknesses." ... In the midst of uncertainty, Frankle is impressed with how customers have quickly traversed the learning curve. "Two or three years ago, there was a lot of explaining to customers what generative AI was," he noted. "Now, when I talk to customers, they're using vector databases." "These folks have a great intuition for where these things are succeeding and where they aren't," he said of Databricks customers. Given that no company has an unlimited budget, Frankle advised starting with an initial prototype, so that investment only proceeds to the extent that it's clear an AI app will provide value.


Australia’s privacy watchdog publishes regulatory strategy prioritizing biometrics

The strategy plan includes a table of activities and estimated timelines, a detailed breakdown of actions in specific categories, and a list of projected long- and short-term outcomes. The goals are ambitious in scope: a desired short-term outcome is to “mature existing awareness about privacy across multiple domains of life” so that “individuals will develop a more nuanced understanding of privacy issues recognising their significance across various aspects of their lives, including personal, professional, and social domains.” Laws, skills training and better security tools are one thing, but changing how people understand their privacy is a major social undertaking. The OAIC’s long-term outcomes seem more rooted in practicality; they include the widespread implementation of enhanced privacy compliance practices for organizations, better public understanding of the OAIC’s role as regulator, and enhanced data handling industry standards. ... AI is a matter of ongoing concern, and compliance for model training and development will be a major focus for the regulator. In late February, Kind delivered a speech on privacy and security in retail that references her decision on the Bunnings case, which led to the publication of guidance on the use of facial recognition technology, focused on four key privacy concepts: necessity/proportionality, consent/transparency, accuracy/bias, and governance.


Hiring privacy experts is tough — here’s why

“Some organizations think, ‘Well, we’re funding security, and privacy is basically the same thing, right?’ And I think that’s really one of my big concerns,” she says. This blending of responsibilities is reflected in training practices, according to Kazi, who notes how many organizations combine security and privacy training, which isn’t inherently problematic, but it carries risks. “One of the questions we ask in our survey is, ‘Do you combine security training and privacy training?’ Some organizations say they do not necessarily see it as a bad thing, but you can … be doing security, but you’re not doing privacy. And so that’s what’s highly concerning is that you can’t have privacy without security, but you could potentially do security well without considering privacy.” As Trovato emphasizes, “cybersecurity people tend to be from Mars and privacy people from Venus”, yet he also observes how privacy and cybersecurity professionals are often grouped together, adding to the confusion about what skills are truly needed. ... “Privacy includes how are we using data, how are you collecting it, who are you sharing it with, how are you storing it — all of these are more subtle component pieces, and are you meeting the requirements of the customer, of the regulator, so it’s a much more outward business focus activity day-to-day versus we’ve got to secure everything and make sure it’s all protected.”


Security Maturity Models: Leveraging Executive Risk Appetite for Your Secure Development Evolution

With developers under pressure to produce more code than ever before, development teams need to have a high level of security maturity to avoid rework. That necessitates having highly skilled personnel working within a strategic, prevention-focused framework. Developer and AppSec teams must work closely together, as opposed to the old model of operating as separate entities. Today, developers need to assume a significant role in ensuring security best practices. The most recent BSIMM report from Black Duck Software, for instance, found that there are only 3.87 AppSec professionals for every 100 developers, which doesn’t bode well for AppSec teams trying to secure an organization’s software all on their own. A critical part of learning initiatives is the ability to gauge the progress of developers in the program, both to ensure that developers are qualified to work on the organization’s most sensitive projects and to assess the effectiveness of the program. This upskilling should be ongoing, and you should always look for areas that can be improved. Making use of a tool like SCW’s Trust Score, which uses benchmarks to gauge progress both internally and against industry standards, can help ensure that progress is being made.


Why thinking like a tech company is essential for your business’s survival

The phrase “every company is a tech company” gets thrown around a lot, but what does that actually mean? To us, it’s not just about using technology — it’s about thinking like a tech company. The most successful tech companies don’t just refine what they already do; they reinvent themselves in anticipation of what’s next. They place bets. They ask: Where do we need to be in five or 10 years? And then, they start moving in that direction while staying flexible enough to adapt as the market evolves. ... Risk management is part of our DNA, but AI presents new types of risks that businesses haven’t dealt with before. ... No matter how good our technology is, our success ultimately comes down to people. And we’ve learned that mindset matters more than skill set. When we launched an AI proof-of-concept project for our interns, we didn’t recruit based on technical acumen. Instead, we looked for curious, self-starting individuals willing to experiment and learn. What we found was eye-opening—these interns thrived despite having little prior experience with AI. Why? Because they asked great questions, adapted quickly, and weren’t afraid to explore. ... Aligning your culture, processes and technology strategy ensures you can adapt to a rapidly changing landscape while staying true to your core purpose.


Realizing the Internet of Everything

The obvious answer to this problem is governance, a set of rules that constrain use and technology to enforce them. The problem, as it is so often with the “obvious,” is that setting the rules would be difficult and constraining use through technology would be difficult to do, and probably harder to get people to believe in. Think about Asimov’s Three Laws of Robotics and how many of his stories focused on how people worked to get around them. Two decades ago, a research lab did a video collaboration experiment that involved a small camera in offices so people could communicate remotely. Half the workforce covered their camera when they got in. I know people who routinely cover their webcams when they’re not on a scheduled video chat or meeting, and you probably do too. So what if the light isn’t on? Somebody has probably hacked in. Social concerns inevitably collide with attempts to integrate technology tightly with how we live. Have we reached a point where dealing with those concerns convincingly is essential in letting technology improve our work, our lives, further? We do have widespread, if not universal, video surveillance. On a walk this week, I found doorbell cameras or other cameras on about a quarter of the homes I passed, and I’d bet there are even more in commercial areas. 


Cloud Security Architecture: Your Guide to a Secure Infrastructure

Threat modeling can be a good starting point, but it shouldn't end with a stack-based security approach. Rather than focusing solely on the technologies, approach security by mapping parts of your infrastructure to equivalent security concepts. Here are some practical suggestions and areas to zoom in on for implementation. ... When protecting workloads in the cloud, consider using some variant of runtime security. Kubernetes users have no shortage of choice here with tools such as Falco, an open-source runtime security tool that monitors your applications and detects anomalous behaviors. However, chances are your cloud provider has some form of dynamic threat detection for your workloads. For example, AWS offers Amazon GuardDuty, which continuously monitors your workloads for malicious activity and unauthorized behavior. ... Implementing two-factor authentication adds an extra layer of protection by requiring a second form of verification, such as an authenticator app or a passkey, in addition to your password. While reaching for your authenticator app every time you log in might seem slightly inconvenient, it's a far better outcome than dealing with the aftermath of a breached account. The minor inconvenience is a small price to pay for the added security it provides.
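
As one concrete example of switching on provider-native threat detection, the hedged sketch below enables Amazon GuardDuty in a region via boto3 if no detector exists yet. The region and publishing frequency are illustrative assumptions, and credentials with GuardDuty permissions are required.

```python
import boto3

guardduty = boto3.client("guardduty", region_name="eu-west-1")  # region is illustrative

def ensure_guardduty_enabled() -> str:
    """Enable GuardDuty threat detection in this region if no detector exists yet."""
    detectors = guardduty.list_detectors().get("DetectorIds", [])
    if detectors:
        return detectors[0]
    created = guardduty.create_detector(
        Enable=True,
        FindingPublishingFrequency="FIFTEEN_MINUTES",
    )
    return created["DetectorId"]

print("GuardDuty detector:", ensure_guardduty_enabled())
```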

Daily Tech Digest - November 09, 2023

MIT Physicists Transform Pencil Lead Into Electronic “Gold”

MIT physicists have metaphorically turned graphite, or pencil lead, into gold by isolating five ultrathin flakes stacked in a specific order. The resulting material can then be tuned to exhibit three important properties never before seen in natural graphite. ... “We found that the material could be insulating, magnetic, or topological,” Ju says. The latter is somewhat related to both conductors and insulators. Essentially, Ju explains, a topological material allows the unimpeded movement of electrons around the edges of a material, but not through the middle. The electrons are traveling in one direction along a “highway” at the edge of the material separated by a median that makes up the center of the material. So the edge of a topological material is a perfect conductor, while the center is an insulator. “Our work establishes rhombohedral stacked multilayer graphene as a highly tunable platform to study these new possibilities of strongly correlated and topological physics,” Ju and his coauthors conclude in Nature Nanotechnology.


Conscientious Computing – Facing into Big Tech Challenges

The tech industry has driven incredibly rapid innovation by taking advantage of increasingly cheap and more powerful computing – but at what unintended cost? What collateral damage has been created in our era of “move fast and break things”? Sadly, it’s now becoming apparent we have overlooked the broader impacts of our technological solutions. As software proliferates through every facet of life and the scale of it increases, we need to think more about where this leads us from people, planet and financial perspectives. ... The classic Scope, Cost, Time pyramid – but often it’s the observable functional quality that is prioritised. For that I’ll use a somewhat surreal version of an iceberg – as so much of technical (and effectively sustainability) debt – a topic for a future blog – is hidden below the water line. Every engineering decision (or indecision) has ethical and sustainability consequences, often invisible from within our isolated bubbles. Just as the industry has had to raise its game on topics such as security, privacy and compliance, we desperately need to raise our game holistically on sustainability.


The CIO’s fatal flaw: Too much leadership, not enough management

So why does leadership get all the buzz? A cynic might suggest that the more respect doing-the-work gets, the more the company might have to pay the people who do that work, which in turn would mean those who manage the work would get paid more than those who think and charismatically express deep and inspirational thoughts. And as there are more people who do work than those who manage it, respecting the work and those who do it would be expensive. Don’t misunderstand. Done properly, leading is a lot of work, and because leading is about people, not processes or tools and technology, it’s time-consuming, too. And in fact, when I conduct leadership seminars, the biggest barrier to success for most participants is figuring out and committing to their time budget. Leadership, that is, involves setting direction, making or facilitating decisions, staffing, delegating, motivating, overseeing team dynamics, engineering the business culture, and communicating. Leaders who are committed to improving at their trade must figure out how much time they plan to devote to each of these eight tasks, which is hard enough.


The Next IT Challenge Is All about Speed and Self-Service

One of the most significant roadblocks to rapid cloud adoption is sheer complexity. Provisioning a cloud environment involves dozens of dependent services, intricate configurations, security policies and data governance issues. The cognitive load on IT teams is significant, and the situation is exacerbated by manual processes that are still in place. The vast majority of engineering teams still depend on legacy ticketing systems to request IT for cloud environments, which adds a significant load on IT and also slows engineering teams. This slows down the entire operation, making it difficult for IT and engineering to support business needs effectively. In fact, in one study conducted by Rafay Systems, application developers at enterprises revealed that 25% of organizations reportedly take three months or longer to deploy a modern application or service after its code is complete. The real goal for any IT department is to support the needs of the business. Today, they do that better, faster and more cost-effectively by leveraging cloud technologies to realize all the business benefits of the modern applications being deployed.


The DPDP Act: Bolstering data protection & privacy, making India future-ready

The DPDP Act has a direct impact across industries. Organisations not only need to reassess their existing compliance status and gear up to cope with the new norms but also create a phased action plan for various processes. Moreover, if labelled a Significant Data Fiduciary (SDF), organisations also need to appoint a Data Protection Officer (DPO). In addition, organisations need to devise an appropriate data protection and privacy policy framework in alignment with the DPDP Act. Further, consent forms and mechanisms have to be developed to ensure standard procedures as laid out in the legislation. Companies have to additionally invest to adopt the necessary changes in compliance with the law. They need to list their third-party data handlers, consent types and processes, privacy notices, and contract clauses, categorise data, and develop breach management processes. Sharing his perspective on the DPDP Act, Amit Jaju, Senior Managing Director, Ankura Consulting Group (India), says, “The Digital Personal Data Protection Act 2023 has ushered in a new era of data privacy and protection, compelling solution providers to realign their business strategies with its mandates.”


Will AI hurt or help workers? It's complicated

Here's what is certain: CIOs see AI as being useful, but not replacing higher-level workers. JetRockets recently surveyed US CIOs. In its report, How Generative AI is Impacting IT Leaders & Organizations, the custom-software firm found that CIOs are primarily using AI for cybersecurity and threat detection (81%), with predictive maintenance and equipment monitoring (69%) and software development / product development (68%) in second and third place, respectively. Security, you ask? Yes, security. CrowdStrike, a security company, sees a huge demand building for AI-based security virtual assistants. A Gartner study on virtual assistants predicted, "By 2024, 40% of advanced virtual assistants will be industry-domain-specific; by 2025, advanced virtual assistants will provide advisory and intervention roles for 30% of knowledge workers, up from 5% in 2021." By CrowdStrike's reckoning, AI will "help organizations scale their cybersecurity workforce by three times and reduce operating costs by close to half a million dollars." That's serious cash.


From Chaos to Confidence: The Indispensable Role of Security Architecture

Beyond mere firefighting, security architecture embraces the proactive art of strategic defense. It takes a risk-based approach to identifying potential threats, assessing weak points in an organization's IT stack, architecting forward-looking designs and prioritizing security initiatives. By aligning security investments with the organization's risk tolerance and business priorities, security architecture ensures that precious resources are optimally allocated for maximum security defense designed with in-depth zero trust security principles in mind. This reduces enterprise application deployment and operational security costs. It is similar to designing high-rise buildings in a standard manner, following all safety codes and by-laws while still allowing individual apartment owners to design and create their homes as they would prefer. Cyberattacks have become increasingly sophisticated and frequent. As a result, it is imperative for defense systems to have comprehensive, purpose-built architectures and designs in place to protect against such threats. Security architecture provides a complete defense framework by integrating various security components.


Top 5 IT disaster scenarios DR teams must test

Failed backups are some of the most frequent IT disasters. Businesses can replace hardware and software, but if the data and all backups are gone, bringing them back might be impossible or incredibly expensive. Sys admins must periodically test their ability to restore from backups to ensure backups are working correctly and the restore process does not have some unseen fatal flaw. At the same time, there should always be multiple generations of backups, with some of those backup sets off site. ... Hardware failure can take many forms, including a system not using RAID, a single disk loss taking down a whole system, faulty network switches and power supply failures. Most hardware-based IT disaster scenarios can be mitigated with relative ease, but at the cost of added complexity and a price tag. One example is a database server. Such a server can be turned into a database cluster with highly available storage and networking. The cost for doing this would easily double the cost of a single nonredundant server. Administrators would also have to undergo training to manage such an environment.
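
A periodic restore test can be as simple as the sketch below: copy the latest backup to scratch storage, open it, and run an integrity check plus a sanity query. It assumes SQLite-file backups under an illustrative directory and an illustrative 'orders' table; most shops would instead restore a database dump, but the discipline of proving the restore path works is the same.

```python
import shutil
import sqlite3
from pathlib import Path

BACKUP_DIR = Path("/backups/nightly")    # illustrative backup location
SCRATCH = Path("/tmp/restore-test.db")   # scratch copy, never the live database

def latest_backup() -> Path:
    backups = sorted(BACKUP_DIR.glob("*.db"))
    if not backups:
        raise RuntimeError("No backups found: the backup job itself is failing")
    return backups[-1]

def test_restore() -> bool:
    """Restore the newest backup to scratch space and verify it is usable."""
    shutil.copy(latest_backup(), SCRATCH)
    with sqlite3.connect(SCRATCH) as conn:
        ok = conn.execute("PRAGMA integrity_check").fetchone()[0] == "ok"
        rows = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    return ok and rows > 0

if __name__ == "__main__":
    print("restore test passed:", test_restore())
```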


Mastering AI Quality: Strategies for CDOs and Tech Leaders

Most chief data officers (CDOs) work hard to make their data operations into “glass boxes” -- transparent, explainable, explorable, trustworthy resources for their companies. Then comes artificial intelligence and machine learning (AI/ML), with their allure of using that data for ever-more impressive strategic leaps, efficiencies, and growth potential. However, there’s a problem. Nearly all AI/ML tools are “black boxes.” They are so inscrutable even their creators are concerned about how they produce their results. The speed and depth at which these tools can process data without human intervention or input presents a danger to technology leaders seeking control of their data and who want to ensure and verify the quality of analytics that use it. Combine this with a push to remove humans from the decision loop and you have a potent recipe for decisions to go off the rails. ... With a human collaborator or a human-designed algorithm, it is generally easy to elicit a meaningful response to the question, “Why is this result what it is?” With AI -- and generative AI in particular -- that may not be the case.


Revamping IT for AI System Support

“It’s important for everybody to understand how fast this [AI] is going to change,” said Eric Schmidt, former CEO and chairman of Google. “The negatives are quite profound.” Among the concerns is that AI firms still had “no solutions for issues around algorithmic bias or attribution, or for copyright disputes now in litigation over the use of writing, books, images, film, and artworks in AI model training. Many other as yet unforeseen legal, ethical, and cultural questions are expected to arise across all kinds of military, medical, educational, and manufacturing uses.” The challenge for companies and for IT is that the law always lags technology. There will be few hard and fast rules for AI as it advances relentlessly. So, AI runs the risk of running off ethical and legal guardrails. In this environment, legal cases are likely to arise that define case law and how AI issues will be addressed. The danger for IT and companies is that they don’t want to become the defining cases for the law by getting sued. CIOs can take action by raising awareness of AI as a corporate risk management concern to their boards and CEOs.



Quote for the day:

"Holding on to the unchangeable past is a waste of energy and serves no purpose in creating a better future." -- Unknown

Daily Tech Digest - December 15, 2021

Unstructured Data Will Be Key to Analytics in 2022

Many organizations today have a hybrid cloud environment in which the bulk of data is stored and backed up in private data centers across multiple vendor systems. As unstructured (file) data has grown exponentially, the cloud is being used as a secondary or tertiary storage tier. It can be difficult to see across the silos to manage costs, ensure performance and manage risk. As a result, IT leaders realize that extracting value from data across clouds and on-premises environments is a formidable challenge. Multicloud strategies work best when organizations use different clouds for different use cases and data sets. However, this brings about another issue: Moving data is very expensive when and if you need to later move data from one cloud to another. A newer concept is to pull compute toward data that lives in one place. That central place could be a colocation center with direct links to cloud providers. Multicloud will evolve with different strategies: sometimes compute comes to your data, sometimes the data resides in multiple clouds.


Developing Event-Driven Microservices

Microservices increasingly use event-driven architectures for communication, and, relatedly, many data-driven systems are also employing an event sourcing pattern of one form or another. This is when data changes are communicated via events that describe the change and are received by interested services. Thus, the data is sourced from the events, and event sourcing in general moves the source of truth for data to the event broker. This fits nicely with the decoupling paradigm of microservices. It is very important to note that there are actually two operations involved in event sourcing: the data change being made and the communication/event of that data change. There is, therefore, a transactional consideration, and any inconsistency or failure causing a lack of atomicity between these two operations must be accounted for. This is an area where TEQ has an extremely significant and unique advantage: as the messaging/eventing system, it is actually part of the database system itself and can therefore conduct both of these operations in the same local transaction and provide this atomicity guarantee.
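
The atomicity concern can also be handled outside the database with a transactional-outbox pattern, sketched generically below: the state change and the event describing it are written in one local transaction, and a separate relay later forwards unpublished events to the broker. This is an illustration of the problem TEQ solves natively inside Oracle, not TEQ's API; the table and column names are made up.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         topic TEXT, payload TEXT, published INTEGER DEFAULT 0);
""")

def place_order(order_id: int) -> None:
    """Write the business row and its event in one transaction: both commit or neither does."""
    with conn:
        conn.execute("INSERT INTO orders (id, status) VALUES (?, 'PLACED')", (order_id,))
        conn.execute(
            "INSERT INTO outbox (topic, payload) VALUES (?, ?)",
            ("orders.placed", json.dumps({"order_id": order_id})),
        )

place_order(42)
# A separate relay process would read unpublished outbox rows and forward them to the broker.
print(conn.execute("SELECT topic, payload FROM outbox WHERE published = 0").fetchall())
```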
 

Quantum computing use cases are getting real—what you need to know

Most known use cases fit into four archetypes: quantum simulation, quantum linear algebra for AI and machine learning, quantum optimization and search, and quantum factorization. We describe these fully in the report, as well as outline questions leaders should consider as they evaluate potential use cases. ... Quantum computing has the potential to revolutionize the research and development of molecular structures in the biopharmaceuticals industry as well as provide value in production and further down the value chain. In R&D, for example, new drugs cost an average of $2 billion and take more than ten years to reach the market after discovery. Quantum computing could make R&D dramatically faster and more targeted and precise by making target identification, drug design, and toxicity testing less dependent on trial and error and therefore more efficient. A faster R&D timeline could get products to the right patients more quickly and more efficiently—in short, it would improve more patients’ quality of life. Production, logistics, and supply chain could also benefit from quantum computing.


How Extended Security Posture Management Optimizes Your Security Stack

XSPM helps the security team deal with constant content configuration churn and leverages telemetry to identify gaps in security coverage, generating up-to-date emerging-threat feeds and providing additional test cases that emulate the TTPs attackers would use, saving DevSecOps teams the time needed to develop those test cases. When running XSPM validation modules, knowing that the tests are timely, current, and relevant enables teams to reflect on the efficacy of security controls and understand where to invest so that configuration, hygiene and posture are maintained through the constant changes in the environment. By providing visibility and maximizing relevance, XSPM helps verify that each dollar spent reduces risk and improves tool efficacy, through baselining and trending and through automatically generated reports with detailed recommendations on security hardening and tool-stack optimization; it also dramatically facilitates conversations with the board.


Edge computing keeps moving forward, but no standards yet

As powerful as this concept of seemingly unlimited computing resources may be, however, it does raise a significant, practical question. How can developers build applications for the edge when they don’t necessarily know what resources will be available at the various locations in which their code will run? Cloud computing enthusiasts may point out that a related version of this same dilemma faced cloud developers in the past, and they developed technologies for software abstraction that essentially relieved software engineers of this burden. However, most cloud computing environments had a much smaller range of potential computing resources. Edge computing environments, on the other hand, won’t only offer more choices, but also different options across related sites (such as all the towers in a cellular network). The end result will likely be one of the most heterogeneous targets for software applications that has ever existed. Companies like Intel are working to solve some of the heterogeneity issues with software frameworks. 
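One way to cope with that heterogeneity, sketched below purely as an assumption rather than any vendor's framework, is for the application to probe whatever resources the local edge node exposes at startup and pick a workload variant accordingly; the thresholds and variant names are illustrative only.

```python
import os
import shutil

def pick_model_variant() -> str:
    """Choose a workload size based on whatever this edge node offers.

    Thresholds and variant names are illustrative assumptions, not part
    of any real edge framework.
    """
    cpus = os.cpu_count() or 1
    # shutil.disk_usage gives a rough proxy for local scratch space.
    free_gb = shutil.disk_usage("/").free / 2**30
    if cpus >= 8 and free_gb > 50:
        return "full-model"
    if cpus >= 4:
        return "quantized-model"
    return "offload-to-regional-cloud"

print(pick_model_variant())
```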


The Mad Scramble To Lead The Talent Marketplace Market

While this often starts as a career planning or job matching system, early on companies realize it’s a mentoring tool, a way to connect to development programs, a way to promote job-sharing and gig work, and a way for hiring managers to find great staff. In reality, this type of solution becomes “the system for internal mobility and development,” so companies like Allstate, NetApp, and Schneider see it as an entire system for employee growth. Other companies, like Unilever, see it as a way to promote flexible work. These companies use the Talent Marketplace to encourage agile, gig-work and help people find projects or developmental assignments. Internal gig work and cross-functional projects are a massive trend (part of the movement toward Agile), and within a given function (IT, HR, Customer Service, Finance) they’re incredibly powerful. And since the marketplace democratizes opportunities, companies like Seagate see this as a diversity platform as well.


Inside the blockchain developer’s mind: Proof-of-stake blockchain consensus

The real innovation in Bitcoin (BTC) was the creation of an elegant system for combining cryptography with economics to leverage electronic coins (now called “cryptocurrencies”) to use incentives to solve problems that algorithms alone cannot solve. People were forced to perform meaningless work to mine blocks, but the security stems not from the performance of work, but from the knowledge that this work could not have been achieved without the sacrifice of capital. Were this not the case, then there would be no economic component to the system. The work is a verifiable proxy for sacrificed capital. Because the network has no means of “understanding” money that is external to it, a system needed to be implemented that converted the external incentive into something the network can understand: hashes. The more hashes an account creates, the more capital it must have sacrificed, and the more incentivized it is to produce blocks on the correct fork. Since these people have already spent their money to acquire hardware and run it to produce blocks, punishment as an incentive is easy because, in effect, they’ve already been punished!
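A toy sketch of the hashing described above may help: the miner keeps hashing until the digest meets a difficulty target, so the winning hash is verifiable evidence of how much work (and therefore sacrificed capital) went into producing it. The block contents and difficulty below are arbitrary assumptions.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Find a nonce whose SHA-256 hash starts with `difficulty` hex zeros.

    The expected number of attempts grows 16x per extra zero, which is why
    the resulting hash serves as verifiable evidence of work performed.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block #1: alice pays bob 5")
print(nonce, digest)  # anyone can re-hash once to verify the work
```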


Why Intuitive Troubleshooting Has Stopped Working for You

With complicated and complex, I’m using specific terminology from the Cynefin model. Cynefin (pronounced kuh-NEV-in) is a well-regarded system management framework that categorizes different types of systems in terms of how understandable they are. It also lays out how best to operate within those different categories — what works in one context won’t work as well in another — and it turns out that these operating models are extremely relevant to engineers operating today’s production software. Broadly, Cynefin describes four categories of system: obvious, complicated, complex, and chaotic. From the naming, you can probably guess that this categorization ranges from systems that are more predictable and understandable, to those that are less — where predictability is defined by how clear the relationship is between cause and effect. Obvious systems are the most predictable; the relationship between cause and effect is clear to anyone looking at the system. Complicated systems have a cause-and-effect relationship that is well understood, but only to those with system expertise. 


2022 will see a rise in application security orchestration and correlation (ASOC)

For organisations that build software, 2022 will be the year of invisible AppSec. When AppSec tools are run automatically, and when results are integrated with existing processes and issue trackers, developers can be fixing security weaknesses as part of their normal workflows. There is no reason for developers to go to separate systems to “do security,” and no reason they should be scrolling through thousand-page PDF reports from the security team, trying to figure out what needs to be done. When security testing is automated and integrated into a secure development process, it becomes a seamless part of application development. At the same time, organisations are coming to recognise that AppSec is a critical part of risk management, and that a properly implemented AppSec programme results in business benefits. Good AppSec equals fewer software vulnerabilities, which equals less risk of catastrophe or embarrassing publicity, but also results in fewer support cases, fewer emergency updates, higher productivity, and happier customers. But how can organisations turn this knowledge into power?
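A minimal sketch of what "invisible AppSec" could look like in practice, assuming a hypothetical scanner CLI that emits JSON and a hypothetical issue-tracker REST endpoint (neither is a real product named in the article): a CI step runs the scan and files findings straight into the developers' normal backlog, so nobody has to read a thousand-page PDF.

```python
import json
import subprocess
import urllib.request

# Hypothetical scanner command and tracker endpoint; swap in your real tools.
SCANNER_CMD = ["appsec-scanner", "--format", "json", "."]
TRACKER_URL = "https://tracker.example.com/api/issues"

def run_scan() -> list[dict]:
    """Run the (hypothetical) scanner and return its parsed findings."""
    out = subprocess.run(SCANNER_CMD, capture_output=True, text=True, check=True)
    return json.loads(out.stdout).get("findings", [])

def file_issue(finding: dict) -> None:
    """Create one tracker issue per finding via a plain JSON POST."""
    body = json.dumps({
        "title": f"[AppSec] {finding['rule']} in {finding['file']}",
        "description": finding.get("advice", ""),
        "labels": ["security", finding.get("severity", "medium")],
    }).encode()
    req = urllib.request.Request(TRACKER_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

if __name__ == "__main__":
    for finding in run_scan():
        file_issue(finding)  # findings land in the normal backlog, not a report
```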


Why Sustainability Is the Next Priority for Enterprise Software

To meet market and consumer demands, every enterprise will need to evolve their sustainability programs to be just as accurate and rigorous as financial accounting. Just as the Sarbanes–Oxley Act of 2002 mandates practices in financial record keeping and reporting for corporations in the US, we can expect laws and consumer expectations around sustainability impacts to follow suit. In the same way that SaaS platforms, cloud computing, and digital transformation have changed how enterprises sell, hire, and invest, we’re on the cusp of similar changes within sustainability. For example, as recently as the mid-2000s, interviewing for a new corporate job meant printing out resumes, distributing paper benefits pamphlets, and signing forms that had been Xeroxed a half-dozen times. Today, numerous human resources software companies offer streamlined digital solutions for tracking candidates, onboarding new colleagues, and managing benefits. When large organizations are faced with a high volume of data in any area of their business, digitization is the inevitable solution.



Quote for the day:

"The level of morale is a good barometer of how each of your people is experiencing your leadership." -- Danny Cox

Daily Tech Digest - December 12, 2021

AWS Among 12 Cloud Services Affected by Flaws in Eltima SDK

USB Over Ethernet enables sharing of multiple USB devices over Ethernet, so that users can connect to devices such as webcams on remote machines anywhere in the world as if the devices were physically plugged into their own computers. The flaws are in the USB Over Ethernet function of the Eltima SDK, not in the cloud services themselves, but because of code-sharing between the server side and the end user apps, they affect both clients – such as laptops and desktops running Amazon WorkSpaces software – and cloud-based machine instances that rely on services such as Amazon Nimble Studio AMI, which run in the Amazon cloud. The flaws allow attackers to escalate privileges so that they can launch a slew of malicious actions, including undermining the very security products that users depend on for protection. Specifically, the vulnerabilities can be used to “disable security products, overwrite system components, corrupt the operating system or perform malicious operations unimpeded,” SentinelOne senior security researcher Kasif Dekel said in a report published on Tuesday.


Rust in the Linux Kernel: ‘Good Enough’

When we first looked at the idea of Rust in the Linux kernel, it was noted that the objective was not to rewrite the kernel’s 25 million lines of code in Rust, but rather to augment new development with a language that is more memory-safe than the standard C normally used in Linux development. Part of the issue with using Rust is that Rust is compiled with LLVM, as opposed to GCC, and consequently supports fewer architectures. This is a problem we saw play out when the Python cryptography library replaced some old C code with Rust, leading to a situation where certain architectures would not be supported. Hence, using Rust for drivers would limit the impact of this particular limitation. Ojeda further noted that the Rust for Linux project has been invited to a number of conferences and events this past year, and has even garnered some support from Red Hat, which joins Arm, Google, and Microsoft in supporting the effort. According to Ojeda, Red Hat says that “there is interest in using Rust for kernel work that Red Hat is considering.”


DeepMind tests the limits of large AI language systems with 280-billion-parameter model

DeepMind, which regularly feeds its work into Google products, has probed the capabilities of these LLMs by building a language model with 280 billion parameters named Gopher. Parameters are a quick measure of a language model’s size and complexity, meaning that Gopher is larger than OpenAI’s GPT-3 (175 billion parameters) but not as big as some more experimental systems, like Microsoft and Nvidia’s Megatron model (530 billion parameters). It’s generally true in the AI world that bigger is better, with larger models usually offering higher performance. DeepMind’s research confirms this trend and suggests that scaling up LLMs does offer improved performance on the most common benchmarks testing things like sentiment analysis and summarization. However, researchers also cautioned that some issues inherent to language models will need more than just data and compute to fix. “I think right now it really looks like the model can fail in variety of ways,” said Rae.


2022 transformations promise better builders, automation, robotics

The Great Resignation is real, and it has affected the logistics industry more than anyone realizes. People don’t want low-paying and difficult jobs when there’s a global marketplace where they can find better work. Automation will be seen as a way to address this, and in 2022, we will see a lot of tech VC investment in automation and robotics. Some say SpaceX and Virgin can deliver cargo via orbit, but I think that’s ridiculous. What we need (and what I think will be funded in 2022) are more electric and autonomous vehicles like eVTOL, a company that is innovating the “air mobility” market. According to eVTOL’s website, the U.S. Department of Defense has awarded $6 million to the City of Springfield, Ohio, for a National Advanced Air Mobility Center of Excellence. ... In 2022, grocery will cease to be an in-store-only retail experience, and the sector will be as virtual and digitally driven as the best of them. Things get interesting when we combine locker pickup, virtual grocery, and automated last-mile delivery using autonomous vehicles that can deliver within a mile of the warehouse or store.


Penetration testing explained: How ethical hackers simulate attacks

In a broad sense, a penetration test works in exactly the same way that a real attempt to breach an organization's systems would. The pen testers begin by examining and fingerprinting the hosts, ports, and network services associated with the target organization. They will then research potential vulnerabilities in this attack surface, and that research might suggest further, more detailed probes into the target system. Eventually, they'll attempt to breach their target's perimeter and get access to protected data or gain control of their systems. The details, of course, can vary a lot; there are different types of penetration tests, and we'll discuss the variations in the next section. But it's important to note first that the exact type of test conducted and the scope of the simulated attack need to be agreed upon in advance between the testers and the target organization. A penetration test that successfully breaches an organization's important systems or data can cause a great deal of resentment or embarrassment among that organization's IT or security leadership.
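As a small, hedged illustration of the fingerprinting step described above, the sketch below probes a handful of common TCP ports using only Python's standard socket module; the target address and port list are placeholders, and such probing should only ever be run against systems you are authorized to test.

```python
import socket

def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Placeholder target -- only scan hosts you are explicitly authorized to test.
print(scan_ports("192.0.2.10", [22, 80, 443, 3306, 8080]))
```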


EV charging in underground carparks is hard. Blockchain to the rescue

According to Bharadwaj, the concrete and steel environment effectively acted as a “Faraday cage,” which meant that the EV chargers wouldn’t talk to people’s mobile phones when they tried to initiate charging. You could find yourself stranded, unable to charge your car. “So we had to innovate.” ... As with any EV charging, a payment app connects your car to the EV charger. With Xeal, the use of NFC means the only time you need the Internet is to download the app in the first instance and create a profile that includes your personal and vehicle information and payment details. You then receive a cryptographic token on your mobile phone that authenticates your identity and enables you to access all of Xeal’s public charging stations. The token is time-bound, which means it dissolves after use. To charge your car, you hold your phone up to the charger. Your mobile reads the cryptographic token, automatically bringing up an NFC scanner. It opens the app, authenticates your charging session, starts scanning, and within milliseconds, the charging session starts.
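Xeal's actual protocol isn't detailed in the excerpt, but a time-bound, offline-verifiable token could in principle look like the following sketch: an HMAC-signed payload with an embedded expiry that a charger can check without any Internet connection. The shared key, payload fields, and ten-minute lifetime are all assumptions for illustration.

```python
import hashlib
import hmac
import json
import time

SECRET = b"shared-demo-key"  # assumption: provisioned to chargers out of band

def issue_token(user_id: str, ttl_seconds: int = 600) -> str:
    """Sign a small payload with an expiry so it can be verified offline."""
    payload = json.dumps({"user": user_id, "exp": int(time.time()) + ttl_seconds})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str) -> bool:
    """Check the signature and the embedded expiry, no network required."""
    payload, sig = token.rsplit("|", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered with, or signed under a different key
    return json.loads(payload)["exp"] > time.time()  # "dissolves" after expiry

token = issue_token("driver-123")
print(verify_token(token))  # True until the TTL elapses
```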


Top 8 AI and ML Trends to Watch in 2022

The scarcity of skilled AI developers or engineers stands as a major barrier to adopting AI technology in many companies. No-code and low-code technologies come to the rescue. These solutions aim to offer simple interfaces, in theory, to develop highly complex AI systems. Today, web design and no-code user interface (UI) tools let users create web pages simply by dragging and dropping graphical elements together. Similarly, no-code AI technology allows developers to create intelligent AI systems by simply merging different ready-made modules and feeding them industrial domain-specific data. Furthermore, NLP, low-code, and no-code technologies will soon enable us to instruct complex machines with our voice or written instructions. These advancements will result in the “democratization” of AI, ML, and data technologies. ... In 2022, with the aid of AI and ML technologies, more businesses will automate multiple yet repetitive processes that involve large volumes of information and data. In the coming years, an increased rate of automation can be seen in various industries using robotic process automation (RPA) and intelligent business process management software (iBPMS). 


The limitations of scaling up AI language models

Large language models like OpenAI’s GPT-3 show an aptitude for generating humanlike text and code, automatically writing emails and articles, composing poetry, and fixing bugs in software. But the dominant approach to developing these models involves leveraging massive computational resources, which has consequences. Beyond the fact that training and deploying large language models can incur high technical costs, the requirements put the models beyond the reach of many organizations and institutions. Scaling also doesn’t resolve the major problem of model bias and toxicity, which often creeps in from the data used to train the models. In a panel during the Conference on Neural Information Processing Systems (NeurIPS) 2021, experts from the field discussed how the research community should adapt as progress in language models continues to be driven by scaled-up algorithms. The panelists explored how to ensure that smaller institutions can meaningfully research and audit large-scale systems, as well as ways they can help to ensure that the systems behave as intended.


Here are three ways distributed ledger technology can transform markets

While firms have narrowed their scope to address more targeted pain points, the increased digitalisation of assets is helping to drive interest in the adoption of DLT in new ways. Previous talk of mass disruption of the financial system has given way to more realistic, but still transformative, discussions around how DLT could open doors to a new era of business workflows, enabling transactional exchanges of assets and payments to be recorded, linked, and traced throughout their entire lifecycle. DLT’s true potential rests with its ability to eliminate traditional “data silos”, so that parties no longer need to build separate recording systems, each holding a copy of their version of “the truth”. This inefficiency leads to time delays, increased costs and data quality issues. In addition, the technology can enhance security and resilience, and would give regulators real-time access to ledger transactions to monitor and mitigate risk more effectively. In recent years, we have been pursuing a number of DLT-based opportunities, helping us understand where we believe the technology can deliver maximum value while retaining the highest levels of risk management.


To identity and beyond—One architect's viewpoint

Simple is often better: You can do (almost) anything with technology, but it doesn't mean you should. Especially in the security space, many customers overengineer solutions. I like this video from Google’s Stripe conference to underscore this point. People, process, technology: Design for people to enhance process, not tech first. There are no "perfect" solutions. We need to balance various risk factors and decisions will be different for each business. Too many customers design an approach that their users later avoid. Focus on 'why' first and 'how' later: Be the annoying 7-yr old kid with a million questions. We can't arrive at the right answer if we don't know the right questions to ask. Lots of customers make assumptions on how things need to work instead of defining the business problem. There are always multiple paths that can be taken. Long tail of past best practices: Recognize that best practices are changing at light speed. 



Quote for the day:

"Eventually relationships determine the size and the length of leadership." -- John C. Maxwell