
Daily Tech Digest - June 15, 2025


Quote for the day:

“Whenever you find yourself on the side of the majority, it is time to pause and reflect.” -- Mark Twain



Gazing into the future of eye contact

Eye contact is a human need. But it also offers big business benefits. Brain scans show that eye contact activates parts of the brain linked to reading others’ feelings and intentions, including the fusiform gyrus, medial prefrontal cortex, and amygdala. These brain regions help people figure out what others are thinking or feeling, which we all need for trusting business and work relationships. ... If you look into the camera to simulate eye contact, you can’t see the other person’s face or reactions. This means both people always appear to be looking away, even if they are trying to pay attention. It is not just awkward — it changes how people feel and behave. ... The iContact Camera Pro is a 4K webcam that uses a retractable arm that places the camera right in your line of sight so that you can look at the person and the camera at the same time. It lets you adjust video and audio settings in real time. It’s compact and folds away when not in use. It’s also easy to set up with a USB-C connection and works with Zoom, Microsoft Teams, Google Meet, and other major platforms. ... Finally, there’s Casablanca AI, software that fixes your gaze in real time during video calls, so it looks like you’re making eye contact even when you’re not. It works by using AI and GAN technology to adjust both your eyes and head angle, keeping your facial expressions and gestures natural, according to the company.


New York passes a bill to prevent AI-fueled disasters

“The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” said Senator Gounardes. “The people that know [AI] the best say that these risks are incredibly likely […] That’s alarming.” The RAISE Act is now headed for New York Governor Kathy Hochul’s desk, where she could either sign the bill into law, send it back for amendments, or veto it altogether. If signed into law, New York’s AI safety bill would require the world’s largest AI labs to publish thorough safety and security reports on their frontier AI models. The bill also requires AI labs to report safety incidents, such as concerning AI model behavior or bad actors stealing an AI model, should they happen. If tech companies fail to live up to these standards, the RAISE Act empowers New York’s attorney general to bring civil penalties of up to $30 million. The RAISE Act aims to narrowly regulate the world’s largest companies — whether they’re based in California (like OpenAI and Google) or China (like DeepSeek and Alibaba). The bill’s transparency requirements apply to companies whose AI models were trained using more than $100 million in computing resources (seemingly, more than any AI model available today), and are being made available to New York residents.


The ZTNA Blind Spot: Why Unmanaged Devices Threaten Your Hybrid Workforce

The risks are well-documented and growing. But many of the traditional approaches to securing these endpoints fall short—adding complexity without truly mitigating the threat. It’s time to rethink how we extend Zero Trust to every user, regardless of who owns the device they use. ... The challenge of unmanaged endpoints is no longer theoretical. In the modern enterprise, consultants, contractors, and partners are integral to getting work done—and they often need immediate access to internal systems and sensitive data. BYOD scenarios are equally common. Executives check dashboards from personal tablets, marketers access cloud apps from home desktops, and employees work on personal laptops while traveling. In each case, IT has little to no visibility or control over the device’s security posture. ... To truly solve the BYOD and contractor problem, enterprises need a comprehensive ZTNA solution that applies to all users and all devices under a single policy framework. The foundation of this approach is simple: trust no one, verify everything, and enforce policies consistently. ... The shift to hybrid work is permanent. That means BYOD and third-party access are not exceptions—they’re standard operating procedures. It’s time for enterprises to stop treating unmanaged devices as an edge case and start securing them as part of a unified Zero Trust strategy.


3 reasons I'll never trust an SSD for long-term data storage

SSDs rely on NAND flash memory, which inevitably wears out after a finite number of write cycles. Every time you write data to an SSD and erase it, you use up one write cycle. Most manufacturers specify the write endurance for their SSDs, which is usually in terabytes written (TBW). ... When I first started using SSDs, I was under the impression that I could just leave them on the shelf for a few years and access all my data whenever I wanted. But unfortunately, that's not how NAND flash memory works. The data stored in each cell leaks over time; the electric charge used to represent a bit can degrade, and if you don't power on the drive periodically to refresh the NAND cells, those bits can become unreadable. This is called charge leakage, and it gets worse with SSDs using lower-end NAND flash memory. Most consumer SSDs these days use TLC and QLC NAND flash memory, which aren't as great as SLC and MLC SSDs at data retention. ... A sudden power loss during critical write operations can corrupt data blocks and make recovery impossible. That's because SSDs often utilize complex caching mechanisms and intricate wear-leveling algorithms to optimize performance. During an abrupt shutdown, these processes might fail to complete correctly, leaving your data corrupted.
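
To put the TBW figure in perspective, here is a minimal Python sketch of the endurance arithmetic (the rating, daily write volume, and write-amplification factor are illustrative assumptions, not figures from the article):

```python
def years_until_tbw_exhausted(tbw_rating_tb: float,
                              daily_writes_gb: float,
                              write_amplification: float = 1.5) -> float:
    """Estimate drive lifetime from its rated endurance.

    tbw_rating_tb       -- manufacturer's terabytes-written rating
    daily_writes_gb     -- host writes per day, in GB
    write_amplification -- extra NAND writes caused by the controller
    """
    nand_writes_per_day_tb = (daily_writes_gb / 1000) * write_amplification
    return tbw_rating_tb / nand_writes_per_day_tb / 365


# Example: a hypothetical 600 TBW drive written at 50 GB per day.
print(f"{years_until_tbw_exhausted(600, 50):.1f} years")  # ~21.9 years
```

Under those assumptions write endurance is rarely the first thing to fail for a typical desktop workload; it is the retention and power-loss issues described above that bite first.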


Beyond the Paycheck: Where IT Operations Careers Outshine Software Development

On the whole, working in IT tends to be more dynamic than working as a software developer. As a developer, you're likely to spend the bulk of your time writing code using a specific set of programming languages and frameworks. Your day-to-day, month-to-month, and year-to-year work will center on churning out never-ending streams of application updates. The tasks that fall to IT engineers, in contrast, tend to be more varied. You might troubleshoot a server failure one day and set up a RAID array the next. You might spend part of your day interfacing with end users, then go into strategic planning meetings with executives. ... IT engineers tend to be less abstracted from end users, with whom they often interact on a daily basis. In contrast, software engineers are more likely to spend their time writing code while rarely, if ever, watching someone use the software they produce. As a result, it can be easier in a certain respect for someone working in IT as compared to software development to feel a sense of satisfaction.  ... While software engineers can move into adjacent types of roles, like site reliability engineering, IT operations engineers arguably have a more diverse set of easily pursuable options if they want to move up and out of IT operations work.


Europe is caught in a cloud dilemma

The European Union is worried about its reliance on the leading US-based cloud providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These large-scale players hold an unrivaled influence over the cloud sector and manage vital infrastructure essential for driving economies and fostering innovation. European policymakers have raised concerns that their heavy dependence exposes the continent to vulnerabilities, constraints, and geopolitical uncertainties. ... Europe currently lacks cloud service providers that can challenge those global Goliaths. Despite efforts like Gaia-X that aim to change this, it’s not clear if Europe can catch up anytime soon. It will be a prohibitively expensive undertaking to build large-scale cloud infrastructure in Europe that is both cost-efficient and competitive. In a nutshell, Europe’s hope to adopt top-notch cloud technology without the countries that currently dominate the industry is impractical, considering current market conditions. ... Often companies view cloud integration as merely a checklist or set of choices to finalize their cloud migration. This frequently results in tangled networks and isolated silos. Instead, businesses should overhaul their existing cloud environment with a comprehensive strategy that considers both immediate needs and future goals as well as the broader geopolitical landscape.


Applying Observability to Leadership to Understand and Explain your Way of Working

Leadership observability means observing yourself as you lead. Alex Schladebeck shared at the OOP conference how narrating thoughts, using mind maps, asking questions, and identifying patterns helped her as a leader to explain decisions, check bias, support others, and understand her actions and challenges. Employees and other leaders around you want to understand what leads to your decisions, Schladebeck said. ... Heuristics give us our "gut feeling". And that’s useful, but it’s better if we’re able to take a step back and get explicit about how we got to that gut feeling, Schladebeck mentioned. If we categorise and label things and explain what experiences lead us to our gut feeling, then we have the option of checking our bias and assumptions, and can help others to develop the thinking structures to make their own decisions, she explained. ... Schladebeck recommends that leaders narrate their thoughts to reflect on, and describe their own work to the ones they are leading. They can do this by asking themselves questions like, "Why do I think that?", "What assumptions am I basing this on?", "What context factors am I taking into account?" Look for patterns, categories, and specific activities, she advised, and then you can try to explain these things to others around you. To visualize her thinking as a leader, Schladebeck uses mind maps.


Data Mesh: The Solution to Financial Services' Data Management Nightmare

Data mesh is not a technology or architecture, but an organizational and operational paradigm designed to scale data in complex enterprises. It promotes domain-oriented data ownership, where teams manage their data as a product, using a self-service infrastructure and following federated governance principles. In a data mesh, any team or department within an organization becomes accountable for the quality, discoverability, and accessibility of the data products they own. The concept emerged around five years ago as a response to the bottlenecks and limitations created by centralized data engineering teams acting as data gatekeepers. ... In a data mesh model, data ownership and stewardship are assigned to the business domains that generate and use the data. This means that teams such as credit risk, compliance, underwriting, or investment analysis can take responsibility for designing and maintaining the data products that meet their specific needs. ... Data mesh encourages clear definitions of data products and ownership, which helps reduce the bottlenecks often caused by fragmented data ownership or overloaded central teams. When combined with modern data technologies — such as cloud-native platforms, data virtualization layers, and orchestration tools — data mesh can help organizations connect data across legacy mainframes, on-premises databases, and cloud systems.
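
As a rough illustration of the "data as a product" idea, here is a hypothetical sketch of how a domain team might declare ownership, a published schema, and quality expectations (all names and fields are invented for illustration):

```python
from dataclasses import dataclass, field


@dataclass
class DataProduct:
    """A minimal 'data product' contract owned by a business domain."""
    name: str
    owning_domain: str          # e.g. credit risk, compliance, underwriting
    steward_email: str
    schema: dict                # column name -> type: the published interface
    freshness_sla_hours: int    # how stale consumers may see the data
    quality_checks: list = field(default_factory=list)


# A credit-risk team publishing exposures as a discoverable product.
exposures = DataProduct(
    name="counterparty_exposures",
    owning_domain="credit_risk",
    steward_email="credit-risk-data@example.com",
    schema={"counterparty_id": "string", "exposure_usd": "decimal"},
    freshness_sla_hours=24,
    quality_checks=["no_null_counterparty_id", "exposure_usd >= 0"],
)
```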


Accelerating Developer Velocity With Effective Platform Teams

Many platform engineering initiatives fail, not because of poor technology choices, but because they miss the most critical component: genuine collaboration. The most powerful internal developer platforms aren’t just technology stacks; they’re relationship accelerators that fundamentally transform the way teams work together. Effective platform teams have a deep understanding of what a day in the life of a developer, security engineer or operations specialist looks like. They know the pressures these teams face, their performance metrics and the challenges that frustrate them most. ... The core mission of platform teams is to enable faster software delivery by eliminating complexity and cognitive load. Put simply: Make the right way the easiest way. Developer experience extends beyond function; it’s about creating delight and demonstrating that the platform team cares about the human experience, not just technical capabilities. The best platforms craft natural, intuitive interfaces that anticipate questions and incorporate error messages that guide, rather than confuse. Platform engineering excellence comes from making complex things appear simple. It’s not about building the most sophisticated system; it’s about reducing complexity so developers can focus on creating business value.


AI agents will be ambient, but not autonomous - what that means for us

Currently, the AI assistance that users receive is deterministic; that is, humans are expected to enter a command in order to receive an intended outcome. With ambient agents, there is a shift in how humans fundamentally interact with AI to get the desired outcomes they need; the AI assistants rely instead on environmental cues. "Ambient agents we define as agents that are triggered by events, run in the background, but they are not completely autonomous," said Chase. He explains that ambient agents benefit employees by allowing them to expand their magnitude and scale themselves in ways they could not previously do. ... When talking about these types of ambient agents with advanced capabilities, it's easy to become concerned about trusting AI with your data and with executing actions of high importance. To tackle that concern, it is worth reiterating Chase's definition of ambient agents -- they're "not completely autonomous." ... "It's not deterministic," added Jokel. "It doesn't always give you the same outcome, and we can build scaffolding, but ultimately you still want a human being sitting at the keyboard checking to make sure that this decision is the right thing to do before it gets executed, and I think we'll be in that state for a relatively long period of time."
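
A minimal sketch of what "triggered by events, run in the background, but not completely autonomous" might look like in code; the event source and approval step are hypothetical, not taken from the article or any particular framework:

```python
import queue

events: "queue.Queue[dict]" = queue.Queue()   # fed by webhooks, schedulers, etc.


def propose_action(event: dict) -> dict:
    """Ambient step: the agent drafts an action in the background."""
    return {"action": "draft_reply", "target": event["id"],
            "body": f"Suggested response to: {event['summary']}"}


def human_approves(proposal: dict) -> bool:
    """Autonomy limit: a person reviews before anything executes."""
    answer = input(f"Execute {proposal['action']} on {proposal['target']}? [y/N] ")
    return answer.strip().lower() == "y"


def run_ambient_agent() -> None:
    while True:
        event = events.get()             # triggered by events, not by a prompt
        proposal = propose_action(event)
        if human_approves(proposal):     # the "not completely autonomous" part
            print("executing:", proposal)
        else:
            print("discarded:", proposal)
```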





Daily Tech Digest - June 06, 2025


Quote for the day:

"Next generation leaders are those who would rather challenge what needs to change and pay the price than remain silent and die on the inside." -- Andy Stanley


The intersection of identity security and data privacy laws for a safe digital space

The integration of identity security with data privacy has become essential for corporations, governing bodies, and policymakers. Compliance regulations are set by frameworks such as the Digital Personal Data Protection (DPDP) Bill and the CERT-In directives – but encryption and access control alone are no longer enough. AI-driven identity security tools flag access combinations before they become gateways to fraud, monitor behavior anomalies in real-time, and offer deep, contextual visibility into both human and machine identities. All these factors combined bring about compliance-ready, trust-building, resilient security: proactive security that is self-adjusting, overcoming various challenges encountered today. By aligning intelligent identity security tools with privacy regulations, organisations gain more than just protection—they earn credibility. ... The DPDP Act tracks closely to global benchmarks such as GDPR and data protection regulations in Singapore and Australia, which mandate organisations to implement appropriate security measures to protect personal data and amp up response to data breaches. They also assert that organisations that embrace and prioritise data privacy and identity security stand to gain the optimum level of reduced risk and enhanced trust from customers, partners and regulators.


Who needs real things when everything can be a hologram?

Meta founder and CEO Mark Zuckerberg said recently on Theo Von’s “This Past Weekend” podcast that everything is shifting to holograms. A hologram is a three-dimensional image that represents an object in a way that allows it to be viewed from different angles, creating the illusion of depth. Zuckerberg predicts that most of our physical objects will become obsolete and replaced by holographic versions seen through augmented reality (AR) glasses. The conversation floated the idea that books, board games, ping-pong tables, and even smartphones could all be virtualized, replacing the physical, real-world versions. Zuckerberg also expects that somewhere between one and two billion people could replace their smartphones with AR glasses within four years. One potential problem with that prediction: the public has to want to replace physical objects with holographic versions. So far, Apple’s experience with Apple Vision Pro does not imply that the public is clamoring for holographic replacements. ... I have no doubt that holograms will increasingly become ubiquitous in our lives. But I doubt that a majority will ever prefer a holographic virtual book over a physical book or even a physical e-book reader. The same goes for other objects in our lives. I also suspect both Zuckerberg’s motives and his predictive powers.


How AI Is Rewriting the CIO’s Workforce Strategy

With the mystique fading, enterprises are replacing large prompt-engineering teams with AI platform engineers, MLOps architects, and cross-trained analysts. A prompt engineer in 2023 often becomes a context architect by 2025; data scientists evolve into AI integrators; business-intelligence analysts transition into AI interaction designers; and DevOps engineers step up as MLOps platform leads. The cultural shift matters as much as the job titles. AI work is no longer about one-off magic, it is about building reliable infrastructure. CIOs generally face three choices. One is to spend on systems that make prompts reproducible and maintainable, such as RAG pipelines or proprietary context platforms. Another is to cut excessive spending on niche roles now being absorbed by automation. The third is to reskill internal talent, transforming today’s prompt writers into tomorrow’s systems thinkers who understand context flows, memory management, and AI security. A skilled prompt engineer today can become an exceptional context architect tomorrow, provided the organization invests in training. ... Prompt engineering isn’t dead, but its peak as a standalone role may already be behind us. The smartest organizations are shifting to systems that abstract prompt complexity and scale their AI capability without becoming dependent on a single human’s creativity.


Biometric privacy on trial: The constitutional stakes in United States v. Brown

The divergence between the two federal circuit courts has created a classic “circuit split,” a situation that almost inevitably calls for resolution by the U.S. Supreme Court. Legal scholars point out that this split could not be more consequential, as it directly affects how courts across the country treat compelled access to devices that contain vast troves of personal, private, and potentially incriminating information. What’s at stake in the Brown decision goes far beyond criminal law. In the digital age, smartphones are extensions of the self, containing everything from personal messages and photos to financial records, location data, and even health information. Unlocking one’s device may reveal more than a house search could have in the 18th century, and is the very kind of search the Bill of Rights was designed to restrict. If the D.C. Circuit’s reasoning prevails, biometric security methods like Apple’s Face ID, Samsung’s iris scans, and various fingerprint unlock systems could receive constitutional protection when used to lock private data. That, in turn, could significantly limit law enforcement’s ability to compel access to devices without a warrant or consent. Moreover, such a ruling would align biometric authentication with established protections for passcodes.


GenAI controls and ZTNA architecture set SSE vendors apart

“[SSE] provides a range of security capabilities, including adaptive access based on identity and context, malware protection, data security, and threat prevention, as well as the associated analytics and visibility,” Gartner writes. “It enables more direct connectivity for hybrid users by reducing latency and providing the potential for improved user experience.” Must-haves include advanced data protection capabilities – such as unified data leak protection (DLP), content-aware encryption, and label-based controls – that enable enterprises to enforce consistent data security policies across web, cloud, and private applications. Securing Software-as-a-Service (SaaS) applications is another important area, according to Gartner. SaaS security posture management (SSPM) and deep API integrations provide real-time visibility into SaaS app usage, configurations, and user behaviors, which Gartner says can help security teams remediate risks before they become incidents. Gartner defines SSPM as a category of tools that continuously assess and manage the security posture of SaaS apps. ... Other necessary capabilities for a complete SSE solution include digital experience monitoring (DEM) and AI-driven automation and coaching, according to Gartner. 


5 Risk Management Lessons OT Cybersecurity Leaders Can’t Afford to Ignore

Weak or shared passwords, outdated software, and misconfigured networks are consistently leveraged by malicious actors. Seemingly minor oversights can create significant gaps in an organization’s defenses, allowing attackers to gain unauthorized access and cause havoc. When the basics break down, particularly in converged IT/OT environments where attackers only need one foothold, consequences escalate fast. ... One common misconception in critical infrastructure is that OT systems are safe unless directly targeted. However, the reality is far more nuanced. Many incidents impacting OT environments originate as seemingly innocuous IT intrusions. Attackers enter through an overlooked endpoint or compromised credential in the enterprise network and then move laterally into the OT environment through weak segmentation or misconfigured gateways. This pattern has repeatedly emerged in the pipeline sector. ... Time and again, post-mortems reveal the same pattern: organizations lacking in tested procedures, clear roles, or real-world readiness. A proactive posture begins with rigorous risk assessments, threat modeling, and vulnerability scanning—not once, but as a cycle that evolves with the threat landscape. This plan should outline clear procedures for detecting, containing, and recovering from cyber incidents.


You Can Build Authentication In-House, But Should You?

Auth isn’t a static feature. It evolves — layer by layer — as your product grows, your user base diversifies, and enterprise customers introduce new requirements. Over time, the simple system you started with is forced to stretch well beyond its original architecture. Every engineering team that builds auth internally will encounter key inflection points — moments when the complexity, security risk, and maintenance burden begin to outweigh the benefits of control. ... Once you’re selling into larger businesses, SSO becomes a hard requirement for enterprises. Customers want to integrate with their own identity providers like Okta, Microsoft Entra, or Google Workspace using protocols like SAML or OIDC. Implementing these protocols is non-trivial, especially when each customer has their own quirks and expectations around onboarding, metadata exchange, and user mapping. ... Once SSO is in place, the following enterprise requirement is often SCIM (System for Cross-domain Identity Management). SCIM, also known as Directory Sync, enables organizations to provision automatically and deprovision user accounts through their identity provider. Supporting it properly means syncing state between your system and theirs and handling partial failures gracefully. ... The newest wave of complexity in modern authentication comes from AI agents and LLM-powered applications. 
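
As a sketch of what SCIM support involves on the receiving side, here is a minimal, hypothetical Flask handler for provisioning and deactivating users (real SCIM 2.0 implementations handle many more attributes, filtering, authentication, and error cases):

```python
# Minimal sketch of a SCIM 2.0 /Users provisioning endpoint using Flask.
# Field names follow RFC 7643; storage here is just an in-memory dict.
from flask import Flask, jsonify, request

app = Flask(__name__)
users: dict[str, dict] = {}


@app.post("/scim/v2/Users")
def provision_user():
    payload = request.get_json()
    user_id = payload["userName"]            # mapped from the IdP directory
    users[user_id] = {
        "userName": user_id,
        "active": payload.get("active", True),
        "emails": payload.get("emails", []),
    }
    return jsonify(users[user_id]), 201


@app.patch("/scim/v2/Users/<user_id>")
def deprovision_user(user_id):
    # IdPs typically deactivate rather than delete a user on offboarding.
    if user_id in users:
        users[user_id]["active"] = False
        return jsonify(users[user_id]), 200
    return jsonify({"detail": "User not found"}), 404
```

Keeping this state in sync with the identity provider, and handling partial failures gracefully, is where most of the real work lies.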


Developer Joy: A Better Way to Boost Developer Productivity

Play isn’t just fluff; it’s a tool. Whether it’s trying something new in a codebase, hacking together a prototype, or taking a break to let the brain wander, joy helps developers learn faster, solve problems more creatively, and stay engaged. ... Aim to reduce friction and toil, the little frustrations that break momentum and make work feel like a slog. Long build and test times are common culprits. At Gradle, the team is particularly interested in improving the reliability of tests by giving developers the right tools to understand intermittent failures. ... When we’re stuck on a problem, we’ll often bang our head against the code until midnight, without getting anywhere. Then in the morning, suddenly it takes five minutes for the solution to click into place. A good night’s sleep is the best debugging tool, but why? What happens? This is the default mode network at work. The default mode network is a set of connections in your brain that activates when you’re truly idle. This network is responsible for many vital brain functions, including creativity and complex problem-solving. Instead of filling every spare moment with busywork, take proper breaks. Go for a walk. Knit. Garden. "Dead time" in these examples isn't slacking, it’s deep problem-solving in disguise.


Get out of the audit committee: Why CISOs need dedicated board time

The problem is the limited time allocated to CISOs in audit committee meetings is not sufficient for comprehensive cybersecurity discussions. Increasingly, more time is needed for conversations around managing the complex risk landscape. In previous CISO roles, Gerchow had a similar cadence, with quarterly time with the security committee and quarterly time with the board. He also had closed door sessions with only board members. “Anyone who’s an employee of the company, even the CEO, has to drop off the call or leave the room, so it’s just you with the board or the director of the board,” he tells CSO. He found these particularly important for enabling frank conversations, which might centre on budget, roadblocks to new security implementations or whether he and his team are getting enough time to implement security programs. “They may ask: ‘How are things really going? Are you getting the support you need?’ It’s a transparent conversation without the other executives of the company being present.”


Mind the Gap: AI-Driven Data and Analytics Disruption

The Holy Grail of metadata collection is extracting meaning from program code: data structures and entities, data elements, functionality, and lineage. For me, this is one of the most potentially interesting and impactful applications of AI to information management. I’ve tried it, and it works. I loaded an old C program that had no comments but reasonably descriptive variable names into ChatGPT, and it figured out what the program was doing, the purpose of each function, and gave a description for each variable. Eventually this capability will be used like other code analysis tools currently used by development teams as part of the CI/CD pipeline. Run one set of tools to look for code defects. Run another to extract and curate metadata. Someone will still have to review the results, but this gets us a long way there. ... Large language models can be applied in analytics a couple different ways. The first is to generate the answer solely from the LLM. Start by ingesting your corporate information into the LLM as context. Then, ask it a question directly and it will generate an answer. Hopefully the correct answer. But would you trust the answer? Associative memories are not the most reliable for database-style lookups. Imagine ingesting all of the company’s transactions then asking for the total net revenue for a particular customer. Why would you do that? Just use a database. 
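
To make the closing point concrete, an aggregate like net revenue per customer is a single deterministic SQL query rather than a probabilistic recall from an associative memory. A minimal sketch with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (customer TEXT, net_revenue REAL)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [("acme", 1200.0), ("acme", -150.0), ("globex", 900.0)],
)

# Deterministic, auditable answer -- no LLM recall involved.
total = conn.execute(
    "SELECT SUM(net_revenue) FROM transactions WHERE customer = ?", ("acme",)
).fetchone()[0]
print(total)  # 1050.0
```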

Daily Tech Digest - May 10, 2025


Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.



Building blocks – what’s required for my business to be SECURE?

Zero Trust Architecture involves a set of rules that will ensure that you will not let anyone in without proper validation. You will assume there is a breach. You will reduce privileges to their minimum and activate them only as needed, and you will make sure that devices connecting to your data are protected and monitored. Enclave is all about aligning your data’s sensitivity with your cybersecurity requirements. For example, to download a public document, no authentication is required, but to access your CRM, containing all your customers’ data, you will require a username, password, an extra factor of authentication, and to be in the office. You will not be able to download the data. Two different sensitivities, two experiences. ... The leadership team is the compass for the rest of the company – their north star. To make the right decision during a crisis, you must be prepared to face it. And how do you make sure that you’re not affected by all this adrenaline and stress that is caused by such an event? Practice. I am not saying that you must restore all your company’s backups every weekend. I am saying that once a month, the company executives should run through the plan. ... Most plans that were designed and rehearsed five years ago are now full of holes.
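
A toy sketch of the "two sensitivities, two experiences" idea described above (the policy rules and resource names are illustrative, not a real product configuration):

```python
# Illustrative policy: access requirements scale with the data's sensitivity.
POLICIES = {
    "public_document": {"mfa": False, "office_network": False, "download": True},
    "crm_customer_data": {"mfa": True, "office_network": True, "download": False},
}


def access_decision(resource: str, has_mfa: bool, on_office_network: bool) -> dict:
    policy = POLICIES[resource]
    allowed = (has_mfa or not policy["mfa"]) and \
              (on_office_network or not policy["office_network"])
    return {"allowed": allowed, "download_permitted": allowed and policy["download"]}


print(access_decision("public_document", has_mfa=False, on_office_network=False))
# {'allowed': True, 'download_permitted': True}
print(access_decision("crm_customer_data", has_mfa=True, on_office_network=False))
# {'allowed': False, 'download_permitted': False}
```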


Beyond Culture: Addressing Common Security Frustrations

A majority of security respondents (58%) said they have difficulty getting development to prioritize remediation of vulnerabilities, and 52% reported that red tape often slows their efforts to quickly fix vulnerabilities. In addition, security respondents pointed to several specific frustrations related to their jobs, including difficulty understanding security findings, excessive false positives and testing happening late in the software development process. ... If an organization sees many false positives, that could be a sign that they haven’t done all they can to ensure their security findings are high fidelity. Organizations should narrow the focus of their security efforts to what matters. That means traditional static application security testing (SAST) solutions are likely insufficient. SAST is a powerful tool, but it loses much of its value if the results are unmanageable or lack appropriate context. ... Although AI promises to help simplify software development processes, many organizations still have a long road ahead. In fact, respondents who are using AI were significantly more likely than those not using AI to want to consolidate their toolchain, suggesting that the proliferation of different point solutions running different AI models could be adding complexity, not taking it away.


Significant Gap Exists in UK Cyber Resilience Efforts

A persistent lack of skilled cybersecurity professionals in the civil service is one reason for the persistent gap in resilience, parliamentarians wrote. "Government has been unwilling to pay the salaries necessary to hire the experienced and skilled people it desperately needs to manage its cybersecurity effectively." Government figures show the workforce has grown and there are plans to recruit more experts - but a third of cybersecurity roles are either vacant "or filled by expensive contractors," the report states. "Experience suggests government will need to be realistic about how many of the best people it can recruit and retain." The report also faults government departments for not taking sufficient ownership over cybersecurity. The prime minister's office for years relied on departments to perform a cybersecurity self-assessment, until in 2023 when it launched GovAssure, a program to bring in independent assessors. GovAssure turned the self-assessments on their head, finding that the departments that ranked themselves the highest through self-assessment were among the less secure. Continued reliance on legacy systems have figured heavily in recent critiques of British government IT, and it does in the parliamentary report, as well. "It is unacceptable that the center of government does not know how many legacy IT systems exist in government and therefore cannot manage the associated cyber risks."


How CIOs Can Boost AI Returns With Smart Partnerships

CIOs face an overwhelming array of possibilities, making prioritization critical. The CIO Playbook 2025 helps by benchmarking priorities across markets and disciplines. Despite vast datasets, data challenges persist as only a small, relevant portion is usable after cleansing. Generative AI helps uncover correlations humans might miss, but its outputs require rigorous validation for practical use. Static budgets, growing demands and a shortage of skilled talent further complicate adoption. Unlike traditional IT, AI affects sales, marketing and customer service, necessitating cross-departmental collaboration. For example, Lenovo's AI unifies customer service channels such as email and WhatsApp, creating seamless interactions. ... First, go slow to go fast. Spend days or months - not years - exploring innovations through POCs. A customer who builds his or her own LLM faces pitfalls; using existing solutions is often smarter. Second, prioritize cross-collaboration, both internally across departments and externally with the ecosystem. Even Lenovo, operating in 180 markets, relies on partnerships to address AI's layers - the cloud, models, data, infrastructure and services. Third, target high-ROI functions such as customer service, where CIOs expect a 3.6-fold return, to build boardroom support for broader adoption.


How to Stop Increasingly Dangerous AI-Generated Phishing Scams

With so many avenues of attack being used by phishing scammers, you need constant vigilance. AI-powered detection platforms can simultaneously analyze message content, links, and user behavior patterns. Combined with sophisticated pattern recognition and anomaly identification techniques, these systems can spot phishing attempts that would bypass traditional signature-based approaches. ... Security awareness programs have progressed from basic modules to dynamic, AI-driven phishing simulations reflecting real-world scenarios. These simulations adapt to participant responses, providing customized feedback and improving overall effectiveness. Exposing team members to various sophisticated phishing techniques in controlled environments better prepares them for the unpredictable nature of AI-powered attacks. AI-enhanced incident response represents another promising development. AI systems can quickly determine an attack's scope and impact by automating phishing incident analysis, allowing security teams to respond more efficiently and effectively. This automation not only reduces response time but also helps prevent attacks from spreading by rapidly isolating compromised systems. 
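
As a simplified illustration of combining content, link, and behavior signals instead of relying on a single signature, here is a toy scoring sketch (the weights, thresholds, and fields are invented):

```python
def phishing_risk_score(message: dict) -> float:
    """Combine content, link, and behaviour signals into one score (0..1)."""
    score = 0.0
    if message["sender_domain"] not in message["previously_seen_domains"]:
        score += 0.3                       # unfamiliar sender
    if any(url.startswith("http://") for url in message["urls"]):
        score += 0.2                       # unencrypted link
    if message["urgency_phrases"] > 2:
        score += 0.3                       # "act now", "account suspended", ...
    if message["requests_credentials"]:
        score += 0.2
    return min(score, 1.0)


sample = {
    "sender_domain": "example-payroll.top",
    "previously_seen_domains": {"example.com"},
    "urls": ["http://example-payroll.top/login"],
    "urgency_phrases": 3,
    "requests_credentials": True,
}
print(phishing_risk_score(sample))  # 1.0 -> quarantine and alert
```

Production systems learn these weights from data rather than hard-coding them, but the principle of scoring several weak signals together is the same.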


Immutable Secrets Management: A Zero-Trust Approach to Sensitive Data in Containers

We address the critical vulnerabilities inherent in traditional secrets management practices, which often rely on mutable secrets and implicit trust. Our solution, grounded in the principles of Zero-Trust security, immutability, and DevSecOps, ensures that secrets are inextricably linked to container images, minimizing the risk of exposure and unauthorized access. We introduce ChaosSecOps, a novel concept that combines Chaos Engineering with DevSecOps, specifically focusing on proactively testing and improving the resilience of secrets management systems. Through a detailed, real-world implementation scenario using AWS services and common DevOps tools, we demonstrate the practical application and tangible benefits of this approach. The e-commerce platform case study showcases how immutable secrets management leads to improved security posture, enhanced compliance, faster time-to-market, reduced downtime, and increased developer productivity. Key metrics demonstrate a significant reduction in secrets-related incidents and faster deployment times. The solution directly addresses all criteria outlined for the Global Tech Awards in the DevOps Technology category, highlighting innovation, collaboration, scalability, continuous improvement, automation, cultural transformation, measurable outcomes, technical excellence, and community contribution.


The Network Impact of Cloud Security and Operations

Network security and monitoring also change. With cloud-based networks, the network staff no longer has all its management software under its direct control. It now must work with its various cloud providers on security. In this environment, some small company network staff opt to outsource security and network management to their cloud providers. Larger companies that want more direct control might prefer to upskill their network staff on the different security and configuration toolsets that each cloud provider makes available. ... The move of applications and systems to more cloud services is in part fueled by the growth of citizen IT. This is when end users in departments have mini IT budgets and subscribe to new IT cloud services, of which IT and network groups aren't always aware. This creates potential security vulnerabilities, and it forces more network groups to segment networks into smaller units for greater control. They should also implement zero-trust networks that can immediately detect any IT resource, such as a cloud service, that a user adds, subtracts or changes on the network. ... Network managers are also discovering that they need to rewrite their disaster recovery plans for cloud. The strategies and operations that were developed for the internal network are still relevant. 


Three steps to integrate quantum computing into your data center or HPC facility

Just as QPU hardware has yet to become commoditized, the quantum computing stack remains in development, with relatively little consistency in how machines are accessed and programmed. Savvy buyers will have an informed opinion on how to leverage software abstraction to accomplish their key goals. With the right software abstractions, you can begin to transform quantum processors from fragile, research-grade tools into reliable infrastructure for solving real-world problems. Here are three critical layers of abstraction that make this possible. First, there’s hardware management. Quantum devices need constant tuning to stay in working shape, and achieving that manually takes serious time and expertise. Intelligent autonomy provided by specialist vendors can now handle the heavy lifting – booting, calibrating, and keeping things stable – without someone standing by to babysit the machine. Then there’s workload execution. Running a program on a quantum computer isn’t just plug-and-play. You usually have to translate your high-level algorithm into something that works with the quirks of the specific QPU being used, and address errors along the way. Now, software can take care of that translation and optimization behind the scenes, so users can just focus on building quantum algorithms and workloads that address key research or business needs.


Where Apple falls short for enterprise IT

First, enterprise tools in many ways could be considered a niche area of software. As a result, enterprise functionality doesn’t get the same attention as more mainstream features. This can be especially obvious when Apple tries to bring consumer features into enterprise use cases — like managed Apple Accounts and their intended integration with things like Continuity and iCloud, for example — and things like MDM controls for new features such as Apple Intelligence and low-level enterprise-specific functions like Declarative Device Management. The second reason is obvious: any piece of software that isn’t ready for prime time — and still makes it into a general release — is a potential support ticket when a business user encounters problems. ... Deployment might be where the lack of automation is clearest, but the issue runs through most aspects of Apple device and user onboarding and management. Apple Business Manager doesn’t offer any APIs that vendors or IT departments can tap into to automate routine tasks. This can be anything from redeploying older devices, onboarding new employees, assigning app licenses or managing user groups and privileges. Although Apple Business Manager is a great tool and it functions as a nexus for device management and identity management, it still requires more manual lifting than it should.


Getting Started with Data Quality

Any process to establish or update a DQ program charter must be adaptable. For example, a specific project management team or a local office could start the initial DQ offering. As other teams see the program’s value, they would show initiative. In the meantime, the charter tenets change to meet the situation. So, any DQ charter documentation must have the flexibility to transform into what is currently needed. Companies must keep track of any charter amendments or additions to provide transparency and accountability. Expect that various teams will have overlapping or conflicting needs in a DQ program. These people will need to work together to find a solution. They will need to know the discussion rules to consistently advocate for the DQ they need and express their challenges. Ambiguity will heighten dissent. So, charter discussions and documentation must come from a well-defined methodology. As the white paper notes, clarity, consistency, and alignment sit at the charter’s core. While getting there can seem challenging, an expertly structured charter template can prompt critical information to show the way. ... The best practices documented by the charter stem from clarity, consistency, and alignment. They need to cover the DQ objectives mentioned above and ground DQ discussions.

Daily Tech Digest - February 08, 2025


Quote for the day:

“There is no failure except in no longer trying.” -- Chris Bradford


Google's DMARC Push Pays Off, but Email Security Challenges Remain

Large email senders are not the only groups quickening the pace of DMARC adoption. The latest Payment Card Industry Data Security Standard (PCI DSS) version 4.0 requires DMARC for all organizations that handle credit card information, while the European Union's Digital Operational Resilience Act (DORA) makes DMARC a necessity for its ability to report on and block email impersonation, Red Sift's Costigan says. "Mandatory regulations and legislation often serve as the tipping point for most organizations," he says. "Failures to do reasonable, proactive cybersecurity — of which email security and DMARC is obviously a part — are likely to meet with costly regulatory actions and the prospect of class action lawsuits." Overall, the authentication specification is working as intended, which explains its arguably rapid adoption, says Roger Grimes, a data-driven-defense evangelist at security awareness and training firm KnowBe4. Other cybersecurity standards, such as DNSSEC and IPSEC, have been around longer, but DMARC adoption has outpaced them, he maintains. "DMARC stands alone as the singular success as the most widely implemented cybersecurity standard introduced in the last decade," Grimes says.
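
For readers unfamiliar with the mechanics: a DMARC policy is published as a DNS TXT record at _dmarc.<domain>. A minimal sketch of fetching and parsing one (uses the dnspython package; the domain is a placeholder):

```python
# Requires the dnspython package: pip install dnspython
import dns.resolver


def fetch_dmarc_policy(domain: str) -> dict:
    """Fetch and parse the DMARC TXT record for a domain into tag/value pairs."""
    answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    record = "".join(
        part.decode() for rdata in answers for part in rdata.strings
    )
    # A record looks like: "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
    return dict(
        item.strip().split("=", 1) for item in record.split(";") if "=" in item
    )


policy = fetch_dmarc_policy("example.com")
print(policy.get("p"))   # none / quarantine / reject
```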


Can Your Security Measures Be Turned Against You?

Over-reliance on certain security products might also allow attackers to extend their reach across various organizations. For example, the recent failure of CrowdStrike’s endpoint detection and response (EDR) tool, which caused widespread global outages, highlights the risks associated with depending too heavily on a single security solution. Although this incident wasn’t the result of a cyber attack, it clearly demonstrates the potential issues that can arise from such reliance. For years, the cybersecurity community has been aware of the risks posed by vulnerabilities in security products. A notable example from 2015 involved a critical flaw in FireEye’s email protection system, which allowed attackers to execute arbitrary commands and potentially take full control of the device. More recently, a vulnerability in Proofpoint’s email security service was exploited in a phishing campaign that impersonated major corporations like IBM and Disney. Windows SmartScreen is designed to shield users from malicious software, phishing attacks, and other online threats. Initially launched with Internet Explorer, SmartScreen has been a core part of Windows since version 8. 


Why Zero Trust Will See Alert Volumes Rocket

As the complexity of zero trust environments grows, so does the need for tools to handle the data explosion. Hypergraphs and generative AI are emerging as game-changers, enabling SOC teams to connect disparate events and uncover hidden patterns. Telemetry collected in zero trust environments is a treasure trove for analytics. Every interaction, whether permitted or denied, is logged, providing the raw material for identifying anomalies. The cybersecurity industry has set standards for exchanging and documenting threat intelligence. By leveraging structured frameworks like MITRE ATT&CK, MITRE D3FEND, and OCSF, activities can be enriched with contextual information, enabling better detection and decision-making. Hypergraphs go beyond traditional graphs by representing relationships between multiple events or entities. They can correlate disparate events. For example, a scheduled task combined with denied AnyDesk traffic and browsing to MegaUpload might initially seem unrelated. However, hypergraphs can connect these dots, revealing the signature of a ransomware attack like Akira. By analysing historical patterns, hypergraphs can also predict attack patterns, allowing SOC teams to anticipate the next steps of an attacker and defend proactively.
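
A toy sketch of the hyperedge idea, one edge grouping several individually unremarkable events into a candidate incident, loosely following the Akira example above (the events and matching rule are illustrative):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Event:
    host: str
    kind: str    # e.g. "scheduled_task", "denied_anydesk", "megaupload_browse"


# A hyperedge connects any number of events, not just a pair.
Hyperedge = frozenset


def correlate(events: list[Event]) -> list[Hyperedge]:
    """Group events per host and flag hosts matching a multi-event pattern."""
    per_host: dict[str, set[Event]] = {}
    for e in events:
        per_host.setdefault(e.host, set()).add(e)

    pattern = {"scheduled_task", "denied_anydesk", "megaupload_browse"}
    suspicious = []
    for host, host_events in per_host.items():
        if pattern <= {e.kind for e in host_events}:
            suspicious.append(Hyperedge(host_events))   # one edge, three events
    return suspicious


events = [
    Event("ws-042", "scheduled_task"),
    Event("ws-042", "denied_anydesk"),
    Event("ws-042", "megaupload_browse"),
    Event("ws-107", "scheduled_task"),
]
print(len(correlate(events)))  # 1 -- only ws-042 matches the full pattern
```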


Capable Protection: Enhancing Cloud-Native Security

Much like in a game of chess, anticipating your opponent’s moves and strategizing accordingly is key to security. Understanding the value and potential risks associated with NHIs and Secrets is the first step towards securing your digital environment. Remediation prioritization plays a crucial role in managing NHIs. The identification and classification process of NHIs enables businesses to react promptly and adequately to any potential vulnerabilities. Furthermore, awareness and education are fundamental to minimize human-induced breaches. ... Cybersecurity must adapt. The traditional, human-centric approach to cybersecurity is inadequate. Integrating an NHI management strategy into your cybersecurity plan is therefore a strategic move. Not only does it enhance an organization’s security posture, but it also facilitates regulatory compliance. Coupled with the potential for substantial cost savings, it’s clear that NHI management is an investment with significant returns. For many organizations, the challenge today lies in striking a balance between speed and security. Rapid deployment of applications and digital services is essential for maintaining competitive advantage, yet this can often be at odds with the need for adequate cybersecurity. 


Attackers Exploit Cryptographic Keys for Malware Deployment

Microsoft recommends developers avoid using machine keys copied from public sources and rotate keys regularly to mitigate risks. The company also removed key samples from its documentation and provided a script for security teams to identify and replace publicly disclosed keys in their environments. Microsoft Defender for Endpoint also includes an alert for publicly exposed ASP.NET machine keys, though the alert itself does not indicate an active attack. Organizations running ASP.NET applications, especially those deployed in web farms, are urged to replace fixed machine keys with auto-generated values stored in the system registry. If a web-facing server has been compromised, rotating the machine keys alone may not eliminate persistent threats. Microsoft recommends conducting a full forensic investigation to detect potential backdoors or unauthorized access points. In high-risk cases, security teams should consider reformatting and reinstalling affected systems to prevent further exploitation, the report said. Organizations should also implement best practices such as encrypting sensitive configuration files, following secure DevOps procedures and upgrading applications to ASP.NET 4.8.
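
One check security teams can script themselves is flagging web.config files that carry a fixed <machineKey> instead of auto-generated values. A rough sketch (Microsoft's published script is the authoritative tool; this is only illustrative):

```python
# Rough sketch: find ASP.NET web.config files with a hardcoded <machineKey>.
import pathlib
import xml.etree.ElementTree as ET


def find_static_machine_keys(root_dir: str) -> list[str]:
    flagged = []
    for cfg in pathlib.Path(root_dir).rglob("web.config"):
        try:
            tree = ET.parse(cfg)
        except ET.ParseError:
            continue
        key = tree.find(".//machineKey")
        if key is not None:
            # A fixed validationKey in source is the risky pattern;
            # auto-generated keys live in the registry, not in the file.
            val = key.get("validationKey", "AutoGenerate")
            if not val.startswith("AutoGenerate"):
                flagged.append(str(cfg))
    return flagged


print(find_static_machine_keys(r"C:\inetpub\wwwroot"))
```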


The race to AI in 2025: How businesses can harness connectivity to pick up pace

When it comes to optimizing cloud workloads and migrating to available data centers, connectivity is the “make or break” technology. This is why Internet Exchanges (IXs) – physical platforms where multiple networks interconnect to exchange traffic directly with one another via peering – have become indispensable. An IX allows businesses to bypass the public Internet and find the shortest and fastest network pathways for their data, dramatically improving performance and reducing latency for all participants. Importantly, smart use of an IX facility will enable businesses to connect seamlessly to data centers outside of their “home” region, removing geography as a barrier and easing the burden on data center hubs. This form of connectivity is becoming increasingly popular, with the number of IXs in the US surging by more than 350 percent in the past decade. The use of IXs itself is nothing new, but what is relatively new is the neutral model they now employ. A neutral IX isn’t tied to a specific carrier or data center, which means businesses have more connectivity options open to them, increasing redundancy and enhancing resilience. Our own research in 2024 revealed that more than 80 percent of IXs in the US are now data center and carrier-neutral, making it the dominant interconnection model.


The hidden threat of neglected cloud infrastructure

Left unattended for over a decade, the bucket could have been reregistered by malicious actors to deliver malware or launch devastating supply chain attacks. Fortunately, researchers notified CISA, which promptly secured the vulnerable resource. The incident illustrates how even organizations dedicated to cybersecurity can fall prey to the dangers of neglected digital infrastructure. This story is not an anomaly. It indicates a systemic issue that spans industries, governments, and corporations. ... Entities attempting to communicate with these abandoned assets include government organizations (such as NASA and state agencies in the United States), military networks, Fortune 100 companies, major banks, and universities. The fact that these large organizations were still relying on mismanaged or forgotten resources is a testament to the pervasive nature of this oversight. The researchers emphasized that this issue isn’t specific to AWS, the organizations responsible for these resources, or even a single industry. It reflects a broader systemic failure to manage digital assets effectively in the cloud computing age. The researchers noted the ease of acquiring internet infrastructure—an S3 bucket, a domain name, or an IP address—and a corresponding failure to institute strong governance and life-cycle management for these resources.
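
A rough sketch of one piece of that life-cycle management: checking whether bucket names still referenced in old code or configs resolve to a bucket someone owns (the bucket names are placeholders; an HTTP 404 means the name is unclaimed and could be re-registered by anyone):

```python
# Rough sketch: flag referenced S3 bucket names that no longer exist.
# Requires the requests package: pip install requests
import requests

referenced_buckets = ["my-team-old-installer-bucket", "legacy-build-artifacts"]

for name in referenced_buckets:
    resp = requests.head(f"https://{name}.s3.amazonaws.com", timeout=10)
    if resp.status_code == 404:
        # NoSuchBucket: the name is free for anyone to re-register and
        # then use to serve malicious content to clients that still trust it.
        print(f"DANGLING: {name}")
    else:
        # 403 (owned by someone, access denied) or 200 (accessible).
        print(f"exists:   {name} (HTTP {resp.status_code})")
```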


DevOps Evolution: From Movement to Platform Engineering in the AI Era

After nearly 20 years of DevOps, Grabner sees an opportunity to address historical confusion while preserving core principles. “We want to solve the same problem – reduce friction while improving developer and operational efficiency. We want to automate, monitor, and share.” Platform engineering represents this evolution, enabling organizations to scale DevOps best practices through self-service capabilities. “Platform engineering allows us to scale DevOps best practices in an enterprise organization,” Grabner explains. “What platform engineering does is provide self-services to engineers so they can do everything we wanted DevOps to do for us.” At Dynatrace Perform 2025, the company announced several innovations supporting this evolution. The enhanced Davis AI engine now enables preventive operations, moving beyond reactive monitoring to predict and prevent incidents before they occur. This includes AI-powered generation of artifacts for automated remediation workflows and natural language explanations with contextual recommendations. The evolution is particularly evident in how observability is implemented. “Traditionally, observability was always an afterthought,” Grabner explains. 


Bridging the IT Gap: Preparing for a Networking Workforce Evolution

People coming out of university today are far more likely to be experienced in Amazon Web Services (AWS) and Azure than in Border Gateway Protocol (BGP) and Ethernet virtual private network (EVPN). They have spent more time with Kubernetes than with a router or switch command line. Sure, when pressed into action and supported by senior staff or technical documentation, they can perform. But the industry is notorious for its bespoke solutions, snowflake workflows, and poor documentation. None of this ought to be a surprise. At least part of the allure of the cloud for many is that it carries the illusion of pushing problems to another team. Of course, this is hardly true. No company should abdicate architectural and operational responsibility entirely. But in our industry’s rush to new solutions, there are countless teams for which this was an unspoken objective. Regardless, what happens to companies when the people skilled enough to manage the complexity are no longer on call? Perhaps you’re a pessimist and feel that the next generation of IT pros is somehow less capable than in the past. The NASA engineers who landed a man on the moon may have similar things to say about today’s rocket scientists who rely heavily on tools to do the math for them.


A View on Understanding Non-Human Identities Governance

NHIs inherently require connections to other systems and services to fulfill their purpose. This interconnectivity means every NHI becomes a node in a web of interdependencies. From an NHI governance perspective, this necessitates maintaining an accurate and dynamic inventory of these connections to manage the associated risks. For example, if a single NHI is compromised, what does it connect to, and what would an attacker be able to access to laterally move into? Proper NHI governance must include tools to map and monitor these relationships. While there are many ways to go about this manually, what we actually want is an automated way to tell what is connected to what, what is used for what, and by whom. When thinking in terms of securing our systems, we can leverage another important fact about all NHIs in a secured application to build that map, they all, necessarily, have secrets. ... Essentially, two risks make understanding the scope of a secret critical for enterprise security. First is that misconfigured or over-privileged secrets can inadvertently grant access to sensitive data or critical systems, significantly increasing the attack surface. Imagine accidentally giving write privileges to a system that can access your customer's PII. That is a ticking clock waiting for a threat actor to find and exploit it.
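
A sketch of the "what is connected to what, and used by whom" inventory, built from records of which non-human identity holds which secret (the data and field names are illustrative, not a real tool):

```python
# Illustrative records: which non-human identity holds which secret,
# and which system that secret grants access to.
secret_usage = [
    {"nhi": "ci-pipeline",     "secret": "db-prod-token",  "grants": "customer_db"},
    {"nhi": "ci-pipeline",     "secret": "registry-token", "grants": "image_registry"},
    {"nhi": "billing-service", "secret": "db-prod-token",  "grants": "customer_db"},
]


def blast_radius(compromised_nhi: str) -> set[str]:
    """Systems reachable if one NHI's secrets are stolen."""
    return {u["grants"] for u in secret_usage if u["nhi"] == compromised_nhi}


def who_can_reach(system: str) -> set[str]:
    """NHIs holding a secret for a given system (over-privilege review)."""
    return {u["nhi"] for u in secret_usage if u["grants"] == system}


print(blast_radius("ci-pipeline"))   # {'customer_db', 'image_registry'}
print(who_can_reach("customer_db"))  # {'ci-pipeline', 'billing-service'}
```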


Daily Tech Digest - December 21, 2024

The New Paradigm – The Rise of the Virtual Architect

We’re on the brink of a new paradigm in Enterprise Architecture—one where architects will have unprecedented access to knowledge, insights, and tools through what I call the Virtual Architect. The Virtual Architect isn’t limited to financial services. I’ve seen interest across industries like insurance and telecoms, where clients are eager to deploy such solutions. Why? Because it promises to provide accurate, real-time information, support colleagues, and even generate designs. Yes, you read that right—design generation is on the table. Naturally, this raises a big question: does this mean architects will be replaced? We’ll get to that in a moment. ... But here’s the catch: how do we ensure the designs generated by a Virtual Architect are accurate? The old saying applies—it’s only as good as the quality of the data and designs you feed in. That is where ongoing training and validation from architects remain crucial. So, will the Virtual Architect replace human architects? I don’t believe so, not in the near future. Designing systems is just one aspect of an architect’s role. Stakeholder engagement, strategic thinking, and soft skills are equally important—and these are areas where AI still falls short. For now, the Virtual Architect is an enhancement, not a replacement. 


IT/OT convergence propels zero-trust security efforts

Companies want flexibility in how end users and business applications access and interact with OT systems. ... Enterprises also want to extract data from OT systems, which requires network connectivity. For example, manufacturers can pull real-time data from their assembly lines so that specialized analytics applications can identify opportunities for efficiency and predict disruptions to production. While converging OT onto IT networks can drive innovation, it exposes OT systems to the threats that proliferate in the digital world. Companies often need new security solutions to protect OT. EMA’s latest research report, “Zero Trust Networking: How Network Teams Support Cybersecurity,” revealed that IT/OT convergence drives 38% of enterprise zero-trust security strategies. ... IT/OT convergence leads enterprises to set different priorities for zero-trust solution requirements. When modernizing secure remote access solutions for zero trust, OT-focused companies have a stronger need for granular policy management capabilities. These companies are more likely to have a secure remote access solution that can cut off network access in response to anomalous behavior or changes in the state of a device. When implementing zero-trust network segmentation, OT-focused companies are more likely to seek a solution with dynamic and adaptive segmentation controls.
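
As a rough illustration of the "cut off access when behavior or device state changes" capability described above, here is a hedged Python sketch; the posture fields, threshold, and enforcement hook are assumptions for illustration, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    patched: bool          # OS/firmware at an approved level
    agent_running: bool    # endpoint agent still reporting in
    anomaly_score: float   # 0.0 (normal) .. 1.0 (highly anomalous)

def session_allowed(posture: DevicePosture, threshold: float = 0.7) -> bool:
    """Re-evaluated on every posture update; any failed check revokes the OT session."""
    if not posture.patched or not posture.agent_running:
        return False
    return posture.anomaly_score < threshold

# A ZTNA broker or jump host would call this continuously and tear down the
# tunnel the moment it returns False.
print(session_allowed(DevicePosture(patched=True, agent_running=True, anomaly_score=0.2)))   # True
print(session_allowed(DevicePosture(patched=True, agent_running=False, anomaly_score=0.1)))  # False
```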


Why Enterprises Still Grapple With Data Governance

“Even in highly regulated industries where the acceptance and understanding of the concept and value of governance more broadly are ingrained into the corporate culture, most data governance programs have progressed very little past an expensive [box-checking] exercise, one that has kept regulatory queries to a minimum but returned very little additional business value on the investment,” says Willis in an email interview. ... Why the disconnect? Data teams don’t feel they can spend time understanding stakeholders or even challenging business stakeholder needs. Though executive support is critical, data governance professionals are not making the most of that support. One often unacknowledged problem is culture. “Unfortunately, in many organizations, the predominant attitude towards governance and risk management is that [they are] a burden of bureaucracy that slows innovation,” says Willis. “Data governance teams too frequently perpetuate that mindset, over-rotating on data controls and processes where the effort to execute is misaligned with the value they release.” One way to begin improving the effectiveness of data governance is to reassess the organization’s objectives and approach.


What Is Next-Generation Data Protection and Why Should Enterprise Tech Buyers Care?

Next-generation data protection was created to combat today’s most sophisticated and dangerous cyberattacks. It expands the purview of what is protected and how it is protected within an enterprise data infrastructure. This new approach also adds preemptive and predictive capabilities that help mitigate the effects of massive cyberattacks. Moreover, next-generation data protection is the last line of defense against the most vicious, unscrupulous cyber criminals who want nothing more than to take down and harm large companies, either for monetary gain or respect amongst fellow criminals. Therefore, understanding and implementing next-generation data protection is vital. ... To make data protection highly effective today for the datasets that seem most critical, it has to be highly integrated and orchestrated. You don’t want a manual process creating a weak spot for your organization. To resolve this issue, one of the breakthrough capabilities of next-generation data protection is automated cyber protection. Automated cyber protection seamlessly integrates cyber storage resilience into a security operations center (SOC) and data center-wide cyber security applications, such as SIEM and SOAR.
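
As a rough sketch of that automated, orchestrated pattern, the following Python example detects an anomalous burst of changed blocks on a volume, records an immutable snapshot, and emits an event for the SOC's SIEM/SOAR tooling to consume; the detection heuristic, snapshot call, and event format are illustrative assumptions rather than any product's behavior.

```python
import json
import time

def looks_like_ransomware(changed_blocks_per_min: int, threshold: int = 50_000) -> bool:
    """Toy heuristic: an abnormal burst of overwritten blocks on a protected volume."""
    return changed_blocks_per_min > threshold

def protect_and_alert(volume: str, changed_blocks_per_min: int) -> None:
    if not looks_like_ransomware(changed_blocks_per_min):
        return
    # Stand-in for a storage-array API call that takes an immutable snapshot.
    snapshot_id = f"{volume}-immutable-{int(time.time())}"
    event = {
        "source": "storage",
        "volume": volume,
        "action": "immutable_snapshot_taken",
        "snapshot": snapshot_id,
        "severity": "high",
    }
    # In a real deployment this payload would be forwarded to the SIEM/SOAR
    # ingestion endpoint; here it is simply printed.
    print(json.dumps(event))

protect_and_alert("vol-finance-01", changed_blocks_per_min=120_000)
```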


Federal Cyber Operations Would Downgrade Under Shutdown

The pending shutdown could trigger major cutbacks to critical technology services across the federal government, including DHS's Science and Technology Directorate, which provides technical expertise to address emerging threats impacting DHS, first responders and private sector organizations. During a lapse in appropriations, just 31 of its staff members would be retained, representing a staggering 94% reduction in its workforce. The shutdown could lead to longer airport lines and furloughs for hundreds of thousands of federal workers. Brian Fox, CTO of software supply chain management firm Sonatype, previously told Information Security Media Group that CISA plays a critical role in safeguarding government infrastructure during periods of political turbulence. "It's no secret that times of uncertainty, change and disruption are prime opportunities for threat actors to increase efforts to infiltrate systems," Fox said. The shutdown is set to begin at 12:01 a.m. on Saturday, December 21, unless lawmakers can pass a short-term spending bill, after the House rejected a compromise package Thursday night following online remarks from President-elect Donald Trump and his billionaire government efficiency advisor, Elon Musk.


Why cybersecurity is critical to energy modernization

Connected infrastructures for renewables, in many cases, are operated by new companies or even residential users. They don’t have a background in managing reliability and, generally, have very limited or no cybersecurity expertise. Despite this, they all oversee internet-connected systems that are digitally controlled and therefore vulnerable to hacking. The cumulative power controlled by many connected parties also poses a risk of blackouts. The concern is especially about the suppliers of consumer equipment, as it is not possible to impose security regulations on consumers. The Cyber Resilience Act tries to address suppliers but is likely not sufficient. ... International collaboration is crucial in addressing the cybersecurity risks posed by interconnected energy grids. By sharing knowledge, harmonizing standards, and coordinating joint incident response efforts, countries can collectively enhance their preparedness and resilience. There are various formal international collaborations, such as ENTSO-E and the DSO Entity SEEG, coordination groups like WG8 in NIS, and partnerships between experts and authorities in groups like NCCS. International exercises led by organizations like ENISA and NATO further support these initiatives.


US Ban on TP-Link Routers More About Politics Than Exploitation Risk

While no researcher has called out a specific backdoor or zero-day vulnerability in TP-Link routers, restricting products from a country that is a political and economic rival is not unreasonable, says Thomas Pace, CEO of extended Internet of Things (IoT) security firm NetRise and a former head of cybersecurity for the US Department of Energy. ... Companies and consumers should do their due diligence, keep their devices up to date with the latest security patches, and consider whether the manufacturer of their critical hardware may have secondary motives, says Phosphorus Cybersecurity's Shankar. "The vast majority of successful attacks on IoT are enabled by preventable issues like static, unchanged default passwords, or unpatched firmware, leaving systems exposed," he says. "For business operators and consumer end-users, the key takeaway is clear: adopting basic security hygiene is a critical defense against both opportunistic and sophisticated attacks. Don’t leave the front door open." For companies worried about the origin of their networking devices or the security of their supply chain, finding a trusted third party to manage the devices is a reasonable option. In reality, though, almost every device should be monitored and not trusted, says NetRise's Pace.


The Next Big Thing: How Generative AI Is Reshaping DevOps in the Cloud

One of the biggest impacts of AI on DevOps is in Continuous Integration and Continuous Delivery (CI/CD) pipelines. These pipelines help automate how code changes are managed and deployed to production environments. Automation in this area makes operations more efficient. However, as codebases grow and get more complex, these pipelines often need manual tuning and adjustments to run smoothly. AI impacts this by making pipelines smarter. It can analyze historical data, like build times, test results, and deployment patterns. By doing this, it can adjust how pipelines are set up to minimize bottlenecks and use resources better. For example, AI can decide which tests to run first. It chooses tests that are more likely to find bugs from code changes. This helps to speed up the process of testing and deploying code. ... Security has always been very important for cloud-native apps and DevOps teams. With Generative AI, we can now move from reactive to proactive when it comes to system vulnerabilities. Instead of just waiting for security issues to appear, AI helps DevOps teams spot and prevent potential risks ahead of time. AI-powered security tools can perform data analysis on a company’s cloud system. 
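
A minimal sketch of the "run the likeliest-to-fail tests first" idea follows, ranking tests by their historical failure rate against the files touched by a change; the data format and scoring rule are simplified assumptions, whereas production systems would use learned models over much richer signals.

```python
from collections import defaultdict

# Hypothetical history of past runs: (test_name, files_involved, failed?)
history = [
    ("test_checkout", {"cart.py", "payment.py"}, True),
    ("test_checkout", {"cart.py"},               False),
    ("test_login",    {"auth.py"},               False),
    ("test_login",    {"auth.py", "session.py"}, True),
]

def failure_rates(history):
    """Per (test, file) failure rate observed in past pipeline runs."""
    runs, fails = defaultdict(int), defaultdict(int)
    for test, files, failed in history:
        for f in files:
            runs[(test, f)] += 1
            fails[(test, f)] += int(failed)
    return {key: fails[key] / runs[key] for key in runs}

def prioritize(tests, changed_files, rates):
    """Order tests so those most likely to fail for this change run first."""
    def score(test):
        return max((rates.get((test, f), 0.0) for f in changed_files), default=0.0)
    return sorted(tests, key=score, reverse=True)

rates = failure_rates(history)
print(prioritize(["test_login", "test_checkout"], {"payment.py"}, rates))
```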


US order is a reminder that cloud platforms aren’t secure out of the box

Affected IT departments are ordered to implement a set of baseline configurations set out by the Secure Cloud Business Applications (SCuBA) project for certain software as a service (SaaS) platforms. So far, the directive notes, the only final configuration baseline set is for Microsoft 365. There is also a baseline configuration for Google Workspace listed on the SCuBA website that isn’t mentioned in this week’s directive. However, the order does say that in the future, CISA may release additional SCuBA Secure Configuration Baselines for other cloud products. When the baselines are issued, they will also fall under the scope of this week’s directive. ... Coincidentally, the CISA directive comes the same week as CSO reported that Amazon has halted its deployment of M365 for a full year, as Microsoft tries to fix a long list of security problems that Amazon identified. A CISA spokesperson said he couldn’t comment on why the directive was issued this week, but Dubrovsky believes it’s “more of a generic warning” to federal departments, and not linked to an event. Asked how private-sector CISOs should secure cloud platforms, Dubrovsky said they should start with cybersecurity basics. That includes implementing tough identity and access management policies, including MFA, and performing network monitoring and alerting for abnormalities, before going into the cloud.
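
The general idea behind a configuration baseline can be sketched in a few lines: compare a tenant's current settings against the required values and flag any drift. The setting names below are illustrative stand-ins, not the actual keys used in the published SCuBA baselines.

```python
# Illustrative baseline; the real SCuBA baselines are published per product (e.g., M365).
baseline = {
    "mfa_required_for_all_users": True,
    "legacy_authentication_enabled": False,
    "external_sharing": "restricted",
}

current = {
    "mfa_required_for_all_users": True,
    "legacy_authentication_enabled": True,   # drift
    "external_sharing": "anyone",            # drift
}

def audit(current, baseline):
    """Return the settings that deviate from the required baseline."""
    return {k: {"expected": v, "actual": current.get(k)}
            for k, v in baseline.items() if current.get(k) != v}

for setting, detail in audit(current, baseline).items():
    print(f"{setting}: expected {detail['expected']}, found {detail['actual']}")
```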


The value of generosity in leadership

For the first time we have five generations in the workforce, which means that needs, priorities, and sources of meaning vary. Generosity becomes much more important because you cannot achieve everything by yourself. You can only do that by empowering others and giving them the tools, opportunities, and trust they need to succeed. And then, hopefully, they can together fulfill the organization’s purpose, objectives, and dreams. ... The opposite of a generous leader is a narcissistic leader, who is focused on themselves. Narcissistic leaders are not as effective as leaders who have higher EQs [emotional quotients], who are more generous and recognize that the team’s performance is a result of something beyond themselves. But for one reason or another, narcissistic leaders continue to rise to the top. ... That link between being generous with yourself and being generous with others is so important. When I’ve seen leaders really unlock a new level of leadership, and generosity in leadership, it comes from first and foremost understanding how to lead themselves, and specifically, how to control the amygdala hijack that can send you below the line. Those are very real physiological tendencies that can create what appears to be a zero-sum context based on winning and losing. 



Quote for the day:

"Small daily imporevement over time lead to stunning results." -- Robin Sherman

Daily Tech Digest - November 28, 2024

Agentic AI: The Next Frontier for Enterprises

Agentic AI represents a significant leap forward. "These systems can perform complex reasoning, integrate with vast enterprise datasets and execute processes autonomously. For instance, a task like merging customer accounts, which traditionally required ticket creation and days of manual effort, can now be completed in seconds with agentic AI," said Arun Kumar Parameswaran ... Salesforce's Agentforce, unveiled at Dreamforce 2024, represents a significant milestone. Built on the company's Atlas reasoning engine and using models such as OpenAI's GPT-4 and Google's Gemini, Agentforce combines advanced AI with Salesforce's extensive ecosystem of customer engagement data. Agentforce marks the "third wave of AI," said Marc Benioff, CEO of Salesforce. He predicts a massive 1 billion AI agents by 2026. Unlike earlier waves, which focused on predictive analytics and conversational bots, this phase emphasizes intelligent agents capable of autonomous decision-making. Salesforce has amassed years of customer engagement data, workflows and metadata, making Agentforce a precision tool that understands and anticipates customer needs.


Get started with bootable containers and image mode for RHEL

Bootable containers, also provided as image mode for Red Hat Enterprise Linux, represent an innovation in merging containerization technology with full operating system deployment. At their core, bootable containers are OCI (Open Container Initiative) container images that contain a complete Linux system, including the kernel and hardware support. This approach has several characteristics, namely: Immutability: The entire system is treated as an immutable unit, reducing configuration drift and enhancing security (other than /etc and /var, all directories are mounted read-only once deployed on a physical or virtual machine). Atomic updates: System updates can be performed as atomic operations, simplifying rollbacks and ensuring system consistency. Standardized tooling: Leverages existing OCI container tools and workflows, reducing the learning curve for teams familiar with containerization, and allows a complete OS environment to be designed using a Containerfile as a blueprint. This is a wonderful benefit for a variety of use cases, including edge computing and IoT devices (where consistent, easily updatable system images are crucial), as well as on general cloud-native infrastructure to enable infrastructure-as-code practices at the OS level.


Traditional EDR won't cut it: why you need zero trust endpoint security

The development of EDR tools was the next step in cyber resiliency after antivirus began falling behind in its ability to stop malware. The struggle began when the rate at which new malware was created and distributed far outweighed the rate at which new samples could be logged and prevented from causing harm. The most logical step to take was to develop a cybersecurity tool that could identify malware by actions taken, not just by code. ... cybercriminals are now using AI to streamline their malware generation process, creating malware at faster speeds and improving its ability to run without detection. Another crucial problem with traditional EDRs and other detection-based tools is that they do not act until the malware is already running in the environment, which means they fail customers and miss cyberattacks until it is already too late. ... With application allowlisting, you create a list of the applications and software you trust and need and block everything else from running. Allowlisting is a zero trust method of application control that prevents known and unknown threats from running on your devices, preventing cyberattacks, like ransomware, from detonating.
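
A minimal sketch of hash-based allowlisting, assuming a hypothetical allowlist and binary path: only executables whose SHA-256 digest appears on the approved list may run, and everything else is denied by default. Real products enforce this at the kernel or driver layer rather than in a script.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 digests for approved binaries.
ALLOWLIST = {
    "8f434346648f6b96df89dda901c5176b10a6d83961dd3c1ac88b59b2dc327aa4",
}

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def may_execute(path: Path) -> bool:
    """Default-deny: anything not explicitly trusted is blocked."""
    return sha256_of(path) in ALLOWLIST

binary = Path("/usr/local/bin/report-generator")  # hypothetical path
if binary.exists():
    print("allowed" if may_execute(binary) else "blocked")
```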


AI and the future of finance: How AI is empowering NBFCs

One of the first applications of AI in non-banking financial companies (NBFCs) is the evaluation of credit risk. Until now, lenders relied mainly on credit scoring models and legacy data on a client. However, such models often fail to grasp the complexity of a person’s or business’s financial profile, a common problem in countries with large informal economies. AI, on the other hand, can analyse large amounts of data, from historical transaction information to phone use and even social behaviour. AI algorithms are able to analyse this data at astonishing speed, recognising trends and yielding more precise forecasts about the borrower’s capability to pay back loans. This enables NBFCs to offer credit to a wider and more diverse client base, which ultimately drives financial inclusion. ... The function of AI extends beyond just providing transactional support. With the help of sophisticated machine-learning models, NBFCs are able to offer personalised financial products tailored to individual preferences, lifestyles, and financial circumstances. ... By using advanced analytics and machine-learning models, NBFCs are able to identify new opportunities to grow.
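
As a toy illustration of scoring credit risk from alternative features, the sketch below fits a simple logistic regression on made-up transaction and phone-usage data; the features, records, and the scikit-learn dependency are assumptions, and real NBFC models are far richer and must be validated for fairness.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # assumed available

# Columns: monthly transaction count, average balance (thousands), phone top-ups per month
X = np.array([[120, 45.0, 8],
              [ 15,  2.0, 1],
              [ 80, 20.0, 5],
              [  5,  0.5, 0]])
y = np.array([0, 1, 0, 1])  # 1 = defaulted on a past loan (toy labels)

model = LogisticRegression().fit(X, y)
applicant = np.array([[60, 10.0, 3]])
print("estimated default probability:", model.predict_proba(applicant)[0, 1])
```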


Achieving Success in the New Era of AI-Driven Data Management

AI-driven personalization is essential for companies looking to stand out in a competitive marketplace. By leveraging vast amounts of customer data, AI helps businesses create highly tailored experiences that adapt to individual user preferences, increasing engagement and loyalty. Recent research shows that "81 percent of customers prefer companies that offer a personalized experience." ... AI-driven data analytics comes with significant ethical, privacy, and regulatory challenges. Ethical considerations, such as bias detection and mitigation, are necessary to ensure AI models provide fair and accurate outcomes. Implementing governance frameworks and transparency in AI decision-making builds trust by making algorithms' logic accessible and accountable, minimizing the risk of unintended discrimination in data-driven insights. Data privacy and security are equally critical. The increased use of techniques like differential privacy reflects rising expectations for high privacy standards. Differential privacy adds carefully calibrated "noise" to data sets — random variations designed to prevent the re-identification of individuals while still allowing accurate aggregate insights.
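
The "carefully calibrated noise" mentioned above is typically Laplace noise scaled to the query's sensitivity divided by epsilon. A minimal sketch of a differentially private count follows; the epsilon value is an illustrative choice, and production systems also track the privacy budget across queries.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise scaled to sensitivity/epsilon,
    so the presence or absence of any one individual is hard to infer."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Release how many customers opted in, without exposing any single record.
print(dp_count(true_count=1342))
```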


Riding the wave of digital transformation: Insights and lessons from Japan’s journey

Availability and accessibility of digital infrastructure are often inadequate in developing countries, preventing digital services from reaching everyone. Japan’s experience in this domain ranges from formulating national strategies for digital infrastructure development to providing affordable high-speed internet access, and to integrating and standardizing different systems. The key takeaway here is the importance of sustaining robust infrastructure investment over a period of time and providing room for digital system scalability and flexibility. ... With this in mind, Japan embraced innovative approaches to enhance people’s digital skills. Some cities, like Kitakyushu, are training staff to use low-code tools—software that allows them to design applications with minimal hand-written code—as well as providing other training on digital transformation to equip staff at various levels within local governments with relevant skills. ... Digital transformation relies on coordinated efforts: the Japanese central government established supportive policies and frameworks, while local governments translated these into actionable initiatives for public benefit.


When Hackers Meet Tractors: Surprising Roles in IoT Security

IoT encompasses the billions of connected devices we use daily - everything from smart home gadgets to fitness trackers. IIoT focuses on industrial applications, such as manufacturing robots, energy grid systems and autonomous vehicles. While these technologies bring remarkable efficiencies, they also expand the potential attack surface for cybercriminals. Ransomware, data breaches, and system takeovers are no longer just concerns for tech companies - they’re threats to every industry that relies on connectivity. ... Breaking into IoT and IIoT cybersecurity may seem daunting, but the pathway is more accessible than you might think. Leverage transferable skills. Many professionals transition into IoT/IIoT roles by building on their existing cybersecurity expertise. For instance, knowledge of network security or ethical hacking can be adapted to these environments. It is also beneficial to pursue specialized certifications that can demonstrate your expertise and open doors in niche fields. ... GICSP is designed specifically for professionals working in industrial environments, such as manufacturing, energy, or transportation. It bridges the gap between IT, OT (Operational Technology), and IIoT, emphasizing the secure operation of industrial control systems.


How to Ensure Business Continuity for Banks and Financial Services

A business continuity plan is only as effective as the people behind it. Creating a culture of safety and preparedness throughout a financial services organization is key to a successful crisis response. Regular training sessions, disaster simulations, and frequent updates to the BCP keep teams ready and capable of responding efficiently. Facilities teams must have a clear understanding of their roles and responsibilities during a disruption. From decision-makers to on-the-ground personnel, each team member should know exactly what steps to take to restore operations. Clear protocols ensure that recovery efforts can be executed quickly, minimizing service interruptions and maintaining a seamless customer experience. Disasters may be inevitable, but with the right facilities management strategies in place, financial service companies can be well-prepared to respond effectively and ensure business continuity. From conducting risk assessments to leveraging technology and building strong vendor partnerships, proactive facilities management can be the difference between a rapid recovery and prolonged downtime. Now is the time to assess the current state of facilities, ensure teams are trained, and confirm that business continuity plans are robust. 


Enterprises Ill-prepared to Realize AI’s Potential

To build more AI infrastructure readiness, skilled talent will be key to overcoming a deficit in workers needed to maintain IT infrastructure, Patterson suggests. In fact, only 31% of companies believed their talent was in a “high state of readiness” to fully make use of AI. In addition, 24% of those surveyed did not believe their companies held enough talent to address the “growing demand for AI,” the Cisco report revealed. Expanding the AI talent pool will require forming a learning culture for innovation, he says. That includes talent development and forming clear career paths. Leadership feels the pressure to achieve AI readiness, but workers are hesitant to use AI, according to the Cisco AI readiness report. “While organizations face pressure from leadership to bring in AI, the disconnect is likely due to hesitancy among workers within the organization who must take steps to gain new skills for AI or fear AI taking over their jobs,” Patterson says. ... “If you can’t secure AI, you won’t be able to successfully deploy AI,” he says. Meanwhile, tech professionals should develop a holistic view of the infrastructure required to adopt AI while incorporating observability and security, according to Patterson. A holistic view of infrastructure will bring “easier operations, resiliency, and efficiency at scale,” Patterson says.


The Role of Edge-to-Cloud Infrastructure in Shaping Digital Transformation

Unlike the traditional model of transporting data to the cloud for processing, edge infrastructure brings distributed computing closer to the users: it is powered by small, local computing resources near the end user and relies on the cloud only as a ‘director’ of operations. This edge-to-cloud computing model allows IoT devices to stay small and affordable. It also allows localized computing power to expedite data processing across many applications without relying on high throughput and consistent connectivity to a hyperscale cloud or other data center hundreds or thousands of miles away. ... The key to edge computing is handling the sizeable amounts of data that IoT devices can produce, in conjunction with existing in-building systems that would be difficult, risky, or cost-prohibitive to supplant. Given that IoT devices and existing systems often provide raw and isolated data, IoT platforms consolidate, aggregate, and then analyze data in real time, or farm it out to external tools in the cloud for specific needs (work order management, MOPs, etc.). The point is not just real-time context: because IoT platforms also maintain a database of historical information, truly actionable outcomes can be driven from the data.
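
A minimal illustration of that pattern: aggregate a burst of raw sensor readings locally and forward only a compact summary upstream. The readings and the forwarding step are placeholders; a real edge platform would also buffer, filter, and enrich the data.

```python
import statistics

def summarize(readings):
    """Reduce a burst of raw sensor samples to a small summary record."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "min": min(readings),
    }

# Raw samples stay at the edge; only the summary would be sent to the cloud
# (e.g., to an analytics or work-order system).
raw_vibration_samples = [0.12, 0.15, 0.11, 0.95, 0.13]  # hypothetical sensor burst
summary = summarize(raw_vibration_samples)
print("forward to cloud:", summary)
```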



Quote for the day:

"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others" -- Jack Welch