
Daily Tech Digest - April 23, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


MLOps vs. DevOps: Key Differences — and Why They Work Better Together

Arguably, the greatest difference between DevOps and MLOps is that DevOps is, by most definitions, an abstract philosophy, whereas MLOps comes closer to prescribing a distinct set of practices. Ultimately, the point of DevOps is to encourage software developers to collaborate more closely with IT operations teams, based on the idea that software delivery processes are smoother when both groups work toward shared goals. In contrast, collaboration is not a major focus for MLOps. You could argue that MLOps implies that some types of collaboration between different stakeholders — such as data scientists, AI model developers, and model testers — need to be part of MLOps workflows. ... Another key difference is that DevOps centers solely on software development. MLOps is also partly about software development to the extent that model development entails writing software. However, MLOps also addresses other processes — like model design and post-deployment management — that don't overlap closely with DevOps as traditionally defined. ... Differing areas of focus lead to different skill requirements for DevOps versus MLOps. To thrive at DevOps, you must master DevOps tools and concepts like CI/CD and infrastructure-as-code (IaC).


Transforming quality engineering with AI

AI-enabled quality engineering promises to be a game changer, driving a level of precision and efficiency that is beyond the reach of traditional testing. AI algorithms can analyse historical data to identify patterns and predict quality issues, enabling organisations to take early action; machine learning tools detect anomalies with great accuracy, ensuring nothing is missed. Self-healing test scripts update automatically, without manual intervention. Machine Learning models automate test selection, picking the most relevant ones, while reducing both manual effort and errors. In addition, AI can prioritise test cases based on criticality, thus optimising resources and improving testing outcomes. Further, it can integrate with CI/CD pipelines, providing real-time feedback on code quality, and distributing updates automatically to ensure software applications are always ready for deployment. ... AI brings immense value to quality engineering, but also presents a few challenges. To function effectively, algorithms require high-quality datasets, which may not always be available. Organisations will likely need to invest significant resources in acquiring AI talent or building skills in-house. There needs to be a clear plan for integrating AI with existing testing tools and processes. Finally, there are concerns such as protecting data privacy and confidentiality, and implementing Responsible AI.
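
As a minimal sketch of the test-selection idea above (the feature names and data are invented for illustration, not from the article), a classifier trained on historical runs can rank candidate tests by predicted failure probability so the riskiest ones run first:

```python
# Illustrative ML-based test selection: rank candidate tests by predicted
# failure probability. Features and data are hypothetical examples.
from sklearn.ensemble import GradientBoostingClassifier

# Per (change, test) features: covered files touched, test runtime (s),
# recent failure rate. Label: did the test fail on that change?
X_history = [
    [3, 120, 0.10],
    [0,  45, 0.00],
    [5, 300, 0.30],
    [1,  60, 0.05],
    [4, 180, 0.20],
    [0,  30, 0.02],
]
y_history = [1, 0, 1, 0, 1, 0]

model = GradientBoostingClassifier().fit(X_history, y_history)

candidate_tests = {
    "test_checkout": [4, 200, 0.25],
    "test_login":    [0,  30, 0.01],
    "test_search":   [2,  90, 0.08],
}
ranked = sorted(
    candidate_tests,
    key=lambda name: model.predict_proba([candidate_tests[name]])[0][1],
    reverse=True,
)
print(ranked)  # highest predicted failure risk first
```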


The Role of AI in Global Governance

Aurora drew parallels with transformative technologies such as electricity and the internet. "If AI reaches some communities late, it sets them far behind," he said. He pointed to Indian initiatives such as Bhashini for language inclusion, e-Sanjeevani for telehealth, Karya for employment through AI annotation and farmer.ai in Baramati, which boosted farmers' incomes by 30% to 40%. Schnorr offered a European perspective, stressing that AI's transformative impact on economies and societies demands trustworthiness. Reflecting on the EU's AI Act, he said its dual aim is fostering innovation while protecting rights. "We're reviewing the Act to ensure it doesn't hinder innovation," Schnorr said, advocating for global alignment through frameworks such as the G7's Hiroshima Code of Conduct and bilateral dialogues with India. He underscored the need for rules to make AI human-centric and accessible, particularly for small and medium enterprises, which form the backbone of both German and Indian economies. ... Singh elaborated on India's push for indigenous AI models. "Funding compute is critical, as training models is resource-intensive. We have the talent and datasets," he said, citing India's second-place ranking in GitHub AI projects per the Stanford AI Index. "Building a foundation model isn't rocket science - it's about providing the right ingredients."


Cisco ThousandEyes: resilient networks start with global insight

To tackle the challenges that arise from (common or uncommon) misconfigurations and other network problems, we need an end-to-end topology, Vaccaro reiterates. ThousandEyes (and Cisco as a whole) have recently put a lot of extra work into this. We saw a good example of this recently during Mobile World Congress. There, ThousandEyes announced Connected Devices. This is intended for service providers and extends their insight into the performance of their customers’ networks in their home environments. The goal, as Vaccaro describes it, is to help service providers see deeper so that they can catch an outage or other disruption quickly, before it impacts customers who might be streaming their favorite show or getting on a work call. ... The Digital Operational Resilience Act (DORA) will be no news to readers who are active in the financial world. You can see DORA as a kind of advanced NIS2, only directly enforced by the EU. It is a collection of best practices that many financial institutions must adhere to. Most of it is fairly obvious. In fact, we would call it basic hygiene when it comes to resilience. However, one component under DORA will have caused financial institutions some stress and will continue to do so: they must now adhere to new expectations when it comes to the services they provide and the resilience of their third-party ICT dependencies.


A Five-Step Operational Maturity Model for Benchmarking Your Team

An operational maturity model is your blueprint for building digital excellence. It gives you the power to benchmark where you are, spot the gaps holding you back and build a roadmap to where you need to be. ... Achieving operational maturity starts with knowing where you are and defining where you want to go. From there, organizations should focus on four core areas: Stop letting silos slow you down. Unify data across tools and teams to enable faster incident resolution and improve collaboration. Integrated platforms and a shared data view reduce context switching and support informed decision-making. Because in today’s fast-moving landscape, fragmented visibility isn’t just inefficient — it’s dangerous. ... Standardize what matters. Automate what repeats. Give your teams clear operational frameworks so they can focus on innovation instead of navigation. Eliminate alert noise and operational clutter that’s holding your teams back. Less noise, more impact. ... Deploy automation and AI across the incident lifecycle, from diagnostics to communication. Prioritize tools that integrate well and reduce manual tasks, freeing teams for higher-value work. ... Use data and automation to minimize disruptions and deliver seamless experiences. Communicate proactively during incidents and apply learnings to prevent future issues.


The Future is Coded: How AI is Rewriting the Rules of Decision Theaters

At the heart of this shift is the blending of generative AI with strategic foresight practices. In the past, planning for the future involved static models and expert intuition. Now, AI models (including advanced neural networks) can churn through reams of historical data and real-time information to project trends and outcomes with uncanny accuracy. Crucially, these AI-powered projections don’t operate in a vacuum – they’re designed to work with human experts. By integrating AI’s pattern recognition and speed with human intuition and domain expertise, organizations create a powerful feedback loop. ... The fusion of generative AI and foresight isn’t confined to tech companies or futurists’ labs – it’s already reshaping industries. For instance, in finance, banks and investment firms are deploying AI to synthesize market signals and predict economic trends with greater accuracy than traditional econometric models. These AI systems can simulate how different strategies might play out under various future market conditions, allowing policymakers in central banks or finance ministries to test interventions before committing to them. The result is a more data-driven, preemptive strategy – allowing decision-makers to adjust course before a forecasted risk materializes. 


More accurate coding: Researchers adapt Sequential Monte Carlo for AI-generated code

The researchers noted that AI-generated code can be powerful, but it can also often lead to code that disregards the semantic rules of programming languages. Other methods to prevent this can distort models or are too time-consuming. Their method makes the LLM adhere to programming language rules by discarding, early in the process, code outputs that are unlikely to work and “allocating efforts towards outputs that are most likely to be valid and accurate.” ... The researchers developed an architecture that brings SMC to code generation “under diverse syntactic and semantic constraints.” “Unlike many previous frameworks for constrained decoding, our algorithm can integrate constraints that cannot be incrementally evaluated over the entire token vocabulary, as well as constraints that can only be evaluated at irregular intervals during generation,” the researchers said in the paper. Key features of adapting SMC sampling to model generation include a proposal distribution in which token-by-token sampling is guided by cheap constraints, importance weights that correct for biases, and resampling that reallocates compute effort towards promising partial generations. ... AI models have made engineers and other coders work faster and more efficiently. They’ve also given rise to a whole new kind of software engineer: the vibe coder. 
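
A rough, self-contained sketch of that SMC structure is shown below. The proposal, constraint, and check functions are toy stand-ins (not the researchers' code): each particle is a partial program, tokens are proposed under a cheap incremental constraint, importance weights correct for the bias that guidance introduces, a heavier check runs only at irregular intervals, and resampling reallocates particles toward prefixes that look valid so far.

```python
# Toy Sequential Monte Carlo over partial code generations ("particles").
# propose_next_token, cheap_constraint and expensive_check are hypothetical
# stand-ins for an LM proposal, an incrementally checkable constraint (e.g. a
# grammar) and a check that can only run occasionally (e.g. parsing).
import random

VOCAB = ["x", "=", "1", "+", "(", ")", " "]

def propose_next_token(prefix):
    token = random.choice(VOCAB)
    q = 1.0 / len(VOCAB)      # proposal probability
    p_lm = 1.0 / len(VOCAB)   # stand-in for the LM's own probability
    return token, p_lm, q

def cheap_constraint(prefix):
    # Incremental syntactic check: never more ')' than '(' so far.
    return 1.0 if prefix.count(")") <= prefix.count("(") else 0.0

def expensive_check(prefix):
    # Occasional heavier check: parentheses balanced at this point.
    return 1.0 if prefix.count("(") == prefix.count(")") else 0.5

def smc_generate(num_particles=8, max_steps=30):
    particles = [""] * num_particles
    weights = [1.0] * num_particles
    for step in range(max_steps):
        for i in range(num_particles):
            token, p_lm, q = propose_next_token(particles[i])
            particles[i] += token
            # Importance weight corrects for bias from the guided proposal.
            weights[i] *= p_lm * cheap_constraint(particles[i]) / q
        if step % 10 == 9:  # constraint evaluated only at irregular intervals
            weights = [w * expensive_check(p) for w, p in zip(weights, particles)]
        total = sum(weights)
        if total == 0:
            return ""
        # Resample: reallocate compute toward prefixes most likely to be valid.
        idx = random.choices(range(num_particles), weights=weights, k=num_particles)
        particles = [particles[i] for i in idx]
        weights = [1.0] * num_particles
    return random.choice(particles)

print(smc_generate())
```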


You Can't Be in Recovery Mode All the Time — Superna CEO

The proactive approach, he explains, shifts their position in the security lifecycle: "Now we're not responding with a very tiny blast radius and instantly recovering. We are officially left-of-the-boom; we are now ‘the incident never occurred.’" Next, Hesterberg reveals that the next wave of innovation focuses on leveraging the unique visibility his company has in terms of how critical data is accessed. “We have a keen understanding of where your critical data is and what users, what servers, and what services access that data.” From a scanning, patching, and upgrade standpoint, Hesterberg shares that large organizations often face the daunting task of addressing hundreds or even thousands of systems flagged for vulnerabilities daily. To help streamline this process, he says that his team is working on a new capability that integrates with the tools these enterprises already depend on. This upcoming feature will surface, in a prioritized way, the specific servers or services that interact with an organization's most critical data, highlighting the assets that matter most. By narrowing down the list, Hesterberg notes, teams can focus on the most potentially dangerous exposures first. Instead of trying to patch everything, he says, “If you know the 15, 20, or 50 that are most dangerous, potentially most dangerous, you're going to prioritize them.” 


When confusion becomes a weapon: How cybercriminals exploit economic turmoil

Defending against these threats doesn’t start with buying more tools. It starts with building a resilient mindset. In a crisis, security can’t be an afterthought – it must be a guiding principle. Organizations relying on informal workflows or inconsistent verification processes are unknowingly widening their attack surface. To stay ahead, protocols must be defined before uncertainty takes hold. Employees should be trained not just to spot technical anomalies, but to recognize emotional triggers embedded in legitimate-looking messages. Resilience, at its core, is about readiness: not just to respond, but also to anticipate. Organizations that view economic disruption as a dual threat, both financial and cyber, will position themselves to lead with control rather than react in chaos. This means establishing behavioral baselines, implementing layered authentication, and adopting systems that validate, not just facilitate. As we navigate continued economic uncertainty, we are reminded once again that cybersecurity is no longer just about technology. It’s about psychology, communication, and foresight. Defending effectively means thinking tactically, staying adaptive, and treating clarity as a strategic asset.


The productivity revolution – enhancing efficiency in the workplace

In difficult economic times, when businesses are tightening the purse strings, productivity improvements may often be overlooked in favour of cost reductions. However, cutting costs is merely a short-term solution. By focusing on sustainable productivity gains, businesses will reap dividends in the long term. To achieve this, organisations must turn their focus to technology. Some technology solutions, such as cloud computing, ERP systems, project management and collaboration tools, produce significant flexibility or performance advantages compared to legacy approaches and processes. Whilst an initial expense, the long-term benefits are often multiples of the investment – cost reductions, time savings, employee motivation, to name just a few. And all of those technology categories are being enhanced with artificial intelligence – for example adding virtual agents to help us do more, quickly. ... At a time when businesses and labour markets are struggling with employee retention and availability, it has become more critical than ever for organisations to focus on effective training and wellbeing initiatives. Minimising staff turnover and building up internal skill sets is vital for businesses looking to improve their key outputs. Getting this right will enable organisations to build smarter and more effective productivity strategies.


Daily Tech Digest - August 24, 2024

India Nears Its Quantum Moment — Completion Of First Quantum Computer Expected Soon

Despite the progress, significant scientific challenges remain. Qubits are inherently unstable and susceptible to disturbances, leading to ‘decoherence’. Researchers worldwide are striving to overcome this through error-corrected qubits. “You have to show that by using such a system, you are actually solving some problem which is of relevance to industry or science or society and show that it is better, faster and cheaper,” Dr. Vijayaraghavan told India Today. “That of course will be the first holy grail of useful quantum computers. We are not there yet.” In Bengaluru, startup QpiAI is also venturing into quantum computing. Led by CEO and chairman Dr. Nagendra Nagaraja, the company is constructing a 25-qubit quantum computer, with plans to unveil it by the end of the year, according to the news service. With $6 million in funding, QpiAI intends to offer the platform to customers via cloud services and supply systems to top institutes and research groups across India. “Our vision is to integrate AI and quantum computing in enterprises,” Dr. Nagaraja told India Today.


How Seeing Is Believing With Your Leadership Abilities

One of the standout points in my discussion with Cherches was his approach to communicating complex ideas across different functions within an organization. He stresses the importance of translating concepts into the "language" of the audience. Whether through analogies, stories, or visual diagrams, the goal is to make the abstract tangible. Cherches illustrates this by introducing an example. "We need to communicate in the language of our stakeholders. For example, I teach in the HR master's program at NYU, and I always emphasize that if you need funding for an HR initiative, you have to translate that into the language of money for the CFO. It's about finding the right visual and verbal tools to resonate with different audiences." This is where visual leadership shines—bridging gaps between different departments and creating a common language everyone can understand. In today's business environment, where cross-functional and asynchronous collaboration is critical, leaders who can translate their vision into visual terms are more likely to gain buy-in and drive initiatives forward.


5 things I wish I knew as a CFO before starting a digital transformation

One of our biggest missteps was not thoroughly defining what we intended to achieve from different perspectives — IT, employees, customers and the executive team. We knew having to use something new would have pain points, but we didn’t understand the impact of going from a customized environment to a more standard platform. The business didn’t understand the advantages either — their work might be slightly less efficient or different, but the processes would now be scalable, more stable and completely standardized across the different business units. ... In hindsight, we greatly underestimated the effort to cleanse and prepare our data for migration. Now that the project is well on its way, I always hear about the importance of data cleansing and preparation. But I never heard it from anyone upfront. We could have spent a year restructuring data hierarchies to align with the new system before even starting implementation. ... Not every part of the project will be a success or an upgrade. But there will be incredible success stories, efficiencies, new capabilities or insights. Often, they’re unexpected, like the impact that pricing changes had on our business, even though they weren’t in our original scope. 


Linus Torvalds talks AI, Rust adoption, and why the Linux kernel is 'the only thing that matters'

Torvalds said, "There is some stability with old kernels, and we do backport for patches and fixes to them, but some fixes get missed because people don't think they're important enough, and then it turns out they were important enough." Besides, if you stick with an old kernel for too long when you finally need to update to a newer one, it can be a massive pain to do so. So, "to all the Chinese embedded Linux vendors who are still using the Linux 4.9 kernel," Torvalds said, wagging his finger, "Stop." In addition, Hohndel said that when patching truly ancient kernels, the Linux kernel team can only say, "Sorry, we can't help you with that. It was so long ago that we don't even remember how to fix it." Switching to a more modern topic, the introduction of the Rust language into Linux, Torvalds is disappointed that its adoption isn't going faster. "I was expecting updates to be faster, but part of the problem is that old-time kernel developers are used to C and don't know Rust. They're not exactly excited about having to learn a new language that is, in some respects, very different. So there's been some pushback on Rust."


EU AI Act Tightens Grip on High-Risk AI Systems: Five Critical Questions for U.S. Companies

The EU AI Act applies to U.S. companies across the entire AI value chain that develop, use, import, or distribute AI Systems in the EU market. Further, a U.S. company is subject to the EU AI Act where it operates AI Systems that produce output used in the EU market. In other words, even if a U.S. company develops or uses a “High-Risk” AI System for job screening or online proctoring purposes, the EU AI Act still governs if outputs produced by such AI System are used in the EU for recruiting or admissions purposes. In another use case, if a U.S. auto OEM incorporates an AI system to support self-driving functionalities and distributes the vehicle under its own brand in the EU, such OEM is subject to the EU AI Act. ... In addition, for those AI systems classified as “High-Risk” under the “Specific Use Cases” in Annex III, they must also complete a conformity assessment to certify that such AI systems comply with the EU AI Act. Where AI Systems are themselves “Regulated Products or related safety components,” the EU AI Act seeks to harmonize and streamline the processes to reduce market entrance costs and timelines. 


ServiceOps: Balancing Speed and Risk in DevOps

The integration between ITSM and AIOps tools automates identification of risky changes by analyzing risk information from the service history and operational data in a single pane of glass. AI models correlate past changes and determine their impact on operational variables such as service availability and health. This information decreases time spent on change requests by helping teams quickly understand the risk factors and the scope of impact by using powerful service dependency maps from AIOps tools. This AI-driven assessment also provides great feedback to DevOps and SRE teams, enabling them to deploy faster and with greater confidence. ... A conversational interface for change risk assessment can make risk insights understandable and actionable for teams tasked with delivering high-quality software rapidly. Imagine giving teams tasked with approving software changes access to a chat-based interface for asking questions and getting answers tailored to the specific environments where their software will be deployed. They could get answers to questions like, “What are the risky changes?” and “Can I look at change collisions?” The pace of change driven by DevOps presents significant challenges to IT service and IT operations teams. Both need to accelerate change without risking downtime. 
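
A toy sketch of how a service dependency map can translate into scope of impact for a change request is shown below (the service names and graph are invented for illustration): everything reachable downstream of the changed service is part of the blast radius, which is the kind of answer a chat interface could surface for "what are the risky changes?".

```python
# Illustrative blast-radius calculation over a hypothetical service dependency map.
from collections import deque

DEPENDENCY_MAP = {  # edges point from a service to the services that depend on it
    "payments-db": ["payments-api"],
    "payments-api": ["checkout", "billing"],
    "checkout": ["web-frontend"],
    "billing": [],
    "web-frontend": [],
}

def blast_radius(changed_service):
    """Return every service reachable downstream of the changed one."""
    seen, queue = set(), deque([changed_service])
    while queue:
        svc = queue.popleft()
        for dependent in DEPENDENCY_MAP.get(svc, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(blast_radius("payments-db"))
# {'payments-api', 'checkout', 'billing', 'web-frontend'}
```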


AI Assistants: Picking the Right Copilot

Not all assistants are meant for tech professionals. Others with a focus on consumer benefits are emerging. ... A good AI assistant should offer a responsive chat feature to indicate its understanding of its environment. Jupyter, Tabnine, and Copilot all offer a native chat UI for the user. The chat experience influences how well a professional feels the AI assistant is working. How well it interprets prompts and how accurate the suggestions are all start with the conversational assistant experience, so technical professionals should note their experiences to see which assistant works best for their projects. Professionals should also consider the frequency of the work in which the AI assistant is being applied. The frequency can indicate the degree of value being created — more frequency gives an AI assistant an opportunity to learn user preferences and past account history, which plays into its recommendations. The result is better productivity with AI, learning quickly where to best explore and experiment with crafting applications. Considering solution frequency can also reveal the cost of the technology against the value received. 


Researchers propose a smaller, more noise-tolerant quantum factoring circuit for cryptography

The MIT researchers found a clever way to compute exponents using a series of Fibonacci numbers that requires simple multiplication, which is reversible, rather than squaring. Their method needs just two quantum memory units to compute any exponent. "It is kind of like a ping-pong game, where we start with a number and then bounce back and forth, multiplying between two quantum memory registers," Vaikuntanathan adds. They also tackled the challenge of error correction. The circuits proposed by Shor and Regev require every quantum operation to be correct for their algorithm to work, Vaikuntanathan says. But error-free quantum gates would be infeasible on a real machine. They overcame this problem using a technique to filter out corrupt results and only process the right ones. The end-result is a circuit that is significantly more memory-efficient. Plus, their error correction technique would make the algorithm more practical to deploy. "The authors resolve the two most important bottlenecks in the earlier quantum factoring algorithm. Although still not immediately practical, their work brings quantum factoring algorithms closer to reality," adds Regev.
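
The classical arithmetic behind the "ping-pong" trick can be sketched in a few lines (this only illustrates the exponent schedule; the actual circuit performs these multiplications reversibly on two quantum registers): exponents follow the Fibonacci sequence, so each step is a single multiplication between two registers rather than a squaring.

```python
# Classical sketch of Fibonacci exponentiation: reach x^F(k) with
# multiplications alone, bouncing between two registers.

def fibonacci_power(x, k, modulus):
    """Return x ** F(k) mod modulus, where F(1) = F(2) = 1."""
    a, b = pow(x, 1, modulus), pow(x, 1, modulus)  # registers hold x^F(1), x^F(2)
    for _ in range(k - 2):
        a, b = b, (a * b) % modulus                # x^F(n+1) = x^F(n) * x^F(n-1)
    return b if k >= 2 else a

# F(6) = 8, so this matches ordinary modular exponentiation.
assert fibonacci_power(3, 6, 10**9 + 7) == pow(3, 8, 10**9 + 7)
```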


Power of communication in leadership transition

When change is on the horizon, the worst thing a leader can do is ignore or suppress employees' natural reactions. Uncertainty leads to rumours and speculation. Instead, leaders should create an environment of open communication, where teams feel comfortable voicing their concerns, asking questions, and sharing their thoughts on the new leader’s vision and the upcoming changes. Being honest and transparent is key to building trust. Open communication can help ease fears, address worries, and empower employees to embrace changes and contribute to the organisation’s success. It’s important to clearly explain what is happening, why it’s happening, and how it may affect different roles. Avoiding the temptation to sugar-coat negative news is also crucial. Listening is just as important as speaking. Leaders should avoid getting defensive or dismissive when employees share their concerns. ... To effectively reassure employees, leaders need to understand the root causes of their anxiety. Whether concerns are about job security, changes in responsibilities, or shifts in the company’s culture, employees need to know that their concerns are being heard and taken seriously.


What are the most in-demand skills in tech right now?

Martyn said that while there are many approaches to gain new skills, she advises learners to understand the areas where they have a natural aptitude and explore their preferred learning style. “With the right attitude and an understanding of their natural aptitude, I recommend reaching out for support to a leader or coach to support in the creation of a formal learning and development plan starting with some small learning objectives and building over time,” she said. “The technical, business and cognitive skills required for success will evolve over time but putting the right routines in place to consistently retrospect on your skill level, generate new ideas, identify opportunities for learning and execute a learning plan is a fundamental skill that will support continuous growth in the long term.” Pareek said that mastery of digital technologies such as AI and data analytics is becoming increasingly important both in specialist roles and more generally, so adaptability and resilience is key. “Building a robust professional network and engaging in collaboration can unlock new opportunities, while mentorship provides valuable guidance. ...”



Quote for the day:

"One of the sad truths about leadership is that, the higher up the ladder you travel, the less you know." -- Margaret Heffernan

Daily Tech Digest - June 14, 2024

State Machine Thinking: A Blueprint For Reliable System Design

State machines are instrumental in defining recovery and failover mechanisms. By clearly delineating states and transitions, engineers can identify and code for scenarios where the system needs to recover from an error, failover to a backup system or restart safely. Each state can have defined recovery actions, and transitions can include logic for error handling and fallback procedures, ensuring that the system can return to a safe state after encountering an issue. My favorite phrase to advocate here is: “Even when there is no documentation, there is no scope for delusion.” ... Having neurodivergent team members can significantly enhance the process of state machine conceptualization. Neurodivergent individuals often bring unique perspectives and problem-solving approaches that are invaluable in identifying states and anticipating all possible state transitions. Their ability to think outside the box and foresee various "what-if" scenarios can make the brainstorming process more thorough and effective, leading to a more robust state machine design. This diversity in thought ensures that potential edge cases are considered early in the design phase, making the system more resilient to unexpected conditions.
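
A minimal sketch of that idea follows (the states, events, and recovery actions are invented for illustration): transitions live in an explicit table, error-handling edges carry a recovery action, and any undefined transition is rejected rather than guessed at, so the system can only move between known-safe states.

```python
# Illustrative state machine with recovery and failover transitions.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    RUNNING = auto()
    DEGRADED = auto()
    FAILED_OVER = auto()
    RECOVERING = auto()

# (current state, event) -> (next state, optional recovery action)
TRANSITIONS = {
    (State.IDLE, "start"):                 (State.RUNNING, None),
    (State.RUNNING, "error"):              (State.DEGRADED, "retry_primary"),
    (State.DEGRADED, "retry_ok"):          (State.RUNNING, None),
    (State.DEGRADED, "retry_failed"):      (State.FAILED_OVER, "switch_to_backup"),
    (State.FAILED_OVER, "primary_healthy"): (State.RECOVERING, "resync_state"),
    (State.RECOVERING, "resync_done"):     (State.RUNNING, None),
}

class Service:
    def __init__(self):
        self.state = State.IDLE

    def handle(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            # Undefined transition: fail loudly instead of drifting into an unknown state.
            raise ValueError(f"illegal event {event!r} in state {self.state.name}")
        next_state, recovery_action = TRANSITIONS[key]
        if recovery_action:
            print(f"running recovery action: {recovery_action}")
        self.state = next_state
        return self.state

svc = Service()
for event in ["start", "error", "retry_failed", "primary_healthy", "resync_done"]:
    print(event, "->", svc.handle(event).name)
```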


How to Build a Data Stack That Actually Puts You in Charge of Your Data

Sketch a data stack architecture that delivers the capabilities you've deemed necessary for your business. Your goal here should be to determine what your ideal data stack looks like, including not just which types of tools it will include, but also which personnel and processes will leverage those tools. As you approach this, think in a tool-agnostic way. In other words, rather than looking at vendor solutions and building a stack based on what's available, think in terms of your needs. This is important because you shouldn't let tools define what your stack looks like. Instead, you should define your ideal stack first, and then select tools that allow you to build it. ... Another critical consideration when evaluating tools is how much expertise and effort are necessary to get tools to do what you need them to do. This is important because too often, vendors make promises about their tools' capabilities — but just because a tool can theoretically do something doesn't mean it's easy to do that thing with that tool. For example, a data discovery tool might require you to install special plugins or write custom code before it can work with a legacy storage system you depend on.


IT leaders go small for purpose-built AI

A small AI approach has worked for Dayforce, a human capital management software vendor, says David Lloyd, chief data and AI officer at the company. Dayforce uses AI and related technologies for several functions, with machine learning helping to match employees at client companies to career coaches. Dayforce also uses traditional machine learning to identify employees at client companies who may be thinking about leaving their jobs, so that the clients can intervene to keep them. Not only are smaller models easier to train, but they also give Dayforce a high level of control over the data they use, a critical need when dealing with employee information, Lloyd says. When looking at the risk of an employee quitting, for example, the machine learning tools developed by Dayforce look at factors such as the employee’s performance over time and the number of performance increases received. “When modeling that across your entire employee base, looking at the movement of employees, that doesn’t require generative AI, in fact, generative would fail miserably,” he says. “At that point you’re really looking at things like a recurrent neural network, where you’re looking at the history over time.”


Why businesses need ‘agility and foresight’ to stay ahead in tech

In the current IT landscape, one of the most pressing challenges is the evolving threat of cyberattacks, particularly those augmented by GenAI. As GenAI becomes more sophisticated, it introduces new complexities for cybersecurity, with cybercriminals leveraging it to create advanced attack vectors. ... Several transformative technologies are reshaping our industry and the world at large. At the forefront of these innovations is GenAI. Over the past two years, GenAI has moved from theory to practice. While GenAI fostered many creative ideas in 2023 about how it would transform business, GenAI projects are now starting to become business-ready, with visible productivity gains becoming evident. Transformative technology also holds a strong promise to have a profound impact on cybersecurity, offering advanced capabilities for threat detection and incident response. Organisations will need to use their own data for training and fine-tuning models, conducting inference where data originates. Although there has been much discussion about zero trust within our industry, we’re now seeing it evolve from a concept to a real technology. 


Who Should Run Tests? On the Future of QA

QA is a funny thing. It has meant everything from “the most senior engineer who puts the final stamp on all code” to “the guy who just sort of clicks around randomly and sees if anything breaks.” I’ve seen QA operating at all different levels of the organization, from engineers tightly integrated with each team to an independent, almost outside organization. A basic question as we look at shifting testing left, as we put more testing responsibility with the product teams, is what the role of QA should be in this new arrangement. This can be generalized as “who should own tests?” ... If we’re shifting testing left now, that doesn’t mean that developers will be running tests for the first time. Rather, shifting left means giving developers access to a complete set of highly accurate tests, and instead of just guessing from their understanding of API contracts and a few unit tests that their code is working, we want developers to be truly confident that they are handing off working code before deploying it to production. It’s a simple, self-evident principle that when QA finds a problem, that should be a surprise to the developers. 


Implementing passwordless in device-restricted environments

Implementing identity-based passwordless authentication in workstation-independent environments poses several unique challenges. First and foremost is the issue of interoperability and ensuring that authentication operates seamlessly across a diverse array of systems and workstations. This includes avoiding repetitive registration steps which lead to user friction and inconvenience. Another critical challenge, without the benefit of mobile devices for biometric authentication, is implementing phishing and credential theft-resistant authentication to protect against advanced threats. Cost and scalability also represent significant hurdles. Providing individual hardware tokens to each user is expensive in large-scale deployments and introduces productivity risks associated with forgotten, lost, damaged or shared security keys. Lastly, the need for user convenience and accessibility cannot be understated. Passwordless authentication must not only be secure and robust but also user-friendly and accessible to all employees, irrespective of their technical expertise. 


Modern fraud detection need not rely on PII

A fraud detection solution should also retain certain broad data about the original value, such as whether an email domain is free or corporate, whether a username contains numbers, whether a phone number is premium, etc. However, pseudo-anonymized data can still be re-identified, meaning if you know two people’s names you can tell if and how they have interacted. This means it is still too sensitive for machine learning (ML) since models can almost always be analyzed to regurgitate the values that go in. The way to deal with that is to change the relationships into features referencing patterns of behavior, e.g., the number of unique payees from an account in 24 hours, the number of usernames associated with a phone number or device, etc. These features can then be treated as fully anonymized, exported and used in model training. In fact, generally, these behavioral features are more predictive than the original values that went into them, leading to better protection as well as better privacy. Finally, a fraud detection system can make good use of third-party data that is already anonymized. 
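
A small illustration of turning raw interactions into behavioral features follows (toy data, and a per-day grouping as an approximation of the 24-hour window mentioned above): only counts such as unique payees and devices per account are exported for model training, never the identifiers themselves.

```python
# Derive anonymizable behavioral features from a toy transaction log.
# In a real system the identifiers would already be pseudonymized tokens.
import pandas as pd

tx = pd.DataFrame({
    "account": ["a1", "a1", "a1", "a2", "a2"],
    "payee":   ["p1", "p2", "p3", "p1", "p1"],
    "device":  ["d1", "d1", "d2", "d3", "d3"],
    "ts": pd.to_datetime([
        "2024-06-01 09:00", "2024-06-01 13:00", "2024-06-01 20:00",
        "2024-06-01 10:00", "2024-06-02 11:00",
    ]),
})

# Counts per account and day: no raw payee, device or account values survive
# beyond the grouping keys, only aggregate behavior.
features = (
    tx.groupby(["account", pd.Grouper(key="ts", freq="D")])
      .agg(unique_payees=("payee", "nunique"),
           unique_devices=("device", "nunique"),
           tx_count=("payee", "size"))
      .reset_index()
)
print(features)
```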


Deepfakes: Coming soon to a company near you

Deepfake scams are already happening, but the size of the problem is difficult to estimate, says Jake Williams, a faculty member at IANS Research, a cybersecurity research and advisory firm. In some cases, the scams go unreported to save the victim’s reputation, and in other cases, victims of other types of scams may blame deepfakes as a convenient cover for their actions, he says. At the same time, any technological defenses against deepfakes will be cumbersome — imagine a deepfakes detection tool listening in on every phone call made by employees — and they may have a limited shelf life, with AI technologies rapidly advancing. “It’s hard to measure because we don’t have effective detection tools, nor will we,” says Williams, a former hacker at the US National Security Agency. “It’s going to be difficult for us to keep track of over time.” While some hackers may not yet have access to high-quality deepfake technology, faking voices or images on low-bandwidth video calls has become trivial, Williams adds. Unless your Zoom meeting is of HD or better quality, a face swap may be good enough to fool most people.


A Deep Dive Into the Economics and Tactics of Modern Ransomware Threat Actors

A common trend among threat actors is to rely on older techniques but allocate more resources and deploy them differently to achieve greater success. Several security solutions organizations have long relied on, such as multi-factor authentication, are now vulnerable to circumvention with very minimal effort. Specifically, organizations need to be aware of the forms of MFA factors they support, such as push notifications, pin codes, FIDO keys and legacy solutions like SMS text messages. The latter is particularly concerning because SMS messaging has long been considered an insecure form of authentication, managed by third-party cellular providers, thus lying outside the control of both employees and their organizations. In addition to these technical forms of breaches, the tried-and-true method of phishing is still viable. Both white hat and black hat tools continue to be enhanced to exploit common MFA replay techniques. Like Cobalt Strike, a professional tool built for security testers that threat actors also use to maintain persistence on compromised systems, MFA bypass/replay tools have gotten more professional. 


Troubleshooting Windows with Reliability Monitor

Reliability Monitor zeroes in on and tracks a limited set of errors and changes on Windows 10 and 11 desktops (and earlier versions going back to Windows Vista), offering immediate diagnostic information to administrators and power users trying to puzzle their way through crashes, failures, hiccups, and more. ... There are many ways to get to Reliability Monitor in Windows 10 and 11. At the Windows search box, if you type reli you’ll usually see an entry that reads View reliability history pop up on the Start menu in response. Click that to open the Reliability Monitor application window. ... Knowing the source of failures can help you take action to prevent them. For example, certain critical events show APPCRASH as the Problem Event Name. This signals that some Windows app or application has experienced a failure sufficient to make it shut itself down. Such events are typically internal to an app, often requiring a fix from its developer. Thus, if I see a Microsoft Store app that I seldom or never use throwing crashes, I’ll uninstall that app so it won’t crash any more. This keeps the Reliability Index up at no functional cost.
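
For scripted access to the same data, Reliability Monitor's records are exposed through the Win32_ReliabilityRecords WMI class. The sketch below (Windows only, shelling out to PowerShell from Python; the property selection is just one reasonable choice) pulls the most recent entries so failures can be reviewed or logged without opening the GUI.

```python
# Query the Win32_ReliabilityRecords WMI class (the data behind Reliability
# Monitor) via PowerShell, from Python. Windows only.
import subprocess

ps_command = (
    "Get-CimInstance -ClassName Win32_ReliabilityRecords | "
    "Select-Object -First 10 TimeGenerated, SourceName, EventIdentifier, ProductName | "
    "Format-List"
)

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps_command],
    capture_output=True,
    text=True,
)
print(result.stdout)  # recent application failures, Windows errors and installs
```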



Quote for the day:

"Success is a state of mind. If you want success start thinking of yourself as a sucess." -- Joyce Brothers

Daily Tech Digest - May 09, 2024

Red Hat delivers accessible, open source Generative AI innovation with Red Hat Enterprise Linux AI

RHEL AI builds on this open approach to AI innovation, incorporating an enterprise-ready version of the InstructLab project and the Granite language and code models along with the world’s leading enterprise Linux platform to simplify deployment across a hybrid infrastructure environment. This creates a foundation model platform for bringing open source-licensed GenAI models into the enterprise. RHEL AI includes: open source-licensed Granite language and code models that are supported and indemnified by Red Hat; a supported, lifecycled distribution of InstructLab that provides a scalable, cost-effective solution for enhancing LLM capabilities and making knowledge and skills contributions accessible to a much wider range of users; and optimised, bootable model runtime instances with Granite models and InstructLab tooling, packaged as bootable RHEL images via RHEL image mode, including optimised PyTorch runtime libraries and accelerators for AMD Instinct™ MI300X, Intel and NVIDIA GPUs, and NeMo frameworks. 


Regulators are coming for IoT device security

Up to now, the IoT industry has relied mainly on security by obscurity and the results have been predictable: one embarrassing compromise after another. IoT devices find themselves recruited into botnets, connected locks get trivially unlocked, and cars can get remotely shut down while barreling down the highway at 70mph. Even Apple, which may have the most sophisticated hardware security team on the planet, has faced some truly terrible security vulnerabilities. Regulators have taken note, and they are taking action. In September 2022, NIST fired a warning shot by publishing a technical report that surveyed the state of IoT security and made a series of recommendations. This was followed by a voluntary regulatory scheme—the Cyber Trust Mark, published by the FCC in the US—as well as a draft of the European Union’s upcoming Cyber Resilience Act (CRA). Set to begin rolling out in 2025, the CRA will create new cybersecurity requirements to sell a device in the single market. Standards bodies have not stayed idle. The Connectivity Standards Alliance published the IoT Device Security Specification in March of this year, after more than a year of work by its Product Security Working Group.


Australia revolutionises data management challenges

In Australia, the importance of data literacy is growing rapidly. It is now more essential than ever to be able to comprehend and effectively communicate data as valuable information. The significance of data literacy cannot be overemphasised. Highlighting the importance of data literacy across government agencies is key to unlocking the true power of data. Understanding which data to use for problem-solving, employing critical thinking to comprehend and tackle data strengths and limitations, strategically utilising data to shape policies and implement effective programmes, regulations, and services, and leveraging data to craft a captivating narrative are all essential components of this process. Nevertheless, the ongoing challenge lies in ensuring that employees have the ability to interpret and utilise data effectively. Individuals who are inexperienced with data may find it challenging to effectively work with data, comprehend intricate datasets, analyse patterns, and extract valuable insights. Organisations are placing a strong emphasis on data literacy initiatives, aiming to turn individuals with limited data knowledge into experts in the field. 


Navigating Architectural Change: Overcoming Drift and Erosion in Software Systems

Architectural drift involves the introduction of design decisions that were not part of the original architectural plan, yet these decisions do not necessarily contravene the foundational architecture. In contrast, architectural erosion occurs when new design considerations are introduced that directly conflict with or undermine the system's intended architecture, effectively violating its guiding principles. ... In software engineering terms, a system may start with a clean architecture but, due to architectural drift, evolve into a complex tangle of multiple architectural paradigms, inconsistent coding practices, redundant components, and dependencies. On the other hand, architectural erosion could be likened to making alterations or additions that compromise the structural integrity of the house. For instance, deciding to remove or alter key structural elements, such as knocking down a load-bearing wall to create an open-plan layout without proper support, or adding an extra floor without considering the load-bearing capacity of the original walls.


Strong CIO-CISO relations fuel success at Ally

We identify the value we are creating and capturing before we kick off a technology project, and it’s a joint conversation with the business. I don’t think it’s just the business responsibility to say my customer acquisition is going to go up, or my revenue is going to go up by X. There is a technology component to it, which is extremely critical, especially as a full-scale digital-only organization. What does it take for you to build the capability? How long will it take? How much does it cost and what does it cost to run it? ... Building a strong leadership team is critical. Empowering them is even more critical. When people talk about empowerment, they think it means I leave my leaders alone and they go do whatever they want. It’s actually the opposite. We have sensitive and conflict-filled conversations, and the intent of that is to make each other better. If I don’t understand how my leaders are executing, I won’t be able to connect the dots. It is not questioning what they’re doing; it’s asking questions for my learning so I can connect and share learnings from what other leaders are doing. That’s what leads us to preserving that culture.


To defend against disruption, build a thriving workforce

To build a thriving workplace, leaders must reimagine work, the workplace, and the worker. That means shifting away from viewing employees as cogs who hit their deliverables then turn back into real human beings after the day is done. Employees are now more like elite artists or athletes who are inspired to produce at the highest levels but need adequate time to recharge and recover. The outcome is exceptional; the path to getting there is unique. ... Thriving is more than being happy at work or the opposite of being burned out. Rather, one of the cornerstones of thriving is the idea of positive functioning: a holistic way of being, in which people find a purposeful equilibrium between their physical, mental, social, and spiritual health. Thriving is a state that applies across talent categories, from educators and healthcare specialists to data engineers and retail associates. ... In this workplace, people at every level are capable of being potential thought leaders who have influence through the right training, support, and guidance. They don’t have to be just “doers” who simply implement what others tell them to. 


Tips for Controlling the Costs of Security Tools

The total amount that a business spends on security tools can vary widely depending on factors like which types of tools it deploys, the number of users or systems the tools support and the pricing plans of tool vendors. But on the whole, it’s fair to say that tool expenditures are a significant component of most business budgets. Moody’s found, for example, that companies devote about 8% of their total budget to security. That figure includes personnel costs as well as tool costs, but it provides a sense of just how high security spending tends to be relative to overall business expenses. These costs are likely only to grow. IDC believes that total security budgets will increase by more than a third over the next few years, due in part to rising tool costs. This means that finding ways to rein in spending on security tools is important not just for reducing overall costs today, but also preventing cost overruns in the future. Of course, reducing spending can’t amount simply to abandoning critical tools or turning off important features.


UK Regulator Tells Platforms to 'Tame Toxic Algorithms'

The Office of Communications, better known as Ofcom, on Wednesday urged online intermediaries, which include end-to-end encrypted platforms such as WhatsApp, to "tame toxic algorithms." Ensuring recommender systems "do not operate to harm children" is among the measures the regulator set out in a draft proposal for regulations enacting the Online Safety Act, legislation the Conservative government approved in 2023 that is intended to limit children's exposure to damaging online content. The law empowers the regulator to order online intermediaries to identify and restrict pornographic or self-harm content. It also imposes criminal prosecution for those who send harmful or threatening communications. Instagram, YouTube, Google and Facebook are among the 100,000 web services that come under the scope of the regulation and are likely to be affected by the new requirements. "Any service which operates a recommender system and is at higher risk of harmful content should identify who their child users are and configure their algorithms to filter out the most harmful content from children's feeds and reduce the visibility of other harmful content," Ofcom said.


Businesses lack AI strategy despite employee interest — Microsoft survey

“While leaders agree using AI is a business imperative, and many say they won’t even hire someone without AI skills, they also believe that their companies lack a vision and plan to implement AI broadly; they’re stuck in AI inertia,” Colette Stallbaumer, general manager of Copilot and Cofounder of Work Lab at Microsoft, said in a pre-recorded briefing. “We’ve come to the hard part of any tech disruption, moving from experimentation to business transformation,” Stallbaumer said. While there’s clear interest in AI’s potential, many businesses are proceeding with caution with major deployments, say analysts. “Most organizations are interested in testing and deployment, but they are unsure where and how to get the most return,” said Carolina Milanesi, president and principal analyst at Creative Strategies. Security is among the biggest concerns, said Milanesi, “and until that is figured out, it is easier for organizations to shut access down.” As companies start to deploy AI, IT teams face significant demands, said Josh Bersin, founder and CEO of The Josh Bersin Company. 


Mayorkas, Easterly at RSAC Talk AI, Security, and Digital Defense

While acknowledging the increasingly ubiquitous use of AI in many services across the nation, Mayorkas commented on the advisory board’s conversation about leveraging that technology in cybersecurity. “It’s a very interesting discussion on what the definition of ‘safe’ is,” he said. “For example, most people now when they speak of the civil rights, civil liberties implications, categorize that under the responsible use of AI, but what we heard yesterday was an articulation of the fact that the civil liberties, civil rights implications of AI really are part and parcel of safety.” ... Technologies are shipped in ways that create risk and vulnerabilities, and they are configured and deployed in ways that are incredibly complex. “It’s eerily reminiscent of William Gibson's ‘Neuromancer,’” Krebs said. “When he talks about cyberspace, he said ‘the unthinkable complexity,’ and that’s what it's like right now to deploy and manage a large enterprise.” “We are just not sitting in place or standing in place because new technology is emerging on a regular basis,” he said. 



Quote for the day:

"Successful people do what unsuccessful people are not willing to do. Don't wish it were easier; wish you were better." -- Jim Rohn

Daily Tech Digest - April 27, 2024

AI twins and the digital revolution

The digital twin is designed to work across a single base station with a few mobile devices all the way up to hundreds of base stations with thousands of devices. “I would say the RF propagation piece is perhaps one of the most exciting areas apart from the data collection,” Vasishta noted. “The ability to simulate real antennas at scale, including interference and other elements, is where we’ve really spent the most time to make sure that it is an accurate implementation.” The platform also includes a software-defined, full RAN stack to allow researchers and members to customise, programme and test 6G network components in real time. Vendors, such as Nokia, can bring their own RAN stack to the platform, but Nvidia’s open RAN compliant stack is provided. Vasishta added that users of the research platform can collect data from their digital twin within their channel model, which allows them to train for optimisation. “It now allows you to use AI and machine learning in conjunction with a digital twin to fully simulate an environment and create site-specific channel models so you can always have best connectivity or lowest power consumption, for instance,” he said.


The temptation of AI as a service

AWS has introduced a new feature aimed at becoming the prime hub for companies’ custom generative AI models. The new offering, Custom Model Import, launched on the Amazon Bedrock platform (enterprise-focused suite of AWS) and provides enterprises with infrastructure to host and fine-tune their in-house AI intellectual property as fully managed sets of APIs. This move aligns with increasing enterprise demand for tailored AI solutions. It also offers tools to expand model knowledge, fine-tune performance, and mitigate bias. All of these are needed to drive AI for value without increasing the risk of using AI. In the case of AWS, the Custom Model Import allows model integrations into Amazon Bedrock, where they join other models, such as Meta’s Llama 3 or Anthropic’s Claude 3. This provides AI users the advantage of managing their models centrally alongside established workflows already in place on Bedrock. Moreover, AWS has announced enhancements to the Titan suite of AI models. The Titan Image Generator, which translates text descriptions into images, is shifting to general availability.
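
For a sense of how such an imported model is used, the sketch below is a hypothetical example: after Custom Model Import, the model is addressed through the Bedrock runtime like other hosted models. The model ARN is a placeholder and the request body shape depends on the model you imported, so treat this as an assumption-laden sketch rather than a definitive recipe.

```python
# Hypothetical sketch: invoking an imported custom model via the Bedrock runtime.
import json
import boto3

# Placeholder ARN; Custom Model Import would give you the real identifier.
MODEL_ARN = "arn:aws:bedrock:us-east-1:123456789012:imported-model/example-id"

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.invoke_model(
    modelId=MODEL_ARN,
    # The body schema (prompt field, parameters) depends on the imported model.
    body=json.dumps({"prompt": "Summarize our Q3 incident reports.", "max_tokens": 256}),
    contentType="application/json",
    accept="application/json",
)
print(json.loads(response["body"].read()))
```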


Overwhelmed? 6 ways to stop small stresses at work from becoming big problems

"Someone once used the analogy that you have crystal balls and bouncy balls. If you drop your crystal ball, it shatters, and you'll never be able to get it back. Whereas if you drop your bouncy ball, it will bounce back." "I think you need to work out the crystal balls to prioritize because if you drop that ball, it's gone. For me, it always helps to take stuff off the priority list. And I think that approach helps with work/life balance. Sometimes, it's important to choose." ... "If we have a small problem in one store, and we pick up that's prevalent in all stores, collectively the impact is significant. So, that's why I get to the root cause as quickly as possible."  "And then you understand what's going on rather than just trying to stick a plaster over what appears to be a cut, but is something quite a bit deeper underneath." ... "If you look at something in darkness, it can feel pretty overwhelming quickly. So, giving a problem focus and attention, and getting some people around it, tends to put the issue in perspective."  ... "It's nice to have someone who can point out to you, 'You're ignoring that itch, why don't you do something about it?' I've found it's good to speak with an expert with a different perspective."


15 Characteristics of (a Healthy) Organizational Culture

Shared common values refer to the fundamental beliefs and principles that an organization adopts as its foundation. These values act as a compass, guiding behaviors, decision-making, and interactions both within the organization and with external stakeholders. They help create a cohesive culture by aligning employees’ actions with the company’s core mission and vision. ... A clear purpose and direction align the organization’s efforts and goals. This clarity helps unite the team, focusing their efforts on achieving specific objectives and guiding strategic planning and daily operations. ... Transparent and regular communication supports openly sharing information and feedback throughout the organization. This practice fosters trust, helps in early identification of issues, encourages collaboration, and ensures that everyone is informed and aligned with the organization’s goals. ... Collaboration and teamwork underpin a cooperative environment where groups work together to achieve collective objectives. This approach enhances problem-solving, innovation, and efficiency, while also building a supportive work environment.


Palo Alto Networks’ CTO on why machine learning is revolutionizing SOC performance

When it comes to data center security, you have to do both. You have to keep them out. And that’s the role of traditional cybersecurity. So network security, including, of course, the security between the data center and Ethernet, internal security for segmentation. It includes endpoint security for making sure that vulnerabilities aren’t being exploited and malware isn’t running. It includes identity and access management. Or even privileged access management (PAM), which we don’t do. We don’t do identity access or PAM. It includes many different things. This is about keeping them out or not letting them walk inside laterally. And then the second part of it which, which goes to your question, is now let’s assume they are inside and all defenses have failed. It’s the role of the SOC to look for them. We call it hunting, the hunting function in the SOC. How do we do that? You need machine learning, not [large language models] LLMs, or GPT, but real, traditional machine learning, to do both, both to keep them out and also both to find them if they’re already inside. So we can talk about both and how we use machine learning here and how we use machine learning there.


From hyperscale to hybrid: unlocking the potential of the cloud

To optimize their cloud adoption strategies, and ensure they architect the best fit for their needs, organizations will first need to undertake detailed assessments of their workloads to determine which cloud combinations to go for. Weighing up which cloud options are most appropriate for which workloads isn't always an easy task. Ideally, organizations should utilize a cloud adoption framework to help scope out the overall security and business drivers that will influence their cloud strategy decision-making. By helping organizations identify and mitigate risks and ensure compliance as they move ahead, these frameworks make it easier to proceed confidently with cloud adoption plans. Since every infrastructure strategy will have unique requirements, including tailored security measures, leveraging the expertise of cloud security professionals will also prove invaluable for ensuring appropriate security measures are in place. Similarly, organizations will be able to gain a deeper understanding of how best to orchestrate their on-premises, private, and public clouds in a unified and cost/performance-optimized manner.


No Fear, AI is Here: How to Harness AI for Social Good

We must proactively think about how our organizations can responsibly leverage AI for good. Our role is to offer our teams the support and guidance required to harness AI's full power in ways big and small to inspire positive change, ensuring fear doesn't override optimism. While AI has an undeniable advantage in its ability to outperform humans on certain tasks, it cannot replace the power of human creativity, perspectives, and deep insight. ... We only have a finite amount of time to address climate change and related issues such as poverty and inequity. To get there, we're going to have to try. And then try again. And again. Though it will be an uphill climb, AI can help us climb faster -- and explore as many options as we possibly can, as quickly as we can -- if we use it responsibly. The key is for tech impact leaders to bring a human-centric perspective to their company's investments in and use of AI technology, ensuring that their strategies don't lead to unintended consequences for employment. Don't let fear prevent you from getting all the help you can from the most powerful technology available. Your team, and the world, need you to be fearless.


Data for AI — 3 Vital Tactics to Get Your Organization Through the AI PoC Chasm

We now have the opportunity to automate manual heavy lifting in data prep. AI models can be trained to detect and strip out sensitive data, identify anomalies, infer records of source, determine schemas, eliminate duplication, and crawl over data to detect bias. There is an explosion of new services and tools available to take the grunt work out of data prep and keep the data bar high. By automating these labor-intensive tasks, AI empowers organizations to accelerate data preparation, minimize errors, and free up valuable human resources to focus on higher-value activities, ultimately enhancing the efficiency and effectiveness of AI initiatives. ... AI is being experimented with and adopted broadly across organizations. With so much activity and interest, it is difficult to centralize work, and often centralization creates bottlenecks that slow down innovation. Encouraging decentralization and autonomy in delivering AI use cases is beneficial as it increases capacity for innovation across many teams, and embeds work into the business with a focus on specific business priorities. 
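As a loose sketch of the kind of grunt work being automated (not any particular vendor's tooling), the snippet below de-duplicates records, redacts columns that look like email addresses, and flags a crude numeric outlier with pandas; the column names, sample data, and PII rule are illustrative assumptions.

```python
import pandas as pd

df = pd.DataFrame({
    "customer": ["ann", "ann", "bo", "cy"],
    "email":    ["ann@x.io", "ann@x.io", "bo@y.io", "cy@z.io"],
    "spend":    [120.0, 120.0, 95.0, 50_000.0],
})

# 1. Remove exact duplicate records.
df = df.drop_duplicates().reset_index(drop=True)

# 2. Redact columns whose values all look like email addresses (toy PII rule).
email_pattern = r"^[^@\s]+@[^@\s]+\.[^@\s]+$"
for col in df.columns:
    if df[col].astype(str).str.fullmatch(email_pattern).all():
        df[col] = "<redacted>"

# 3. Flag numeric values far above the column median as likely outliers.
df["spend_outlier"] = df["spend"] > 10 * df["spend"].median()

print(df)
```

A production pipeline would swap the toy rules for trained models and run these checks continuously as data lands, but the shape of the automation is the same.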


Blockchain Distributed Ledger Market 2024-2032

Organizations are leveraging blockchain's decentralized and immutable ledger capabilities to enhance transparency, security, and efficiency in their operations. In addition, the growing demand for secure and transparent transactions, coupled with rising concerns over data privacy and cybersecurity, is driving the adoption of blockchain distributed ledger solutions. Businesses and consumers alike are increasingly turning to blockchain technology to safeguard their data and assets against cyber threats and fraudulent activities. Moreover, the proliferation of digitalization and the internet of things (IoT) is further driving market growth by creating a demand for reliable and tamper-proof data storage and transmission systems. Blockchain's ability to provide a decentralized and verifiable record of transactions makes it well-suited for IoT applications, such as supply chain tracking, smart contracts, and secure data sharing. Additionally, the emergence of regulatory frameworks and standards for blockchain technology adoption is providing a favorable environment for market expansion, as it instills confidence among businesses and investors regarding compliance and legal aspects.


The real cost of cloud banking

Although compute and memory costs have come down and capacities have gone up over the years, the fact is that an inefficient piece of software will still cost you more today than a well-designed, optimised one. Previously, it was a simple case of running your software and checking how much memory you were using. With cloud, the pricing may be transparent, but the options available and the cost calculation are much more complex. Such is the complexity involved with cloud costs that many banks bring in specialists to help them optimise their spend. In fact, it's not only banks, but banking software providers too. Cloud cost optimisation is a fine art that requires time and expertise to fully understand. It would be easy to blame developers, but I've never seen business requirements that state that applications should minimise their memory use or use the least expensive type of storage. I've been in the position of "the business" needing to make decisions on requirements for storage options, and these decisions aren't easy, even for someone with a technical background. In defence of cloud providers, their pricing is transparent.
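A toy calculation makes the point about complexity: even a single storage decision mixes per-GB, per-request, and retrieval pricing, so the "cheapest" option depends entirely on the access pattern. All prices and workload figures below are invented purely for illustration.

```python
tiers = {
    #            $/GB-month  $/1k requests  $/GB retrieved
    "hot":      (0.023,      0.005,         0.00),
    "cool":     (0.010,      0.010,         0.01),
    "archive":  (0.002,      0.050,         0.02),
}

def monthly_cost(tier, stored_gb, requests_k, retrieved_gb):
    per_gb, per_1k_req, per_gb_out = tiers[tier]
    return stored_gb * per_gb + requests_k * per_1k_req + retrieved_gb * per_gb_out

# A workload that holds a lot of data but also reads it back heavily:
# the tier that is cheapest to store in ends up the most expensive overall.
for tier in tiers:
    cost = monthly_cost(tier, stored_gb=10_000, requests_k=500, retrieved_gb=12_000)
    print(f"{tier:8s} ${cost:,.2f}/month")
```

Multiply this by dozens of services, regions, and discount models and it is easy to see why specialists are brought in.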



Quote for the day:

“The road to success and the road to failure are almost exactly the same.” -- Colin R. Davis

Daily Tech Digest - March 30, 2024

The future of Banking as a Service: Banking trends 2024

Reflecting on the banking industry trends and emerging technologies explored, what does the future hold for Banking as a Service? And is the banking system changing as a result? The number of unbanked people remains high and financial inclusion remains low, but Banking as a Service systems are helping to change that. While their impact on the customer isn't direct, they enable non-bank providers to explore new and untapped markets and expand their embedded offerings to underserved consumers. With these non-bank providers, fueled by BaaS, consumers aren't restricted to traditional banking requirements and now have a wider variety of payment and credit options. As this sector progresses, we could see access to and inclusivity of financial services increase, with more personalized finance solutions diversifying the industry's offering. The adoption of Banking as a Service by traditionally non-financial entities is also a top area to watch. Companies in areas such as telecommunications, energy and utilities, and even education are integrating financial services into their systems, streamlining transactions and improving customer experience.


From Despair to Disruption: Zafran Takes on Cyber Mitigation

Zafran aims to close the gap between threat detection and remediation by anticipating and neutralizing threats before they can be exploited by attackers, according to Yashar. She wants to use the funding led by Sequoia Capital and Cyberstarts to make Zafran's platform more scalable, integrate AI to refine the mitigation knowledge base, and assemble a team of top-tier developers, researchers and analysts. "Raising is not hard when you're solving a real pain," Yashar said. "The biggest money is going toward the platform and hiring the best talent." On the risk assessment side, Zafran wants to take a customer's existing controls into consideration when determining which vulnerabilities pose the biggest risk to them, which Yashar said will help organizations optimize their return on investment. The company's dashboard helps customers see which risks are most exploitable, as well as what risk-reduction activity they could carry out with their existing controls. Zafran has built a war game simulation that allows customers to check how well their cyber platform defends against existing threats and how much risk is reduced by paying for additional controls.


Infrastructure as Code Is Dead: Long Live Infrastructure from Code

Despite the clear benefits to scale and automation that come with IaC, it remains very complex because cloud infrastructure is complex and constantly changing. As more teams are involved with cloud provisioning, they have to agree how best to use IaC tools and learn the nuances of each one they choose. With these added pressures, fresh solutions promising to improve the developer experience without increasing risk are emerging. To create the next generation of solutions, organizations need to understand where the problems truly lie for development, platform engineering and security teams. ... With multiple tools and frameworks to choose from, learning new languages and tools can be difficult for teams whose experience stems from manual infrastructure provisioning or writing application code. In addition to requiring a new programming language and interface, most IaC tools define and support infrastructure and resource management using declarative languages. That means teams must learn how to define the desired state of the infrastructure environment rather than outlining the steps required to achieve a result, a challenge for new and experienced programmers alike.
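To make the declarative point concrete, here is a toy model (not any real IaC tool's API) in which the team describes only the desired end state and a planner works out the create/update/delete steps needed to reconcile it with what currently exists.

```python
# Minimal sketch of the declarative idea behind most IaC tools: you describe
# the desired end state, and the tool computes the actions needed to get
# there. Resource names and specs below are hypothetical.
desired = {"vm-web": {"size": "small"}, "vm-db": {"size": "large"}}
current = {"vm-web": {"size": "small"}, "vm-old": {"size": "small"}}

def plan(desired, current):
    """Return the actions needed to move `current` to `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("update", name, spec))
    for name in current:
        if name not in desired:
            actions.append(("delete", name))
    return actions

print(plan(desired, current))
# [('create', 'vm-db', {'size': 'large'}), ('delete', 'vm-old')]
```

The mental shift the article describes is exactly this: teams edit the `desired` description rather than scripting the individual steps, and the tooling owns the reconciliation.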


The most versatile, capable category twisted pair ever

There are still many situations that don't require the versatility and performance of Cat6A throughout the entire network, but do require at least multi-gigabit speeds in specific areas of the network for specific applications, without the hassle of mitigation efforts. The GigaSPEED XL5 solution, a new addition to the GigaSPEED family, addresses the growing sweet spot for a Category 6 solution that can support the intermediate 2.5 and 5.0 GbE bandwidth demands, guaranteed and without mitigation. GigaSPEED XL5 cables can support four connections in a 100-meter channel running 5 GbE, so it's ideal for connecting wireless access points located in the ceiling. And because the cable diameter is only slightly larger than GigaSPEED XL cables, the installation tools and procedures are the same as well. Some companies are now beginning the transition from Wi-Fi 6 to the more bandwidth-heavy Wi-Fi 6E. It will be several more years before the migration to Wi-Fi 7 and its 10+ GbE demands. As a result, the GigaSPEED XL5 solution has an important role to play in enterprise networks for many years to come.


OpenAI Tests New Voice Clone Model

With hyper-realistic voice generation, a criminal could trick family members into scams or worse. And with an election cycle coming up, concerns about deepfakes used to spread misinformation are growing. “This is a massive, dual-edged sword,” Saxena tells InformationWeek in a phone interview. “This could be another nail in the coffin for truth and data privacy. This adds yet more of an unknown dynamic where you could have something that can create a lot of emotional distress and psychological effects. But I can also see a lot of positives. It all depends on how it gets regulated.” ... Max Ball, a principal analyst at Forrester, says voice cloning software already exists, but the efficiency of OpenAI’s model could be a game-changer. “It’s a pretty strong step in a couple ways,” Ball tells InformationWeek in an interview. “Today, from what the vendors are showing me, you can do a custom voice, but it takes 15-20 minutes of voice to be able to train it. While 15 minutes doesn’t sound like a lot of time, it’s tough to get anyone to sit down for 15 minutes during a day of work.”


How to Tame Technical Debt in Software Development

Huizendveld provided some heuristics that have helped him tame technical debt:
- If you can fix it within five minutes, then you should.
- Try to address technical debt by improving your domain model. If that is too involved, you could resort to a technical hack. If even that is too involved, try to at least automate the solution. And if that is still too difficult, make a checklist for the next time.
- Agree on a timebox with the team for the improvement you introduce. How much time are you willing to invest in a small improvement? That defines your timebox. It is then up to you and the team to honour that timebox; if you exceed it, make a checklist and move on.
- Don't fix it yourself if it can be fixed by machines.
- If it is messy because you have a lot of debt, then make it look messy. Please don't make a tidy list of your technical debt; the visual should inspire change.
- Only people with skin in the game are allowed to pay off debt, in order to prevent solutions that don't work in practice.


How New Tech Is Changing Banking

At its core, blockchain provides a shared record of transactions that is updated in real time. This allows complete transaction transparency while eliminating inefficiencies and risks associated with manual processes. All participants in a blockchain network can view a single source of truth. For banking, blockchain delivers enhanced security and lower fraud risk. Records cannot be altered without agreement from all network participants, preventing falsified or duplicated transactions. Data is also cryptographically secured and distributed across the network. Even if one location is compromised, the data remains validated and secured. Blockchain also brings new levels of efficiency to banking. With an immutable record and smart contracts that execute automatically, blockchain eliminates laborious reconciliation and confirmation steps. Settlement times can be reduced from days to minutes. These efficiencies translate into lower operational costs for banks. By removing intermediaries and allowing peer-to-peer transactions, blockchain also opens up new opportunities in banking. From micropayments to decentralized finance, blockchain enables models that are impossible with traditional infrastructure.
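A minimal sketch of the tamper-evidence the article describes: each ledger entry is hashed together with its predecessor, so altering an old transaction invalidates everything after it. Real blockchain networks layer consensus, signatures, and distribution across participants on top of this idea; the code is illustration only.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger: list, payment: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"payment": payment, "prev": prev}
    entry["hash"] = entry_hash({"payment": payment, "prev": prev})
    ledger.append(entry)

def verify(ledger: list) -> bool:
    prev = "genesis"
    for e in ledger:
        if e["prev"] != prev or e["hash"] != entry_hash({"payment": e["payment"], "prev": e["prev"]}):
            return False
        prev = e["hash"]
    return True

ledger = []
append(ledger, {"from": "A", "to": "B", "amount": 100})
append(ledger, {"from": "B", "to": "C", "amount": 40})
print(verify(ledger))                  # True

ledger[0]["payment"]["amount"] = 1000  # try to falsify an old transaction
print(verify(ledger))                  # False: the chain no longer validates
```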


Cloud Email Filtering Bypass Attack Works 80% of the Time

After examining Sender Policy Framework (SPF)-specific configurations for 673 .edu domains and 928 .com domains that were using either Google or Microsoft email servers along with third-party spam filters, the researchers found that 88% of Google-based email systems were bypassed, while 78% of Microsoft systems were. The risk is higher when using cloud vendors, since a bypass attack isn't as easy when both filtering and email delivery are housed on premises at known and trusted IP addresses, they noted. The paper offers two major reasons for these high failure rates: First, the documentation for properly setting up both the filtering and email servers is confusing and incomplete, and is often ignored, misunderstood, or not followed. Second, many corporate email managers err on the side of making sure that messages reach recipients, for fear of deleting valid ones if they institute too strict a filter profile. "This leads to permissive and insecure configurations," according to the paper. ... the fact that all three of the main email security protocols (SPF, DMARC, and DKIM) need to be configured to be truly effective at stopping spam.
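As a small, hedged example of checking one of those protocols, the snippet below uses the dnspython library to fetch a domain's published SPF record and report whether it ends in a hard-fail "-all"; "example.com" is a placeholder, and a real audit would also review DKIM and DMARC and whether delivery is restricted to the filtering service's IPs.

```python
import dns.resolver  # pip install dnspython

def spf_record(domain: str) -> str | None:
    """Return the domain's published SPF TXT record, if any."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=spf1"):
            return txt
    return None

record = spf_record("example.com")
if record is None:
    print("no SPF record published")
elif record.endswith("-all"):
    print("hard-fail policy:", record)
else:
    print("permissive or soft-fail policy, worth reviewing:", record)
```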


RPA promised to solve complex business workflows. AI might take its job

Like most enterprise software companies, RPA vendors are experimenting with generative AI technologies. "Generative AI is poised to amplify the accessibility and scalability of RPA, mitigating the predominant obstacles to entry, namely the need for specialized developers and the risk of bot failure," Saxena said. Alex Astafyev, co-founder and chief business development officer at ElectroNeek, agreed that generative AI will make it much easier to use RPA technology inside companies that have their expensive software developers committed to other projects. "While many RPA platforms follow a low-code approach, thus allowing non-tech users to build automation bots, the knowledge of variables and programming logic might be needed in certain cases. Integration of AI lowers the barrier even further," he said. ... Generative AI technology will also allow RPA systems to deal with complicated problems described with natural language inputs, Pandiarajan said. “In the near future, it is conceivable that you could ask a bot about the status of a customer's package in the fulfillment process, and the AI would understand the process and provide real-time updates," he said.


Why CDOs Need AI-Powered Data Management to Accelerate AI Readiness in 2024

Historically, data and AI governance have been marred by complexity, hindered by siloed systems and disparate standards. However, the urgency of the AI-driven future demands a paradigm shift. Enter modern cloud-native integrated tools – the catalysts for simplifying the adoption of data and AI governance. Pezzetta wishes to leverage AI to clean data and look for anomalies. By leveraging a modernized solution approach, organizations can streamline governance processes, breaking down silos and harmonizing standards across disparate datasets. These tools offer scalability, flexibility, and interoperability, empowering stakeholders to navigate the complexities of data and AI governance with ease. ... “We need to bring AI into our processes. Therefore, we need to define governance processes to develop AI and data together with hubs in business on centralized platforms with integration patterns. I would love to get AI functions in ETL (extract, transform, and load) processes. I hope that we start to use AI in the data pipelines to enhance data quality,” Zimmer adds.



Quote for the day:

“When you fail, that is when you get closer to success.” -- Stephen Richards