
Daily Tech Digest - March 03, 2025


Quote for the day:

“If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work.” -- Thomas J. Watson




How to Create a Winning AI Strategy

“A winning AI strategy starts with a clear vision of what problems you’re solving and why,” says Surace. “It aligns AI initiatives with business goals, ensuring every project delivers measurable value. And it builds in agility, allowing the organization to adapt as technology and market conditions evolve.” ... AI is also not a solution to all problems. Like any other technology, it’s simply a tool that needs to be understood and managed. “Proper AI strategy adoption will require iteration, experimentation, and, inevitably, failure to end up at real solutions that move the needle. This is a process that will require a lot of patience,” says Lionbridge’s Rowlands-Rees. “[E]veryone in the organization needs to understand and buy in to the fact that AI is not just a passing fad -- it’s the modern approach to running a business. The companies that don’t embrace AI in some capacity will not be around in the future to prove everyone else wrong.” Organizations face several challenges when implementing AI strategies. For example, regulatory uncertainty is a significant hurdle and navigating the complex and evolving landscape of AI regulations across different jurisdictions can be daunting. ... “There’s a gap between AI’s theoretical potential and its practical business application. Companies invest millions in AI initiatives that prioritize speed to market over actual utility,” Palmer says.


Work-Life Balance: A Practitioner Viewpoint

Organisation policymakers must ensure well-funded preventive health screening at all levels so those with identified health risks can be advised and guided suitably on their career choices. They can be helped to step back on their career accelerators, and their needs can be accommodated in the best possible manner. This requires a mature HR policy-making and implementation framework where identifying problems and issues does not negatively impact the employees' careers. Deploying programs that help employees identify and overcome stress issues will be beneficial. A considerable risk for individuals is adopting negative means like alcohol, tobacco, or even retreating into a shell to address their stress issues, and that can take an enormous toll on their well-being. Kindling purposeful passion alongside work is yet another strategy. In today's world, an urgent task assignment is just a phone call away. One can have some kind of purposeful passion that keeps one engaged alongside work. This passion will have its purpose; one can fall back on it to keep oneself together and draw inspiration. Purposeful passion can include things such as acquiring a new skill in a sport, learning to play a musical instrument, learning a new dance form, playing with kids, spending quality time with family members in deliberate and planned ways, learning meditation, environmental protection and working for other social causes.


The 8 new rules of IT leadership — and what they replace

The CIO domain was once confined to the IT department. But to partner tightly with and co-lead the business, CIOs must increasingly extend their expertise across all departments. “In the past they weren’t as open to moving out of their zone. But the role is becoming more fluid. It’s crossing product, engineering, and into the business,” says Erik Brown, an AI and innovation leader in the technology and experience practice at digital services firm West Monroe. Brown compares this new CIO to startup executives, who have experience and knowledge across multiple functional areas, who may hold specific titles but lead teams made up of workers from various departments, and who will shape the actual strategy of the company. “The CIOs are not only seeing strategy, but they will inform it; they can shape where the business is moving, and then they can take that to their teams and help them brainstorm how to support that. And that helps build more impactful teams,” Brown says. He continues: “You look at successful leaders of today and they’re all going to have a blended background. CIOs are far broader in their understanding, and where they’re more shallow, they’ll surround themselves with deputies that have that depth. They’re not going to assume they’re an expert in everything. So they may have an engineering background, for example, and they’ll surround themselves with those who are more experienced in that.”


Managing AI APIs: Best Practices for Secure and Scalable AI API Consumption

Managing AI APIs presents unique challenges compared to traditional APIs. Unlike conventional APIs that primarily facilitate structured data exchange, AI APIs often require high computational resources, dynamic access control and contextual input filtering. Moreover, large language models (LLMs) introduce additional considerations such as prompt engineering, response validation and ethical constraints that demand a specialized API management strategy. To effectively manage AI APIs, organizations need specialized API management strategies that can address unique challenges such as model-specific rate limiting, dynamic request transformations, prompt handling, content moderation and seamless multi-model routing, ensuring secure, efficient and scalable AI consumption. ... As organizations integrate multiple external AI providers, egress AI API management ensures structured, secure and optimized consumption of third-party AI services. This includes governing AI usage, enhancing security, optimizing cost and standardizing AI interactions across multiple providers. Below are some best practices for exposing AI APIs via egress gateways: Optimize Model Selection: Dynamically route requests to AI models based on cost, latency or regulatory constraints. 
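To make the "Optimize Model Selection" practice concrete, here is a minimal sketch of cost- and latency-aware routing at an egress gateway. The provider names, prices, latency figures and region labels are illustrative assumptions, not figures from the article; a real gateway would source them from live metrics and contracts.

```python
# Minimal sketch of egress-side model routing. Providers, prices, latencies
# and regions below are assumed values for illustration only.
from dataclasses import dataclass

@dataclass
class ModelRoute:
    name: str
    cost_per_1k_tokens: float   # USD, assumed figures
    p95_latency_ms: int
    allowed_regions: set

ROUTES = [
    ModelRoute("provider-a/large", 0.030, 1200, {"us", "eu"}),
    ModelRoute("provider-b/medium", 0.008, 600, {"us", "eu", "apac"}),
    ModelRoute("self-hosted/small", 0.002, 300, {"eu"}),
]

def pick_route(region: str, max_latency_ms: int, prefer_cheapest: bool = True):
    """Filter routes by region and latency budget, then pick by cost or speed."""
    candidates = [
        r for r in ROUTES
        if region in r.allowed_regions and r.p95_latency_ms <= max_latency_ms
    ]
    if not candidates:
        raise RuntimeError("no model satisfies the routing constraints")
    key = (lambda r: r.cost_per_1k_tokens) if prefer_cheapest else (lambda r: r.p95_latency_ms)
    return min(candidates, key=key)

print(pick_route(region="eu", max_latency_ms=800).name)  # -> self-hosted/small
```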


Charting the AI-fuelled evolution of embedded analytics

First of all, the technical requirements are high. To fit today’s suite of business tools, embedded analytics have to be extremely fast, lightweight, and very scalable, otherwise they risk dragging down the performance of the entire app. “As development and the web moves to single-page apps using frameworks like Angular and React, it becomes more and more critical that the embedded objects are lightweight, efficient, and scalable. In terms of embedded implementations for the developer, that’s probably one of the biggest things to look out for,” advises Perez. On top of that, there’s security, which is “another gigantic problem and headache for everybody,” observes Perez. “Usually, the user logs into the hosting app and then they need to query data relevant to them, and that involves a security layer.” Balancing the need for fast access to relevant data against the needs for compliance with data privacy regulations and security for your own proprietary information can be a complex juggling act. ... Additionally, the main benefit of embedded analytics is that it makes insights easily accessible to line-of-business users. “It should be very easy to use, with no prior training requirements, it should accept and understand all kinds of requests, and more importantly, it needs to seamlessly work on the company’s internal data,” says Perez.


The Ransomware Payment Ban – Will It Work?

A complete, although targeted, ban on ransom payments for public sector organisations is intended to remove cybercriminals’ financial motivation. However, without adequate investment in resilience, these organisations may be unable to recover as quickly as they need to, putting essential services at risk. Many NHS healthcare providers and local councils are already dealing with outdated infrastructure and cybersecurity staff shortages. If they are expected to withstand ransomware attacks without the option of paying, they must be given the resources, funding, and support to defend themselves and recover effectively. A payment ban may disrupt criminal operations in the short term. However, it doesn’t address the root of the issue – the attacks will persist, and vulnerable systems remain an open door. Cybercriminals are adaptive. If one revenue stream is blocked, they’ll find other ways to exploit weaknesses, whether through data theft, extortion, or targeting less-regulated entities. The requirement for private organisations to report payment intentions before proceeding aims to help authorities track ransomware trends. However, this approach risks delaying essential decisions in high-pressure situations. During a ransomware crisis, decisions must often be made in hours, if not minutes. Adding bureaucratic hurdles to these critical moments could exacerbate operational chaos.


The Modern CIO: Architect of the Intelligent Enterprise

Moving forward, traditional technology-driven CIOs will likely continue to lose leadership influence and C-suite presence as more strategic, business-focused CxOs move in. “There is a growing divergence. And the CIO that plays more of a modern CTO role will not have a seat at the table,” Clydesdale-Cotter said. This increased business focus demands that CIOs not only possess a broad and deep technical understanding of how new technologies reshape their company’s relationship with the broader market and how the business operates, but also command fluency in the vertical markets of their business and take accountability not just for the ROI on digital initiatives but for the broader success of the business as well. There’s probably no technology having a more significant impact today than AI adoption. ... The maturation of generative AI is moving CIOs from managing pilot deployments to enterprise-scale initiatives. Starting this year, analysts expect about half of CIOs to increasingly prioritize fostering data-centric cultures, ensuring clean, accessible datasets to train their AI models. However, challenges persist: a 2024 Deloitte survey found that 59% of employees resist AI adoption due to job security fears, requiring CIOs to lead change management programs that emphasize upskilling.


7 Steps to Building a Smart, High-Performing Team

Hiring is just the beginning — training is where the real magic happens. One of the biggest mistakes I see business owners make is throwing new hires into the deep end without proper onboarding. ... A strong team is built on clarity. Employees should know exactly what is expected of them from day one. Clear role definitions, performance benchmarks and a structured feedback system help employees stay aligned with company goals. Peter Drucker, often called the father of modern management, once said, "What gets measured gets managed." Establishing key performance indicators (KPIs) ensures that every team member understands how their work contributes to the company's broader objectives. ... Just like in soccer, some players will need a yellow card — a warning that performance needs to improve. The best teams address underperformance before it becomes a chronic issue. A well-structured performance review system, including monthly check-ins and real-time feedback, helps keep employees on track. A study from MIT Sloan Management Review found that teams that receive continuous feedback perform 22% better than those with annual-only reviews. If an employee continues to underperform despite clear feedback and support, it may be time for the red card — letting them go. 


How eBPF is changing container networking

eBPF is revolutionary because it works at the kernel level. Even though containers on the same host have their own isolated view of user space, says Rice, all containers and the host share the same kernel. Applying networking, observability, or security features here makes them instantly available to all containerized applications with little overhead. “A container doesn’t even need to be restarted, or reconfigured, for eBPF-based tools to take effect,” says Rice. Because eBPF operates at the kernel level to implement network policies and operations such as packet routing, filtering, and load balancing, it’s better positioned than other cloud-native networking technologies that work in the user space, says IDC’s Singh. ... “eBPF comes with overhead and complexity that should not be overlooked, such as kernel requirements, which often require newer kernels, additional privileges to run the eBPF programs, and difficulty debugging and troubleshooting when things go wrong,” says Sun. A limited pool of eBPF expertise is available for such troubleshooting, adding to the hesitation. “It is reasonable for service mesh projects to continue using and recommending iptables rules,” she says. Meta’s use of Cilium netkit across millions of containers shows eBPF’s growing usage and utility.
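As a concrete illustration of that kernel-level vantage point, here is a minimal sketch using the bcc Python bindings: a single kprobe observes TCP connect attempts from every container on the host without restarting or reconfiguring any of them. It assumes bcc is installed and the script runs with root privileges; the traced kernel function is a common bcc example, not something named in the article.

```python
# Minimal bcc sketch of kernel-level visibility shared by host and containers:
# trace every outbound TCP connect attempt. Requires bcc and root privileges.
from bcc import BPF

program = r"""
int trace_connect(void *ctx) {
    bpf_trace_printk("tcp_v4_connect called\n");
    return 0;
}
"""

b = BPF(text=program)
b.attach_kprobe(event="tcp_v4_connect", fn_name="trace_connect")
print("Tracing tcp_v4_connect... Ctrl-C to stop")
b.trace_print()   # events fire for every container and the host alike
```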


If Architectural Experimentation Is So Great, Why Aren’t You Doing It?

Architectural experimentation is important for two reasons: For functional requirements, MVPs are essential to confirm that you understand what customers really need. Architectural experiments do the same for technical decisions that support the MVP; they confirm that you understand how to satisfy the quality attribute requirements for the MVP. Architectural experiments are also important because they help to reduce the cost of the system over time. This has two parts: you will reduce the cost of developing the system by finding better solutions, earlier, and by not going down technology paths that won’t yield the results you want. Experimentation also pays for itself by reducing the cost of maintaining the system over time by finding more robust solutions. Ultimately running experiments is about saving money - reducing the cost of development by spending less on developing solutions that won’t work or that will cost too much to support. You can’t run experiments on every architectural decision and eliminate the cost of all unexpected changes, but you can run experiments to reduce the risk of being wrong about the most critical decisions. While stakeholders may not understand the technical aspects of your experiments, they can understand the monetary value.


Daily Tech Digest - February 04, 2025


Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie


Technology skills gap plagues industries, and upskilling is a moving target

“The deepening threat landscape and rapidly evolving high-momentum technologies like AI are forcing organizations to move with lightning speed to fill specific gaps in their job architectures, and too often they are stumbling,” said David Foote, chief analyst at consultancy Foote Partners. To keep up with the rapidly changing landscape, Gartner suggests that organizations invest in agile learning for tech teams. “In the context of today’s AI-fueled accelerated disruption, many business leaders feel learning is too slow to respond to the volume, variety and velocity of skills needs,” said Chantal Steen, a senior director in Gartner’s HR practice. “Learning and development must become more agile to respond to changes faster and deliver learning more rapidly and more cost effectively.” Studies from staffing firm ManpowerGroup, hiring platform Indeed, and Deloitte consulting show that tech hiring will focus on candidates with flexible skills to meet evolving demands. “Employers know a skilled and adaptable workforce is key to navigating transformation, and many are prioritizing hiring and retaining people with in-demand flexible skills that can flex to where demand sits,” said Jonas Prising, ManpowerGroup chair and CEO.


Mixture of Experts (MoE) Architecture: A Deep Dive & Comparison of Top Open-Source Offerings

The application of MoE to open-source LLMs offers several key advantages. Firstly, it enables the creation of more powerful and sophisticated models without incurring the prohibitive costs associated with training and deploying massive, single-model architectures. Secondly, MoE facilitates the development of more specialized and efficient LLMs, tailored to specific tasks and domains. This specialization can lead to significant improvements in performance, accuracy, and efficiency across a wide range of applications, from natural language translation and code generation to personalized education and healthcare. The open-source nature of MoE-based LLMs promotes collaboration and innovation within the AI community. By making these models accessible to researchers, developers, and businesses, MoE fosters a vibrant ecosystem of experimentation, customization, and shared learning. ... Integrating MoE architecture into open-source LLMs represents a significant step forward in the evolution of artificial intelligence. By combining the power of specialization with the benefits of open-source collaboration, MoE unlocks new possibilities for creating more efficient, powerful, and accessible AI models that can revolutionize various aspects of our lives.
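For readers unfamiliar with the mechanism, the sketch below shows the core idea of an MoE layer in PyTorch: a small gating network routes each token to one of several expert feed-forward networks, so only a fraction of the parameters is active per token. Dimensions, expert count and the top-1 routing rule are illustrative choices, not those of any specific open-source model.

```python
# A minimal Mixture-of-Experts layer with top-1 gating; sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, d_hidden=128, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)       # router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                                  # x: (tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)           # (tokens, num_experts)
        top_w, top_idx = scores.max(dim=-1)                # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i                            # tokens routed to expert i
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out

moe = TinyMoE()
tokens = torch.randn(10, 64)
print(moe(tokens).shape)   # torch.Size([10, 64])
```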


The DeepSeek Disruption and What It Means for CIOs

The emergence of DeepSeek has also revived a long-standing debate about open-source AI versus proprietary AI. Open-source AI is not a silver bullet. CIOs need to address critical risks, as open-source AI models, if not secured properly, can be exposed to grave cyberthreats and adversarial attacks. While DeepSeek currently shows extraordinary efficiency, it requires an internal infrastructure, unlike GPT-4, which can seamlessly scale on OpenAI's cloud. Open-source AI models lack support and skills, thereby mandating users to build their own expertise, which could be demanding. "What happened with DeepSeek is actually super bullish. I look at this transition as an opportunity rather than a threat," said Steve Cohen, founder of Point72. ... Regulatory non-compliance adds another challenge, as many governments restrict and disallow sensitive enterprise data from being processed by Chinese technologies. The possibility of a backdoor can't be ruled out, and this could open enterprises to additional risks. CIOs need to conduct extensive security audits before deploying DeepSeek. Organizations can implement safeguards such as on-premises deployment to avoid data exposure. Integrating strict encryption protocols can help keep AI interactions confidential, and performing rigorous security audits ensures the model's safety before deploying it into business workflows.


Why GreenOps will succeed where FinOps is failing

The cost-control focus fails to engage architects and engineers in rethinking how systems are designed, built and operated for greater efficiency. This lack of engagement results in inertia and minimal progress. For example, the database team we worked with in an organization new to the cloud launched all the AWS RDS database servers from dev through production, incurring a $600K a month cloud bill nine months before the scheduled production launch. The overburdened team was not thinking about optimizing costs, but rather optimizing their own time and getting out of the way of the migration team as quickly as possible. ... GreenOps — formed by merging FinOps, sustainability and DevOps — addresses the limitations of FinOps while integrating sustainability as a core principle. Green computing contributes to GreenOps by emphasizing energy-efficient design, resource optimization and the use of sustainable technologies and platforms. This foundational focus ensures that every system built under GreenOps principles is not only cost-effective but also minimizes its environmental footprint, aligning technological innovation with ecological responsibility. Moreover, we’ve found that providing emissions feedback to architects and engineers is a bigger motivator than cost to inspire them to design more efficient systems and build automation to shut down underutilized resources.
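Below is a hedged sketch of the kind of shutdown automation described above, using boto3: stop non-production RDS instances whose average CPU has been near idle. The "dev-" naming convention, the 5% threshold and the 24-hour lookback window are assumptions for illustration, not anything prescribed by the article.

```python
# Sketch: stop idle non-production RDS instances. Thresholds and the "dev-"
# naming convention are assumed; adapt tagging and limits to your environment.
import datetime
import boto3

rds = boto3.client("rds")
cloudwatch = boto3.client("cloudwatch")

def avg_cpu(instance_id: str, hours: int = 24) -> float:
    end = datetime.datetime.utcnow()
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": instance_id}],
        StartTime=end - datetime.timedelta(hours=hours),
        EndTime=end,
        Period=3600,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

for db in rds.describe_db_instances()["DBInstances"]:
    name = db["DBInstanceIdentifier"]
    # Assumed convention: anything prefixed "dev-" is safe to stop when idle.
    if name.startswith("dev-") and db["DBInstanceStatus"] == "available":
        if avg_cpu(name) < 5.0:
            print(f"stopping idle instance {name}")
            rds.stop_db_instance(DBInstanceIdentifier=name)
```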


Best Practices for API Rate Limits and Quotas

Unlike short-term rate limits, the goal of quotas is to enforce business terms such as monetizing your APIs and protecting your business from high-cost overruns by customers. They measure customer utilization of your API over longer durations, such as per hour, per day, or per month. Quotas are not designed to prevent a spike from overwhelming your API. Rather, quotas regulate your API’s resources by ensuring a customer stays within their agreed contract terms. ... Even a protection mechanism like rate limiting could have errors. For example, a bad network connection with Redis could cause reading rate limit counters to fail. In such scenarios, it’s important not to artificially reject all requests or lock out users even though your Redis cluster is inaccessible. Your rate-limiting implementation should fail open rather than fail closed, meaning all requests are allowed even though the rate limit implementation is faulting. This also means rate limiting is not a workaround to poor capacity planning, as you should still have sufficient capacity to handle these requests or even design your system to scale accordingly to handle a large influx of new requests. This can be done through auto-scale, timeouts, and automatic trips that enable your API to still function.
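A minimal sketch of the fail-open behaviour described above, assuming a fixed-window counter kept in Redis; the key format, limit and window size are illustrative.

```python
# Fixed-window rate limiter that fails open when Redis is unreachable.
import redis

r = redis.Redis(host="localhost", port=6379, socket_timeout=0.05)

def allow_request(client_id: str, limit: int = 100, window_s: int = 60) -> bool:
    key = f"ratelimit:{client_id}"
    try:
        count = r.incr(key)              # atomic increment of the window counter
        if count == 1:
            r.expire(key, window_s)      # start the window on the first request
        return count <= limit
    except redis.RedisError:
        # Fail open: if the limiter itself is broken, let the request through
        # rather than locking every caller out.
        return True
```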


Protecting Ultra-Sensitive Health Data: The Challenges

Protecting ultra-sensitive information "is an incredibly confusing and complicated and evolving part of the law," said regulatory attorney Kirk Nahra of the law firm WilmerHale. "HIPAA generally does not distinguish between categories of health information," he said. "There are exceptions - including the recent Dobbs rule - but these are not fundamental in their application," he said. Privacy protections related to abortion procedures are perhaps the most hotly debated type of patient information. For instance, last June - in response to the Supreme Court's June 2022 Dobbs ruling, which overturned the national right to abortion - the Biden administration's U.S. Department of Health and Human Services modified the HIPAA Privacy Rule to add additional safeguards for the access, use and disclosure of reproductive health information. The rule is aimed at protecting women from the use or disclosure of their reproductive health information when it is sought to investigate or impose liability on individuals, healthcare providers or others who seek, obtain, provide or facilitate reproductive healthcare that is lawful under the circumstances in which such healthcare is provided. But that rule is being challenged in federal court by 15 state attorneys general seeking to revoke the regulations.


Evolving threat landscape, rethinking cyber defense, and AI: Opportunities and risk

Businesses are firmly in attackers’ crosshairs. Financially motivated cybercriminals conduct ransomware attacks with record-breaking ransoms being paid by companies seeking to avoid business interruption. Others, including nation-state hackers, infiltrate companies to steal intellectual property and trade secrets to gain commercial advantage over competitors. Further, we regularly see critical infrastructure being targeted by nation-state cyberattacks designed to act as sleeper cells that can be activated in times of heightened tension. Companies are on the back foot. ... As zero trust disrupts obsolete firewall and VPN-based security, legacy vendors are deploying firewalls and VPNs as virtual machines in the cloud and calling it zero trust architecture. This is akin to DVD hardware vendors deploying DVD players in a data center and calling it Netflix! It gives a false sense of security to customers. Organizations need to make sure they are really embracing zero trust architecture, which treats everyone as untrusted and ensures users connect to specific applications or services, rather than a corporate network. ... Unfortunately, the business world’s harnessing of AI for cyber defense has been slow compared to the speed of threat actors harnessing it for attacks. 


Six essential tactics data centers can follow to achieve more sustainable operations

By adjusting energy consumption based on real-time demand, data centers can significantly enhance their operational efficiency. For example, during periods of low activity, power can be conserved by reducing energy use, thus minimizing waste without compromising performance. This includes dynamic power management technologies in switch and router systems, such as shutting down unused line cards or ports and controlling fan speeds to optimize energy use based on current needs. Conversely, during peak demand, operations can be scaled up to meet increased requirements, ensuring consistent and reliable service levels. Doing so not only reduces unnecessary energy expenditure, but also contributes to sustainability efforts by lowering the environmental impact associated with energy-intensive operations. ... Heat generated from data center operations can be captured and repurposed to provide heating for nearby facilities and homes, transforming waste into a valuable resource. This approach promotes a circular energy model, where excess heat is redirected instead of discarded, reducing the environmental impact. Integrating data centers into local energy systems enhances sustainability and offers tangible benefits to surrounding areas and communities whilst addressing broader energy efficiency goals.


The Engineer’s Guide to Controlling Configuration Drift

“Preventing configuration drift is the bedrock for scalable, resilient infrastructure,” comments Mayank Bhola, CTO of LambdaTest, a cloud-based testing platform that provides instant infrastructure. “At scale, even small inconsistencies can snowball into major operational inefficiencies. We encountered these challenges [user-facing impact] as our infrastructure scaled to meet growing demands. Tackling this challenge head-on is not just about maintaining order; it’s about ensuring the very foundation of your technology is reliable. And so, by treating infrastructure as code and automating compliance, we at LambdaTest ensure every server, service, and setting aligns with our growth objectives, no matter how fast we scale.” Adopting drift detection and remediation strategies is imperative for maintaining a resilient infrastructure. ... The policies you set at the infrastructure level, such as those for SSH access, add another layer of security to your infrastructure. Ansible allows you to define policies like removing root access, changing the default SSH port, and setting user command permissions. “It’s easy to see who has access and what they can execute,” Kampa remarks. “This ensures resilient infrastructure, keeping things secure and allowing you to track who did what if something goes wrong.”
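In the same spirit, here is a small Python sketch of drift detection for the SSH policies mentioned above: compare the declared configuration against what a host actually runs. The desired values are illustrative policy choices, not LambdaTest's or Ansible's defaults.

```python
# Compare a declared SSH policy against the host's live sshd_config and report drift.
DESIRED_SSHD = {
    "PermitRootLogin": "no",
    "Port": "2222",                    # assumed non-default port policy
    "PasswordAuthentication": "no",
}

def parse_sshd_config(path="/etc/ssh/sshd_config"):
    actual = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition(" ")
            actual[key] = value.strip()
    return actual

def detect_drift(desired, actual):
    return {
        key: (expected, actual.get(key, "<unset>"))
        for key, expected in desired.items()
        if actual.get(key) != expected
    }

if __name__ == "__main__":
    for key, (want, have) in detect_drift(DESIRED_SSHD, parse_sshd_config()).items():
        print(f"DRIFT {key}: declared={want} actual={have}")
```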


Strategies for mitigating bias in AI models

The need to address bias in AI models stems from the fundamental principle of fairness. AI systems should treat all individuals equitably, regardless of their background. However, if the training data reflects existing societal biases, the model will likely reproduce and even exaggerate those biases in its outputs. For instance, if a facial recognition system is primarily trained on images of one demographic, it may exhibit lower accuracy rates for other groups, potentially leading to discriminatory outcomes. Similarly, a natural language processing model trained on predominantly Western text may struggle to understand or accurately represent nuances in other languages and cultures. ... Incorporating contextual data is essential for AI systems to provide relevant and culturally appropriate responses. Beyond basic language representation, models should be trained on datasets that capture the history, geography, and social issues of the populations they serve. For instance, an AI system designed for India should include data on local traditions, historical events, legal frameworks, and social challenges specific to the region. This ensures that AI-generated responses are not only accurate but also culturally sensitive and context-aware. Additionally, incorporating diverse media formats such as text, images, and audio from multiple sources enhances the model’s ability to recognise and adapt to varying communication styles.
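One small, concrete step behind these strategies is measuring accuracy per demographic group so gaps like the facial-recognition example become visible. The sketch below does exactly that on invented data; the group names, labels and sample records are placeholders, not a real evaluation set.

```python
# Compute per-group accuracy to surface bias; data here is invented for illustration.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label)."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {g: correct[g] / total[g] for g in total}

sample = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_b", "match", "no_match"), ("group_b", "match", "match"),
]
for group, acc in accuracy_by_group(sample).items():
    print(f"{group}: {acc:.0%}")   # a large gap between groups flags potential bias
```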

Daily Tech Digest - December 26, 2024

Best Practices for Managing Hybrid Cloud Data Governance

Kausik Chaudhuri, CIO of Lemongrass, explains monitoring in hybrid-cloud environments requires a holistic approach that combines strategies, tools, and expertise. “To start, a unified monitoring platform that integrates data from on-premises and multiple cloud environments is essential for seamless visibility,” he says. End-to-end observability enables teams to understand the interactions between applications, infrastructure, and user experience, making troubleshooting more efficient. ... Integrating legacy systems with modern data governance solutions involves several steps. Modern data governance systems, such as data catalogs, work best when fueled with metadata provided by a range of systems. “However, this metadata is often absent or limited in scope within legacy systems,” says Elsberry. Therefore, an effort needs to be made to create and provide the necessary metadata in legacy systems to incorporate them into data catalogs. Elsberry notes a common blocking issue is the lack of REST API integration. Modern data governance and management solutions typically have an API-first approach, so enabling REST API capabilities in legacy systems can facilitate integration. “Gradually updating legacy systems to support modern data governance requirements is also essential,” he says.
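As a rough illustration of that integration pattern, the sketch below registers hand-built metadata for a legacy table in a data catalog over REST. The endpoint, token handling and payload shape are hypothetical stand-ins, not any particular catalog product's API.

```python
# Publish hand-curated metadata for a legacy system to a catalog's REST API.
# The URL, token and payload fields are hypothetical placeholders.
import requests

CATALOG_URL = "https://catalog.example.com/api/v1/datasets"   # hypothetical endpoint
TOKEN = "example-token"   # placeholder; supply from a secret store in practice

def publish_metadata(dataset: dict) -> None:
    resp = requests.post(
        CATALOG_URL,
        json={
            "name": dataset["table"],
            "source_system": dataset["system"],
            "owner": dataset.get("owner", "unknown"),
            "columns": dataset.get("columns", []),
        },
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

# Metadata hand-built for a legacy table that exposes no API of its own.
publish_metadata({
    "table": "LEGACY.CUSTOMER_MASTER",
    "system": "mainframe-crm",
    "owner": "data-governance",
    "columns": ["CUST_ID", "REGION", "CREATED_AT"],
})
```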


These Founders Are Using AI to Expose and Eliminate Security Risks in Smart Contracts

The vulnerabilities lurking in smart contracts are well-known but often underestimated. “Some of the most common issues include Hidden Mint functions, where attackers inflate token supply, or Hidden Balance Updates, which allow arbitrary adjustments to user balances,” O’Connor says. These aren’t isolated risks—they happen far too frequently across the ecosystem. ... “AI allows us to analyze huge datasets, identify patterns, and catch anomalies that might indicate vulnerabilities,” O’Connor explains. Machine learning models, for instance, can flag issues like reentrancy attacks, unchecked external calls, or manipulation of minting functions—and they do it in real-time. “What sets AI apart is its ability to work with bytecode,” he adds. “Almost all smart contracts are deployed as bytecode, not human-readable code. Without advanced tools, you’re essentially flying blind.” ... As blockchain matures, smart contract security is no longer the sole concern of developers. It’s an industry-wide challenge that impacts everyone, from individual users to large enterprises. DeFi platforms increasingly rely on automated tools to monitor contracts and secure user funds. Centralized exchanges like Binance and Coinbase assess token safety before listing new assets. 


Three best change management practices to take on board in 2025

For change management to truly succeed, companies need to move from being change-resistant to change-ready. This means building up "change muscles" -- helping teams become adaptable and comfortable with change over the long term. For Mel Burke, VP of US operations at Grayce, the key to successful change is speaking to both the "head" and the "heart" of your stakeholders. Involve employees in the change process by giving them a voice and the ability to shape it as it happens. ... Change management works best when you focus on the biggest risks first and reduce the chance of major disruptions. Dedman calls this strategy "change enablement," where change initiatives are evaluated and scored on critical factors like team expertise, system dependencies, and potential customer impact. High-scorers get marked red for immediate attention, while lower-risk ones stay green for routine monitoring to keep the process focused and efficient. ... Peter Wood, CTO of Spectrum Search, swears by creating a "success signals framework" that combines data-driven metrics with culture-focused indicators. "System uptime and user adoption rates are crucial," he notes, "but so are team satisfaction surveys and employee retention 12-18 months post-change." 


Corporate Data Governance: The Cornerstone of Successful Digital Transformation

While traditional data governance focuses on the continuous and tactical management of data assets – ensuring data quality, consistency, and security – corporate data governance elevates this practice by integrating it with the organization’s overall governance framework and strategic objectives. It ensures that data management practices are not operating in silos but are harmoniously aligned and integrated with business goals, regulatory requirements, and ethical standards. In essence, corporate data governance acts as a bridge between data management and corporate strategy, ensuring that every data-related activity contributes to the organization’s mission and objectives. ... In the digital age, data is a critical asset that can drive innovation, efficiency, and competitive advantage. However, without proper governance, data initiatives can become disjointed, risky, and misaligned with corporate goals. Corporate data governance ensures that data management practices are strategically integrated with the organization’s mission, enabling businesses to leverage data confidently and effectively. By focusing on alignment, organizations can make better decisions, respond swiftly to market changes, and build stronger relationships with customers. 


What is an IT consultant? Roles, types, salaries, and how to become one

Because technology is continuously changing, IT consultants can provide clients with the latest information about new technologies as they become available, recommending implementation strategies based on their clients’ needs. As a result, for IT consultants, keeping the pulse of the technology market is essential. “Being a successful IT consultant requires knowing how to walk in the shoes of your IT clients and their business leaders,” says Scott Buchholz, CTO of the government and public services sector practice at consulting firm Deloitte. A consultant’s job is to assess the whole situation, the challenges, and the opportunities at an organization, Buchholz says. As an outsider, the consultant can see things clients can’t. ... “We’re seeing the most in-demand types of consultants being those who specialize in cybersecurity and digital transformation, largely due to increased reliance on remote work and increased risk of cyberattacks,” he says. In addition, consultants with program management skills are valuable for supporting technology projects, assessing technology strategies, and helping organizations compare and make informed decisions about their technology investments, Farnsworth says.


Blockchain + AI: Decentralized Machine Learning Platforms Changing the Game

Tech giants with vast computing resources and proprietary datasets have long dominated traditional AI development. Companies like Google, Amazon, and Microsoft have maintained a virtual monopoly on advanced AI capabilities, creating a significant barrier to entry for smaller players and independent researchers. However, the introduction of blockchain technology and cryptocurrency incentives is rapidly changing this paradigm. Decentralized machine learning platforms leverage blockchain's distributed nature to create vast networks of computing power. These networks function like a global supercomputer, where participants can contribute their unused computing resources in exchange for cryptocurrency tokens. ... The technical architecture of these platforms typically consists of several key components. Smart contracts manage the distribution of computational tasks and token rewards, ensuring transparent and automatic execution of agreements between parties. Distributed storage solutions like IPFS (InterPlanetary File System) handle the massive datasets required for AI training, while blockchain networks maintain an immutable record of transactions and model provenance.


DDoS Attacks Surge as Africa Expands Its Digital Footprint

A larger attack surface, however, is not the only reason for the increased DDoS activity in Africa and the Middle East, Hummel says. "Geopolitical tensions in these regions are also fueling a surge in hacktivist activity as real-world political disputes spill over into the digital world," he says. "Unfortunately, hacktivists often target critical infrastructure like government services, utilities, and banks to cause maximum disruption." And DDoS attacks are by no means the only manifestation of the new threats that organizations in Africa are having to contend with as they broaden their digital footprint. ... Attacks on critical infrastructure and financially motivated attacks by organized crime are other looming concerns. In the center's assessment, Africa's government networks and networks belonging to the military, banking, and telecom sectors are all vulnerable to disruptive cyberattacks. Exacerbating the concern is the relatively high potential for cyber incidents resulting from negligence and accidents. Organized crime gangs — the scourge of organizations in the US, Europe, and other parts of the world, present an emerging threat to organizations in Africa, the Center has assessed. 


Optimizing AI Workflows for Hybrid IT Environments

Hybrid IT offers flexibility by combining the scalability of the cloud with the control of on-premises resources, allowing companies to allocate their resources more precisely. However, this setup also introduces complexity. Managing data flow, ensuring security, and maintaining operational efficiency across such a blended environment can become an overwhelming task if not addressed strategically. To manage AI workflows effectively in this kind of setup, businesses must focus on harmonizing infrastructure and resources. ... Performance optimization is crucial when running AI workloads across hybrid environments. This requires real-time monitoring of both on-premises and cloud systems to identify bottlenecks and inefficiencies. Implementing performance management tools allows for end-to-end visibility of AI workflows, enabling teams to proactively address performance issues before they escalate. ... Scalability also supports agility, which is crucial for businesses that need to grow and iterate on AI models frequently. Cloud-based services, in particular, allow teams to experiment and test AI models without being constrained by on-premises hardware limitations. This flexibility is essential for staying competitive in fields where AI innovation happens rapidly.


The Cloud Back-Flip

Cloud repatriation is driven by various factors, including high cloud bills, hidden costs, complexity, data sovereignty, and the need for greater data control. In markets like India—and globally—these factors are all relevant today, points out Vishal Kamani – Cloud Business Head, Kyndryl India. “Currently, rising cloud costs and complexity are part of the ‘learning curve’ for enterprises transitioning to cloud operations.” ... While cloud repatriation is not an alien concept anymore, such reverse migration back to on-premises data centres is seen happening only in organisations that are technology-driven and have deep tech expertise, observes Gaurang Pandya, Director, Deloitte India. “This involves them focusing back on the basics of IT infrastructure which does need a high number of skilled employees. The major driver for such reverse migration is increasing cloud prices and performance requirements. In an era of edge computing and 5G, each end system has now been equipped with much more computing resources than it ever had. This increases their expectations from various service providers.” Money is a big reason too – especially when you don’t know where it is going.


Why Great Programmers fail at Engineering

Being a good programmer is about mastering the details — syntax, algorithms, and efficiency. But being a great engineer? That’s about seeing the bigger picture: understanding systems, designing for scale, collaborating with teams, and ultimately creating software that not only works but excels in the messy, ever-changing real world. ... Good programmers focus on mastering their tools — languages, libraries, and frameworks — and take pride in crafting solutions that are both functional and beautiful. They are the “builders” who bring ideas to life one line of code at a time. ... Software engineering requires a keen understanding of design principles and system architecture. Great code in a poorly designed system is like building a solid wall in a crumbling house — it doesn’t matter how good it looks if the foundation is flawed. Many programmers struggle to: design systems for scalability and maintainability; think in terms of trade-offs, such as performance vs. development speed; and plan for edge cases and future growth. Software engineering is as much about people as it is about code. Great engineers collaborate with teams, communicate ideas clearly, and balance stakeholder expectations. ... Programming success is often measured by how well the code runs, but engineering success is about how well the system solves a real-world problem.



Quote for the day:

"Ambition is the path to success. Persistence is the vehicle you arrive in." -- Bill Bradley

Daily Tech Digest - October 16, 2024

AI Models in Cybersecurity: From Misuse to Abuse

In a constant game of whack-a-mole, both defenders and attackers are harnessing AI to tip the balance of power in their respective favor. Before we can understand how defenders and attackers leverage AI, we need to acknowledge the three most common types of AI models currently in circulation. ... Generative AI, Supervised Machine Learning, and Unsupervised Machine Learning are three main types of AI models. Generative AI tools such as ChatGPT, Gemini, and Copilot can understand human input and can deliver outputs in a human-like response. Notably, generative AI continuously refines its outputs based on user interactions, setting it apart from traditional AI systems. Unsupervised machine learning models are great at analyzing and identifying patterns in vast unstructured or unlabeled data. Alternatively, supervised machine learning algorithms make predictions from well-labeled, well-tagged, and well-structured datasets. ... Despite the media hype, the usage of AI by cybercriminals is still at a nascent stage. This doesn’t mean that AI is not being exploited for malicious purposes, but it’s also not causing the decline of human civilization like some purport it to be. Cybercriminals use AI for very specific tasks


Meet Aria: The New Open Source Multimodal AI That's Rivaling Big Tech

Rhymes AI has released Aria under the Apache 2.0 license, allowing developers and researchers to adapt and build upon the model. It is also a very powerful addition to an expanding pool of open-source AI models led by Meta and Mistral, which perform similarly to the more popular and adopted closed-source models. Aria's versatility also shines across various tasks. In the research paper, the team explained how they fed the model an entire financial report and it was capable of performing an accurate analysis: it can extract data from reports, calculate profit margins, and provide detailed breakdowns. When tasked with weather data visualization, Aria not only extracted the relevant information but also generated Python code to create graphs, complete with formatting details. The model's video processing capabilities also seem promising. In one evaluation, Aria dissected an hour-long video about Michelangelo's David, identifying 19 distinct scenes with start and end times, titles, and descriptions. This isn't simple keyword matching but a demonstration of context-driven understanding. Coding is another area where Aria excels. It can watch video tutorials, extract code snippets, and even debug them. 


Preparing for IT failures in an unpredictable digital world

By embracing multiple vendors and hybrid cloud environments, organizations would be better prepared so that if one platform goes down, the others can pick up the slack. While this strategy increases ecosystem complexity, it buys down the risk accepted by ensuring you’re prepared to recover and resilient to widespread outages in complex, hybrid, and cloud-based environments. ... It’s clear that IT failures aren’t just a possibility — they are inevitable. Simply waiting for things to go wrong before reacting is a high-risk approach that’s asking for trouble. Instead, organizations must go on the front foot and adopt a strategy that focuses on early detection, continuous monitoring, and risk prevention. This means planning for worst-case scenarios, but also preparing for recovery. After all, one of the planks of IT infrastructure management is business continuity. It’s about optimal performance when things are going well while ensuring that systems recover quickly and continue operating even in the face of major disruptions. This requires a holistic approach to IT management, where failures are anticipated, and recovery plans are in place. 


CIOs must adopt startup agility to compete with tech firms

CIOs often struggle with soft skills, despite knowing what needs to be done. We engage with CEOs and CFOs to foster alignment among the leadership team, as strong support from them is crucial. CIOs also need help gaining buy-in from other CXOs, particularly when it comes to automation initiatives. Our approach emphasises unlocking bandwidth within IT departments. If 90% of their resources are spent on running the business, there’s little time for innovation. We help them automate routine tasks, which allows their best people to focus on transformative efforts. ... CIOs play a crucial role in driving innovation and maintaining cost efficiency while justifying tech investments, especially as organisations become digital-first. A key challenge is controlling cloud costs, which often escalate as IT spending moves outside central control. To counter this, CIOs should streamline access to central services, reduce redundant purchases, and negotiate larger contracts for better discounts. They must also recognise that cloud services are not always cheaper; cost-efficiency depends on application types and usage. 


AI makes edge computing more relevant to CIOs

Many user-facing situations could benefit from edge-based AI. Payton emphasizes facial recognition technology, real-time traffic updates for semi-autonomous vehicles, and data-driven enhancements on connected devices and smartphones as possible areas. “In retail, AI can deliver personalized experiences in real-time through smart devices,” she says. “In healthcare, edge-based AI in wearables can alert medical professionals immediately when it detects anomalies, potentially saving lives.” And a clear win for AI and edge computing is within smart cities, says Bizagi’s Vázquez. There are numerous ways AI models at the edge could help beyond simply controlling traffic lights, he says, such as citizen safety, autonomous transportation, smart grids, and self-healing infrastructures. To his point, experiments with AI are already being carried out in cities such as Bahrain, Glasgow, and Las Vegas to enhance urban planning, ease traffic flow, and aid public safety. Self-administered, intelligent infrastructure is certainly top of mind for Dairyland’s Melby since efforts within the energy industry are underway to use AI to meet emission goals, transition into renewables, and increase the resilience of the grid.


Deepfake detection is a continuous process of keeping up with AI-driven fraud: BioID

BioID is part of the growing ecosystem of firms offering algorithmic defenses to algorithmic attacks. It provides an automated, real-time deepfake detection tool for photos and videos that analyzes individual frames and video sequences, looking for inter-frame or video codec anomalies. Its algorithm is the product of a German research initiative that brought together a number of institutions across sectors to collaborate on deepfake detection strategy. But it is also continuing to refine its neural network to keep up with the relentless pace of AI fraud. “We are in an ongoing fight of AI against AI,” Freiberg says. “We can’t just lean back and relax and sell what we have. We’re continuously working on increasing the accuracy of our algorithms.” That said, Freiberg is not only offering doom and gloom. She points to the Ukrainian Ministry of Foreign Affairs AI ambassador, Victoria Shi, as an example of deepfake technology used with non-fraudulent intention. The silver lining is reflected in the branding of BioID’s “playground” for AI deepfake testing. At playground.bioid.com, users can upload media to have BioID judge whether or not it is genuine.


How Manufacturing Best Practices Shape Software Development

Manufacturers rely on bills of materials (BOMs) to track every component in their products. This transparency enables them to swiftly pinpoint the source of any issues that arise, ensuring they have a comprehensive understanding of their supply chain. In software, this same principle is applied through software bills of materials (SBOMs), which list all the components, dependencies and licenses used in a software application. SBOMs are increasingly becoming critical resources for managing software supply chains, enabling developers and security teams to maintain visibility over what’s being used in their applications. Without an SBOM, organizations risk being unaware of outdated or vulnerable components in their software, making it difficult to address security issues. ... It’s nearly impossible to monitor open source components manually at scale. But with software composition analysis, developers can automate the process of identifying security risks and ensuring compliance. Automation not only accelerates development but also reduces the risk of human error, so teams can manage vast numbers of components and dependencies efficiently.
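To make the SBOM idea tangible, here is a small sketch that scans a CycloneDX-style JSON SBOM for components on an internal list of known-vulnerable versions. The file name and the vulnerable-version list are assumptions; a real pipeline would pull advisories from a vulnerability feed and typically use a software composition analysis tool.

```python
# Scan a CycloneDX-style SBOM (JSON) for components on a known-vulnerable list.
import json

KNOWN_VULNERABLE = {                    # illustrative, not a real advisory feed
    ("log4j-core", "2.14.1"),
    ("openssl", "1.1.1k"),
}

def scan_sbom(path: str):
    with open(path) as f:
        sbom = json.load(f)
    findings = []
    for component in sbom.get("components", []):
        pair = (component.get("name"), component.get("version"))
        if pair in KNOWN_VULNERABLE:
            findings.append(pair)
    return findings

for name, version in scan_sbom("sbom.cyclonedx.json"):   # assumed file name
    print(f"vulnerable component in SBOM: {name} {version}")
```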


Striking The Right Balance Between AI & Innovation & Evolving Regulation

The bottom line is that integrating AI comes with complex challenges to how an organisation approaches data privacy. A significant part of this challenge relates to purpose limitation – specifically, the disclosure provided to consumers regarding the purpose(s) for data processing and the consent obtained. To tackle this hurdle, it’s vital that organisations maintain a high level of transparency that discloses to users and consumers how the use of their data is evolving as AI is integrated. ... Just as the technology landscape has evolved, so have consumer expectations. Today, consumers are more conscious of and concerned with how their data is used. Adding to this, nearly two-thirds of consumers worry about AI systems lacking human oversight, and 93% believe irresponsible AI practices damage company reputations. As such, it’s vital that organisations are continuously working to maintain consumer trust as part of their AI strategy. With this said, there are many consumers who are willing to share their data as long as they receive a better personalised customer experience, showcasing that this is a nuanced landscape that requires attention and balance.


WasmGC and the future of front-end Java development

The approach being offered by the WasmGC extension is newer. The extension provides a generic garbage collection layer that your software can refer to; a kind of garbage collection layer built into WebAssembly. Wasm by itself doesn’t track references to variables and data structures, so the addition of garbage collection also implies introducing new “typed references” into the specification. This effort is happening gradually: recent implementations support garbage collection on “linear” reference types like integers, but complex types like objects and structs have also been added. ... The performance potential of languages like Java over JavaScript is a key motivation for WasmGC, but obviously there’s also the enormous range of available functionality and styles among garbage-collected platforms. The possibility for moving custom code into Wasm, and thereby making it universally deployable, including to the browser, is there. More broadly, one can’t help but wonder about the possibility of opening up the browser to other languages beyond JavaScript, which could spark a real sea-change to the software industry. It’s possible that loosening JavaScript’s monopoly on the browser will instigate a renaissance of creativity in programming languages.


Mind Your Language Models: An Approach to Architecting Intelligent Systems

The reason why we wanted a smaller model that's adapted to a certain task is, it's easier to operate, and when you're running LLMs, it's going to be much more economical, because you can't run massive models all the time; it's very expensive and takes a lot of GPUs. Currently, we're struggling with getting GPUs in AWS. We searched all of EU Frankfurt, Ireland, and North Virginia. It's seriously a challenge now to get big GPUs to host your LLMs. The second part of the problem is, we started getting data. It's high quality. We started improving the knowledge graph. The one thing that is interesting when you think about semantic search is that when people interact with your system, even if they're working on the same problem, they don't end up using the same language. Which means that you need to be able to translate or understand the range of language that your users actually use to interact with your system. ... We converted these facts with all of their synonyms, with all of the different ways one could potentially ask for this piece of data, and put everything into the knowledge graph itself. You could use LLMs to generate training data for your smaller models. 
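A toy sketch of the synonym-expansion idea described in the talk: each fact is indexed under every phrasing a user might employ, so lookups succeed regardless of wording. The facts and phrasings are invented, and a production system would likely layer embedding-based matching on top rather than relying on exact strings.

```python
# Index each fact under all of its synonymous phrasings; examples are invented.
FACTS = {
    "quarterly_revenue": {
        "value": "4.2M EUR",
        "synonyms": ["quarterly revenue", "revenue last quarter", "Q revenue",
                     "how much did we earn this quarter"],
    },
}

# Build a phrase -> fact index so any of the synonymous phrasings resolves.
PHRASE_INDEX = {
    phrase.lower(): key
    for key, fact in FACTS.items()
    for phrase in fact["synonyms"]
}

def lookup(user_query: str):
    key = PHRASE_INDEX.get(user_query.lower().strip())
    return FACTS[key]["value"] if key else None

print(lookup("Revenue last quarter"))   # -> 4.2M EUR
```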



Quote for the day:

"You may only succeed if you desire succeeding; you may only fail if you do not mind failing." -- Philippos

Daily Tech Digest - October 04, 2024

Over 80% of phishing sites now target mobile devices

M-ishing was highlighted as the top security challenge plaguing the mobile space, both in the public sector (10%) and the private sector, and more importantly, 76% of phishing sites now use HTTPS, giving users a false sense of security. “Phishing using HTTPS is not completely new,” said Krishna Vishnubhotla, vice president for product strategy at Zimperium. “Last year’s report revealed that, between 2021 and 2022, the percentage of phishing sites targeting mobile devices increased from 75% to 80%. Some of them were already using HTTPS but the focus was converting campaigns to target mobile.” “This year, we are seeing a meteoric rise in this tactic for mobile devices, which is a sign of maturing tactics on mobile, and it makes sense. The mobile form factor is conducive to deceiving the user because we rarely see the URL in the browser or the quick redirects. Moreover, we are conditioned to believe a link is secure if it has a padlock icon next to the URL in our browsers. Especially on mobile, users should look beyond the lock icon and carefully verify the website’s domain name before entering any sensitive information,” Vishnubhotla said.


How GPT-4o defends your identity against AI-generated deepfakes

OpenAI’s latest model, GPT-4o, is designed to identify and stop these growing threats. It is described on its system card, published on Aug. 8, as an “autoregressive omni model, which accepts as input any combination of text, audio, image and video.” OpenAI writes, “We only allow the model to use certain pre-selected voices and use an output classifier to detect if the model deviates from that.” Identifying potential deepfake multimodal content is one of the benefits of OpenAI’s design decisions that together define GPT-4o. Noteworthy is the amount of red teaming that’s been done on the model, which is among the most extensive of recent-generation AI model releases industry-wide. All models need to constantly be training on and learning from attack data to keep their edge, and that’s especially the case when it comes to keeping up with attackers’ deepfake tradecraft that is becoming indistinguishable from legitimate content. ... GANs most often consist of two neural networks. The first is a generator that produces synthetic data (images, videos or audio); the second is a discriminator that evaluates its realism. The generator’s goal is to improve the content’s quality to deceive the discriminator. This advanced technique creates deepfakes nearly indistinguishable from real content.
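For readers who want to see the generator/discriminator dynamic in code, below is a compact PyTorch sketch on toy one-dimensional data. Network sizes, learning rates and the synthetic "real" distribution are arbitrary illustrations of the training loop, not anything used to build actual deepfakes.

```python
# Toy GAN training loop: the generator learns to fool the discriminator.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(200):
    real = torch.randn(64, data_dim) + 3.0          # stand-in "real" samples
    fake = G(torch.randn(64, latent_dim))

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator call its output real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```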


The 4 Evolutions of Your Observability Journey

Known unknowns describe the second stage well: we're looking at things we know we don't know and trying to see how well we can develop our understanding of them, whereas with unknown unknowns we wouldn't even know where to start. If the first stage is where most of your observability tooling lies, then this is the era of service-level objectives (SLOs); this is also the stage where observability starts being phrased in a “yes, and” manner. … Having developed the ability to ask questions about what happened in a system in the past, you're probably now primarily concerned with statistical questions and with developing more comprehensive correlations. ... Additionally, one of the most interesting developments here is when your incident reports change: they stop being concerned with what happened and start being concerned with how unusual or surprising it was. You're seeing this stage of the observability journey first-hand if you've ever read a retrospective that said something like, “We were surprised by the behavior, so we dug in. Even though our alerts were telling us that this other thing was the problem, we investigated the surprising thing first.”
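
One concrete way SLO thinking shows up at this stage is error-budget arithmetic. The sketch below assumes a 99.9% availability target and a 30-day window purely for illustration; neither number comes from the article.

```python
# Error-budget math: an SLO target implies a fixed allowance of "bad" minutes
# per window, and burn rate tells you how fast you are spending it.
def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Total minutes of allowed unavailability in the window."""
    return (1 - slo_target) * window_days * 24 * 60

def burn_rate(bad_minutes: float, elapsed_days: float,
              slo_target: float, window_days: int = 30) -> float:
    """>1.0 means you are spending budget faster than the window allows."""
    allowed_so_far = error_budget_minutes(slo_target, window_days) * (elapsed_days / window_days)
    return bad_minutes / allowed_so_far

budget = error_budget_minutes(0.999)   # ~43.2 minutes per 30 days
print(f"budget: {budget:.1f} min, burn rate: {burn_rate(30, 10, 0.999):.2f}")
```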


Be the change you want to see: How to show initiative in the workplace

At one point or another, all of us are probably guilty of posing a question without offering a solution. Often we may feel that others are more qualified to address an issue than we are, and that as long as we bring the matter to someone’s attention, that’s as far as we need to go. While this is well and good – and certainly not every scenario can be dealt with single-handedly – it can be good practice to brainstorm ideas for the problems you identify. It’s important to loop people in and utilise the expertise of others, but you should also have confidence in your ability to tackle an issue. Identifying the problem is half the battle, so why not keep going and see what you come up with? ... Some are born with confidence to spare and some are not; luckily, it is a skill that can be learned over time. Working on improving your confidence level, being more vocal, and presenting yourself as an expert in your field are crucial to improving your ability to show initiative, as they make you far more likely to take the reins and lead the way. Taking the initiative or going out on a limb can, in many scenarios, be nerve-wracking, and you may doubt that you are the best person for the job.


What is RPA? A revolution in business process automation

RPA is often touted as a mechanism to bolster ROI or reduce costs, but it can also be used to improve customer experience. For example, enterprises such as airlines employ thousands of customer service agents, yet customers are still waiting in queues to have their calls fielded. A chatbot could help alleviate some of that wait. ... COOs were some of the earliest adopters of RPA. In many cases, they bought RPA and hit a wall during implementation, prompting them to ask for IT’s help (and forgiveness). Now citizen developers without technical expertise are using cloud software to implement RPA in their business units, and often the CIO has to step in and block them. Business leaders must involve IT from the outset to ensure they get the resources they require. ... Many implementations fail because design and change are poorly managed, says Sanjay Srivastava, chief digital officer of Genpact. In the rush to get something deployed, some companies overlook communication exchanges between the various bots, which can break a business process. “Before you implement, you must think about the operating model design,” Srivastava says. “You need to map out how you expect the various bots to work together.” 


Best practices for implementing threat exposure management, reducing cyber risk exposure

Threat exposure management is the evolution of traditional vulnerability management, and several trends are making it a priority for modern security teams. One is an increase in findings that overwhelms resource-constrained teams: as the attack surface expands to cloud and applications, the volume of findings is compounded by more fragmentation. Cloud, on-prem, and AppSec vulnerabilities come from different tools; identity misconfigurations come from others still. This leads to enormous manual work to centralize, deduplicate, and prioritize findings using a common risk methodology. Finally, all of this is happening while attackers are moving faster than ever, with recent reports showing the median time to exploit a vulnerability is less than one day! Threat exposure management is essential because it continuously identifies and prioritizes risks—such as vulnerabilities and misconfigurations—across all assets, using the risk context applicable to your organization. By integrating with existing security tools, TEM offers a comprehensive view of potential threats, empowering teams to take proactive, automated actions to mitigate risks before they can be exploited.
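
As a rough sketch of the centralize, deduplicate, and prioritize step described above, the following Python example merges findings from several hypothetical tools and ranks them with a simple risk formula. The field names and weights are illustrative assumptions, not any particular TEM product's methodology.

```python
# Merge findings from multiple scanners, keep one record per (asset, issue),
# and rank by a severity score adjusted for threat and business context.
from dataclasses import dataclass

@dataclass
class Finding:
    source: str               # e.g. cloud scanner, AppSec tool, identity tool
    asset: str
    issue: str                # e.g. CVE id or misconfiguration name
    severity: float           # 0-10, as reported by the source tool
    exploited_in_wild: bool
    asset_criticality: float  # 0-1, business context for your organization

def deduplicate(findings: list[Finding]) -> list[Finding]:
    """Keep one finding per (asset, issue), preferring the highest severity."""
    best: dict[tuple[str, str], Finding] = {}
    for f in findings:
        key = (f.asset, f.issue)
        if key not in best or f.severity > best[key].severity:
            best[key] = f
    return list(best.values())

def risk_score(f: Finding) -> float:
    """Common risk methodology: severity weighted by threat and business context."""
    return f.severity * (2.0 if f.exploited_in_wild else 1.0) * (0.5 + f.asset_criticality)

findings = [
    Finding("cloud", "payments-db", "CVE-2024-0001", 7.5, True, 0.9),
    Finding("appsec", "payments-db", "CVE-2024-0001", 6.8, True, 0.9),
    Finding("identity", "test-vm", "over-privileged-role", 5.0, False, 0.2),
]
for f in sorted(deduplicate(findings), key=risk_score, reverse=True):
    print(f"{risk_score(f):5.1f}  {f.asset}  {f.issue}  (from {f.source})")
```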


Understanding VBS Enclaves, Windows’ new security technology

Microsoft recently extended its virtualization-based security model to what it calls VBS Enclaves. If you’ve looked at implementing confidential computing on Windows Server or in Azure, you’ll be familiar with the concept of enclaves, which use Intel’s SGX instruction set to lock down areas of memory and treat them as a trusted execution environment. ... So how do you build and use VBS Enclaves? First, you’ll need Windows 11 or Windows Server 2019 or later, with VBS enabled. You can do this from the Windows security tool, via a Group Policy, or with Intune to control it via MDM. It’s part of the Memory Integrity service, so you should really enable it on all supported devices to help reduce security risks, even if you don’t plan to use VBS Enclaves in your code. The best way to think of a VBS Enclave is as a way of using encrypted storage securely. For example, if you’re using a database to store sensitive data, you can use code running in an enclave to process and query that data, passing results to the rest of your application. You’re encapsulating data in a secure environment with only essential access allowed. No other parts of your system have access to the decryption keys, so on-disk data stays secure.


Smart(er) Subsea Cables to Provide Early Warning System

With the U.N. estimating between 150 and 200 cable faults annually, operators need all the help they can get to maintain the global fiber network, which carries about 99% of internet traffic between continents and some $10 trillion in financial transactions per day. These growing risks have businesses desperately seeking network resiliency and clamoring for always-on network services as their data centers and apps demand maximum uptime. The system has been beset this year by large cable outages, starting in February in the Red Sea and continuing in the spring along Western Africa, among others. ... Equipping the cable with sensors would enhance research into one of the most under-explored regions of the planet: the vast depths of the Southern Ocean, the study read. The Southern Ocean that surrounds Antarctica strongly influences other oceans and climates worldwide, according to the NSF. “Equipping the subsea telecommunications cable with sensors would help researchers better understand how deep-sea currents contribute to global climate change and improve understanding of earthquake seismology and related early warning signs for tsunamis in the earthquake-prone South Pacific region.”


Security Needs to Be Simple and Secure By Default: Google

"Google engineers are working to secure AI and to bring AI to security practitioners," said Steph Hay, senior director of Gemini + UX for cloud security at Google. "Gen AI represents the inflection point of security. It is going to transform security workflows and give the defender the advantage." ... Google also advocates for the convergence of security products and embedding AI into the entire security ecosystem. Through Mandiant, VirusTotal and the Google Cloud Platform, Google aims to drive this convergence, along with safe browsing. Google is making this convergence possible by taking a platform-centric approach through its Security Command Center, or SCC. Hemrajani shared that SCC aims to unify security categories such as cloud security posture management, Kubernetes security posture management, entitlement management and threat intelligence. Security information and event management and security orchestration, automation and response also need to converge. "SCC is bringing all of these together to be able to model the risk that you are exposed to in a holistic manner," he said. "We also realize that there is a power of convergence between cloud risk management and security operations. We need to converge them even further and bring them together to truly benefit."


The AI Revolution: How Machine Learning Changed the World in Two Years

The future of AI in business will involve continued collaboration between governments, businesses, and individuals to address challenges and maximize the opportunities presented by this transformative technology. AI is likely to become increasingly integrated into software and hardware, making it easier for businesses to adopt and utilize its capabilities. Success will depend on how it is leveraged to augment human capabilities rather than replace them, creating a future where humans and AI work together in a complementary way. Beyond automating individual tasks, AI is driving a paradigm shift towards unprecedented efficiency across entire business operations. By automating repetitive tasks, AI allows employees to focus on more strategic and creative work, leading to increased productivity and innovation. A recent McKinsey study found AI could potentially automate 45% of the activities currently performed by workers. As well as automating processes, AI can streamline operations and minimize errors, leading to significant cost savings for businesses. For example, automating customer service with AI can reduce the need for human agents, lowering labor costs.



Quote for the day:

"Intelligence is the ability to change your mind when presented with accurate information that contradicts your beliefs" -- Vala Afshar