Eduardo Ordax’s Post

🤖 Generative AI Lead @ AWS ☁️ (150k+) | Startup Advisor | Public Speaker | AI Outsider | Founder Thinkfluencer AI

🤯 𝗟𝗟𝗠𝘀 𝗛𝗮𝗹𝗹𝘂𝗰𝗶𝗻𝗮𝘁𝗶𝗼𝗻𝘀: 𝗙𝗲𝗮𝘁𝘂𝗿𝗲 𝗼𝗿 𝗕𝘂𝗴?

Hallucinations in Large Language Models (LLMs) occur when the model generates plausible-sounding but inaccurate or nonsensical responses. This is a major reason why many customers hesitate to push their prototypes into production, especially for end-customer-facing applications.

𝑾𝒉𝒚 𝑫𝒐 𝑯𝒂𝒍𝒍𝒖𝒄𝒊𝒏𝒂𝒕𝒊𝒐𝒏𝒔 𝑯𝒂𝒑𝒑𝒆𝒏?

🔸𝟭. 𝗟𝗶𝗺𝗶𝘁𝗲𝗱 𝗖𝗼𝗻𝘁𝗲𝘅𝘁𝘂𝗮𝗹 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴: LLMs compress prompts and training data into abstractions, which can lose information.
🔸𝟮. 𝗡𝗼𝗶𝘀𝗲 𝗼𝗿 𝗕𝗶𝗮𝘀𝗲𝘀 𝗶𝗻 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗗𝗮𝘁𝗮: Skewed statistical patterns can produce unexpected responses.
🔸𝟯. 𝗢𝘃𝗲𝗿𝗳𝗶𝘁𝘁𝗶𝗻𝗴: High model complexity, combined with incomplete or biased training data, often leads to hallucinations.

Some researchers argue that hallucinations are inevitable given LLMs' inherent limitations. Others view them as a feature that fosters creativity. From my perspective, both views are valid. Hallucinations can cause catastrophic errors, like a Chevy dealership's chatbot recommending a Ford, or Google suggesting you eat rocks. Yet, much as in real life, these hallucinations can also spark creativity and let LLMs generate genuinely novel ideas. Balancing these two aspects is key to leveraging the full potential of LLMs while mitigating the risks. #AI #MachineLearning #LLM #Innovation #TechTrends
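To make that trade-off concrete, here is a minimal sketch (plain NumPy, no particular model assumed, toy logits invented for illustration) of how sampling temperature tilts generation between reliable and creative: low temperature concentrates probability on the top token, while high temperature flattens the distribution and invites more surprising, and more error-prone, output.

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float,
                 rng: np.random.Generator) -> int:
    """Sample one token id from raw logits at a given temperature.

    Low temperature -> near-greedy, safer, more repetitive output.
    High temperature -> flatter distribution: more creative, but more
    prone to plausible-sounding nonsense (hallucinations).
    """
    scaled = logits / max(temperature, 1e-8)   # temperature scaling
    probs = np.exp(scaled - scaled.max())      # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng(0)
logits = np.array([3.0, 1.5, 0.5, 0.1])       # toy next-token scores
for t in (0.2, 1.0, 2.0):
    draws = [sample_token(logits, t, rng) for _ in range(1000)]
    # Empirical token distribution: watch probability mass spread
    # away from the top token as temperature rises.
    print(t, np.bincount(draws, minlength=4) / 1000)
```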


LLMs hallucinate because of limited contextual understanding, noise or biases in training data, and overfitting, all of which lead to plausible-sounding but inaccurate or nonsensical responses. The risk of catastrophic errors, such as providing completely wrong information, makes teams hesitant to deploy LLMs in customer-facing applications. At the same time, hallucinations can foster creativity by producing novel ideas, so the goal is to balance that potential against the risks.

Walter G.

Gen AI for Tech Teams | VC Advisory | AI Engineering

1y

If you know what you are doing, *maybe* a feature. If you don't, always drama.

Ravi Ranjan

Building Generative AI-based Systems

1y

An LLM is like a friend who is always high.

Brennan M. Woodruff

Accelerating HardTech Innovation through Strategic Partnerships with Industry

1y

A bug everywhere except where creativity is needed. And even in those instances it's a bug, because it shows a fundamental lack of understanding of what it is saying, which isn't all that intelligent (though some humans would follow suit).

Ágnostos Apórrētos

Undisclosed Ágnostos Apórrētos

1y

"Unexpected responses" is a nice way of saying incorrect responses. Let's not use soft language here; this is science, not your kindergarten.

Carlos Escapa

Ex-Meta, Amazon Data & AI expert with a strong record in building industrial alliances. Committed to bridging the digital divide, I guest lecture to advocate for Open Science and broaden knowledge accessibility.

1y

"Spurious correlations" is a better description of this phenomenon. Whether they are creative or hallucinatory is a human opinion.

Giovanni Sisinna

🔹Portfolio-Program-Project Management, Technological Innovation, Management Consulting, Generative AI, Artificial Intelligence🔹AI Advisor | Director Program Management | Partner @YOURgroup

1y

Thank you for sharing, Eduardo! The explanation of LLM hallucinations is insightful. Addressing these issues by improving contextual understanding and data quality is crucial. Balancing creativity and accuracy will be essential for future advancements in this field.

Victory Adugbo

Cross-Border & Stablecoin Payments Specialist || Growth Expert || DeFi GTM Strategist || Close to a Decade of Experience Driving Global Impact

1y

To mitigate LLM hallucinations, consider enhancing contextual understanding, reducing noise and biases in training data, and preventing overfitting. Additionally, integrating knowledge graphs and employing visual contrastive decoding are promising strategies.
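As a rough illustration of the grounding idea in the comment above, here is a minimal sketch in Python: a crude lexical-overlap check that flags answers whose words aren't supported by retrieved context. The threshold, tokenization, and example strings are all assumptions for illustration; production systems typically use NLI models or LLM-based judges instead.

```python
def is_grounded(answer: str, context: str, min_overlap: float = 0.5) -> bool:
    """Crude faithfulness check: the share of answer tokens that also
    appear in the retrieved context. Purely illustrative; the 0.5
    threshold is arbitrary and easily fooled by paraphrase."""
    answer_tokens = {w.lower().strip(".,!?") for w in answer.split()}
    context_tokens = {w.lower().strip(".,!?") for w in context.split()}
    if not answer_tokens:
        return False
    overlap = len(answer_tokens & context_tokens) / len(answer_tokens)
    return overlap >= min_overlap

# Hypothetical retrieved context and model answers:
context = "The Chevrolet Tahoe seats up to eight passengers."
print(is_grounded("The Tahoe seats up to eight passengers.", context))   # True
print(is_grounded("The Tahoe flies and seats twenty people.", context))  # False -> flag
```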

Ben Rodrigue

Founder and CEO of SoftStackersAI, a truly client-centric cloud support provider.

1y

Oh man. That's funny. Well done.

Albert Rojas

Client Technical Specialist – QBE | Co-Founder | ex-Oracle, IBM, Google | Database & AI/ML Architect | Healthcare & GRC

1y

Eduardo Ordax just tell the LLM researchers to embed the impressions with search embeddings 😎. It's akin to channel surfing between #bbc, #cnn, #fox… it's how DocNote.ai unlocks medical jargon where hallucinations are 🚫, as you know. Cheers!
