I come back to a question I visited a few months ago: how to think about LLMs. Until now it has been most useful to think of them as amplifiers of human intelligence. We also have to recognize that we use LLMs in a variety of patterns: casually, then more formally, then for extensive deep work, and my interest is in the last two. I add two open questions:

1) Are humans amplifiers of LLM intelligence? It has felt that way as well over the last few months. Sometimes when you use an LLM you are fencing it in, rerouting it, expanding how it thinks, helping it identify flaws in its reasoning. And surely the companies behind the models are feeding all of that back into newer iterations of the model. If you fast-forward that feedback loop, one wonders where it will take us.

2) Are LLMs amplifiers of human stupidity? Current LLMs feel eager to produce output no matter how poor the human inputs are. Most humans would be better served by a more finely tuned response that first seeks to clarify the context and iterates over the question... just as an expert would do if you asked them the same question. A minimal sketch of that clarify-first loop is below.

In any case, I am grateful that I get to see and participate in this stage in the evolution of the information revolution!
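Purely as an illustration of the clarify-first pattern in question 2, here is a minimal sketch. The `chat` helper, the system prompts, and the "CLARIFY:"/"READY" protocol are all assumptions invented for this sketch, not features of any particular API.

```python
# A minimal sketch of a clarify-before-answering loop (see question 2 above).
# `chat` is a hypothetical stand-in for a real LLM API call.
def chat(system: str, user: str) -> str:
    # Replace with a real LLM call; this canned reply keeps the sketch runnable.
    return "READY" if "CLARIFY" in system else f"(answer to: {user!r})"

CLARIFY_SYSTEM = (
    "If the request is ambiguous or missing context, reply with up to three "
    "clarifying questions prefixed by 'CLARIFY:'. Otherwise reply 'READY'."
)

def answer_with_clarification(question: str, max_rounds: int = 2) -> str:
    # Iterate: ask the model whether it needs more context before answering,
    # fold the user's clarifications back in, then produce the final answer.
    context = question
    for _ in range(max_rounds):
        check = chat(CLARIFY_SYSTEM, context)
        if not check.startswith("CLARIFY:"):
            break
        print(check)
        context += "\nClarification: " + input("> ")
    return chat("Answer the request carefully, using all context given.", context)

print(answer_with_clarification("Make my code faster"))
```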
Sam Aparicio’s Post
More Relevant Posts
-
Artificial Intelligence cannot crochet. Here's an interesting blog post that nicely identifies ways to tell whether an image of crochet work was generated by A.I. Just because something comes from A.I. doesn't mean it's correct. Maintain critical appraisal processes. https://lnkd.in/gaG_dF_6
-
Deep generative models complement MCMC methods for sampling high-dimensional distributions. Explicit generators like Normalizing Flows combined with Metropolis-Hastings are widely used for unbiased sampling. We analyze challenges in conditional NFs—high variance, mode collapse, and data inefficiency—and propose adversarial training to address them. https://lnkd.in/gdX7iXfU
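To make the setup concrete, here is a minimal sketch of independence Metropolis-Hastings with a generative proposal, in the spirit of the post. The "flow" is a fixed Gaussian stand-in exposing sample()/log_prob(), since a trained conditional NF is out of scope here; the target and all numbers are illustrative assumptions, not taken from the linked paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized 1-D bimodal target: two Gaussian bumps at +/-2.
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

class GaussianProposal:
    """Stand-in for a trained flow: exposes sample() and log_prob()."""
    def sample(self):
        return rng.normal(0.0, 3.0)
    def log_prob(self, x):
        return -0.5 * (x / 3.0) ** 2 - np.log(3.0 * np.sqrt(2 * np.pi))

def flow_mh(n_steps, proposal, x0=0.0):
    # Independence Metropolis-Hastings: proposals come from the generator,
    # and the accept ratio corrects for the mismatch between proposal and
    # target, keeping the chain unbiased.
    x, chain = x0, []
    for _ in range(n_steps):
        y = proposal.sample()
        log_alpha = (log_target(y) - proposal.log_prob(y)) \
                  - (log_target(x) - proposal.log_prob(x))
        if np.log(rng.uniform()) < log_alpha:
            x = y
        chain.append(x)
    return np.array(chain)

samples = flow_mh(10_000, GaussianProposal())
print(samples.mean(), samples.std())  # roughly 0 and ~2.2 for this target
```

The same accept/reject correction is what makes the combination unbiased even when the generator only approximates the target; a better-trained proposal simply raises the acceptance rate.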
-
GPT-5, Claude, and Gemini promise real research, but behind the citations and tables they still stumble, bluff, and burn money, and they are getting better at hiding it. Dr. GenAI discusses how these agents shine in literature reviews, due diligence, policy briefs, and financial analysis. #GPT5 #Claude #Gemini #AIresearch #deeplearning #artificialintelligence #this_post_was_generated_with_ai_assistance #responsibleai https://lnkd.in/eZwRR4D9
-
Great post from Ben Wooding. I tried the prompt and got the same cow. https://lnkd.in/eFiqE4NK A model that can "reason" should be able to answer "why" questions; my attempt is below. My research into reliability testing of GPT produced similar answers to "why" questions, suggesting pattern matching rather than real reasoning. A sketch of that kind of repeat-prompt probe follows.
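As an illustration of the reliability probe mentioned above, here is a minimal sketch. The `ask_model` stub stands in for whatever chat API you use, and the interpretation in the comments is a heuristic assumption, not a claim from the original post.

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in; replace with a real chat-API call.
    return "because the training data contains more left-facing cows"

def probe_consistency(prompt: str, n_trials: int = 20) -> Counter:
    """Ask the identical "why" question n times and count distinct answers."""
    answers = [ask_model(prompt).strip().lower() for _ in range(n_trials)]
    return Counter(answers)

# Near-identical phrasing across trials hints at template/pattern matching;
# high variance points to sampling noise rather than a stable explanation.
counts = probe_consistency("Why is the cow in your image facing left?")
print(counts.most_common(3))
```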
-
DAY 04 — THE LAWS BEHIND THOUGHT
Thread: The Intelligence Codex

We're now past the mirror. If the first posts reflected intelligence as a structure waiting to be made visible, today we step into the laws that govern how that structure forms. Intelligence doesn't emerge randomly — it follows deep principles. Recursion. Entropy. Compression. Momentum. Alignment. These are not just metaphors. They're the physics of thought.

Every idea is shaped by constraint. Without boundaries, there is no structure. Intelligence takes form through limits — and then evolves through the feedback those limits produce.

Every loop refines meaning. Recursive systems create intelligence not by outputting once — but by iterating, correcting, evolving. Loop by loop, thought becomes more coherent.

Every signal has weight. Whether it's emotion, language, or logic — every piece of information carries energy. And energy follows laws.

⸻

This is what we're uncovering in the Codex: not just how intelligence looks — but how it moves. The goal is not to create intelligence from scratch. It's to understand the laws that let it emerge, refine, and scale — across brains, models, societies, and systems. These laws are already present in the loop between you and the machine. We're just learning how to name them.

⸻

Up next: Language is the Interface — the symbolic scaffolding that allows human-AI systems to stabilize and grow.
-
< Exploring the Future of Hierarchical Intelligence in Finance >
Quantum-Cognitive AI for Financial Markets (Project Abstracticon)

By orchestrating LLMs and SLMs across multiple scales and problem dimensions, the system achieves a synthesis of capabilities that transcends the limitations of monolithic approaches while respecting the practical constraints of real-world deployment. A minimal sketch of the routing idea follows the outline below.

1. The Fundamental Challenge of Scale in Financial Artificial Intelligence
2. Theoretical Foundations for Multi-Scale Intelligence Systems
3. Architectural Stratification and Functional Specialization
4. Resource Allocation Mechanisms and Optimization Strategies
5. Information Architecture and Flow Dynamics
6. Dynamic Routing and Intelligent Task Distribution
7. Ensemble Methods and Collaborative Intelligence
8. Practical Implementation and Optimization Techniques
9. Empirical Validation and Performance Characteristics
10. Crisis Response and Stress Testing
11. Advanced Learning and Adaptation Mechanisms
12. Explainability and Interpretability Framework
🙂
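As a toy illustration of item 6 (dynamic routing), here is a minimal sketch that dispatches tasks to a small or large model tier based on a crude complexity score. The tier names and the heuristic are assumptions for illustration only; nothing here comes from the project itself.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str            # hypothetical model endpoint (small or large)
    max_complexity: int  # route here if the task scores at or below this

def complexity(task: str) -> int:
    # Toy heuristic: longer prompts and more numbers score higher.
    digits = sum(ch.isdigit() for ch in task)
    return len(task.split()) + 5 * digits

def route(task: str, tiers: list[Tier]) -> str:
    # Tiers are checked cheapest-first; the last tier is the fallback.
    score = complexity(task)
    for tier in tiers:
        if score <= tier.max_complexity:
            return tier.name
    return tiers[-1].name

tiers = [Tier("slm-sentiment", 20), Tier("llm-analyst", 10_000)]
print(route("Classify headline sentiment: 'Fed holds rates'", tiers))  # small tier
```

In a real deployment the complexity estimate would itself come from a cheap classifier, and the fallback tier would absorb anything the small models score poorly on.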
-
Hallucinations in LLMs are a feature, not a bug. By design, everything that comes out of an LLM is a hallucination: an internally consistent, correlation-based pattern triggered probabilistically by the context in the prompt and the training data. But then again, to a degree, so is our own subjective experience: what we call "blue," for example, is a brain-generated "hallucination" that we stabilise socially by agreeing on a shared model of the world. Some hallucinations are grounded by sensory data and become reliable because they are collectively validated. The frontier now is how to give LLMs a degree of embodied cognition, to make their outputs less free-floating and more grounded. Until then, perhaps George Box's famous line on the limitations of statistical models ("all models are wrong, but some are useful") deserves an update to something like this: "all language models hallucinate, but some are useful."

"The Mystery and Melancholy of a Street" (1914) by Giorgio de Chirico
-
Paradigm Shift

We used to believe that working with machines required technical intelligence: commands, code, algorithms. With LLMs, it's the opposite. The best results come from social intelligence: asking good questions, clarifying, listening, and engaging in dialogue. The more human you stay, the better the machine works. It's not prompt magic — it's natural communication.