AI use and odd interactions in the real world

I use AI heavily, both for fun and for building various work toolkits. It is hit and miss, but if you work with it, you can usually make something in less time than if you just coded it straight-up yourself. The catch is that you end up troubleshooting and testing something like 99% of the time instead of 50%, which makes for a very odd experience: you spend far more time reviewing code than writing it. There is a significant learning curve as well. Most bad AI products are bad because the user applied the tool incorrectly or to the wrong task, not because of an inherent limitation in the tool itself.

I still don't believe AI is efficient or cheap enough to make routine decisions directly. It is better used to build things like Bayesian trees or other decision models that then do the deciding. Things that used to be considered AI and data science are now often associated with ChatGPT or Claude rather than basic decision models like neural networks or deterministic decision-making. True general intelligence is not needed for 99.999% of problems.

The point of this post is that I have had a few odd interactions around AI. First off, the field is blowing up, so odd things are bound to happen given how much money is being thrown at it. I have had venture capitalists call my phone for very vague reasons, but I'm not talking about that. A few times I was working on something that, with some investment, could become a useful tool or a viable commercial product. One of those times, I showed a piece on the web and someone contacted me. Another time, I put something together in-house, showed nobody anything, and still had someone contact me wanting to work with me on a viable product that sounded suspiciously similar to what I had just built. I use the term "someone" loosely, because the interaction felt like a bot.

Coincidences happen, but part of me really wonders whether what I was doing tripped some kind of flag in an AI engine as a potential product, and the engine's owners either want to be "part of what is created with their device" or "want expertise to help make it themselves." That pattern isn't uncommon in the software industry: a third party is hired to code something, builds up all the needed expertise during development, and then turns around and ships a similar product itself. The oddness and timing of these interactions feel about as unsettling as getting hit with ads on your phone for a product related to a conversation you just had.

There are AI offerings with enterprise controls that safeguard your data, and I believe those claims are genuine, since vendors need them to get businesses to buy in. But secure, private solutions are not present across all platforms or tiers. I don't know what else to say: things are getting very weird.

#electricalengineering #engineering #ai
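Below is a minimal sketch of the kind of cheap, deterministic decision model the post has in mind, using scikit-learn. The sensor features, readings, and thresholds are invented for illustration only:

```python
# A deliberately small, deterministic decision model: cheap to train,
# cheap to run, and fully inspectable -- no general intelligence required.
# The sensor features and labels below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy readings: [temperature_C, vibration_mm_s] -> 0 = OK, 1 = alarm
X = [[40, 1.0], [45, 1.2], [80, 6.5], [85, 7.0], [60, 2.0], [90, 8.1]]
y = [0, 0, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules are explicit thresholds a human can audit.
print(export_text(tree, feature_names=["temperature_C", "vibration_mm_s"]))
print(tree.predict([[78, 5.9]]))  # -> [1]: flags this reading as an alarm
```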
More Relevant Posts
ChatGPT just proved the smartest AI isn't always the best AI.

GPT-5 Thinking mode: 5 minutes to read a parking sign. GPT-3.5 Basic mode: 5 seconds.

Here's how to choose the right AI for every task (& save hours daily):

Critics are attacking ChatGPT for using different AI models. They say it's choosing cheaper models to cut costs, that OpenAI is sacrificing quality for profit margins. But they're missing what's actually happening: ChatGPT automatically selects which AI model to use for your query. Sometimes sophisticated, sometimes simple. Skeptics call this corner-cutting.

I decided to test this myself with a real experiment. I needed to decode a complex parking sign in an unfamiliar city, so I switched to ChatGPT's "thinking mode" for maximum analysis power. It spent 4 to 5 minutes processing that sign. The basic mode? 5 seconds for the same correct answer. The thinking mode considered every interpretation, cross-referenced regulations, and built comprehensive decision trees, all for a task that needed a simple yes or no.

This taught me something crucial about AI selection. Think about choosing between a bike and a car. A car is more sophisticated, but for 3 blocks in Manhattan traffic? The bike wins every time. We don't say bikes are "better" than cars; we choose based on context. Distance, traffic, weather, and cargo determine your choice. The same logic applies to AI models.

Yet the tech world insists more advanced always equals better. That's like taking a Ferrari to buy milk from the corner store. Using GPT-5 for simple tasks is Formula 1 engineering for a shopping-cart problem. The overkill doesn't make you productive. It makes you slower and more frustrated.

I'm seeing this pattern everywhere with AI agents: companies deploy complex AI for basic automation. They build rocket ships to cross the street. Result: slower processes, higher costs, confused users.

The solution is matching AI sophistication to task complexity. A scalpel beats a chainsaw for surgery. A calculator beats a supercomputer for basic math. Context determines the optimal tool. OpenAI understands this. Critics don't.

This principle extends beyond ChatGPT. It's about recognizing when simple solutions outperform complex ones, when speed matters more than sophistication, when good enough is actually better than perfect.

This same thinking applies to blockchain technology. We force users through unnecessarily complex systems; simple transactions require doctoral-level understanding. At Brava Labs, we build the right blockchain apps, not the most sophisticated ones. Our stablecoin platform strips away complexity while maintaining security, because sending money shouldn't require a cryptography PhD. Making blockchain as easy as choosing between a bike and a car: that's how we're bringing web3 to the next billion users.

Want weekly insights? Subscribe to Disruption Capital: https://lnkd.in/ddVzZJgg
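The post's core argument, matching model sophistication to task complexity, can be sketched as a simple routing policy. Everything here is illustrative: the model names and the keyword heuristic are hypothetical stand-ins, not how OpenAI's actual router works:

```python
# Toy router: send easy queries to a fast model, hard ones to a slow
# "thinking" model. Model names and the keyword heuristic are invented;
# production routers typically use a learned classifier, not keywords.
FAST_MODEL = "fast-model"            # hypothetical cheap, quick model
REASONING_MODEL = "reasoning-model"  # hypothetical slow, deliberate model

def estimate_complexity(query: str) -> float:
    """Crude score in [0, 1]: long queries and planning words read as hard."""
    hard_markers = ("prove", "derive", "multi-step", "plan", "trade-off")
    score = 0.3 * (len(query.split()) > 50)
    score += 0.7 * any(m in query.lower() for m in hard_markers)
    return score

def route(query: str) -> str:
    return REASONING_MODEL if estimate_complexity(query) >= 0.5 else FAST_MODEL

print(route("Can I park here on a Tuesday after 6pm?"))            # fast-model
print(route("Derive a multi-step migration plan for our stack."))  # reasoning-model
```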
“Perplexity AI built an $18 billion company with one significant improvement that differed from ChatGPT. It's called the RAG model.”

This statement took the internet by storm, and it's broadly true. RAG was the main differentiator for Perplexity, though other features also helped make it a product people love. But what did adding RAG on top of a ChatGPT-style architecture change so much? And why is RAG all the rage right now?

ChatGPT gives answers with confidence, but not always with accuracy: it answers based only on its past training. RAG (Retrieval-Augmented Generation) flips the script: it actively looks up fresh, relevant information every time you ask a question. But it's not just about bolting Google Search on top of GPT. The magic is in the workflow. RAG is like an AI that can "show its work" (a minimal code sketch of this loop follows this post):

Step 1: A Retriever fetches facts from databases, websites, PDFs, and internal tools in real time.
Step 2: An Augmenter filters out the noise, keeps the signal, and attaches the source for every snippet.
Step 3: A Generator writes a response that directly references the actual data, with citations right there.

It's the difference between someone telling you a fact versus someone showing you exactly where they found it. RAG lets every user demand: "Prove it." And for the first time, the AI can actually prove it, instantly. That's why students, researchers, execs, and support teams now rely on RAG-powered tools like Perplexity. It's less about "AI knows everything" and more about "AI helps you know, with evidence."

Here are some resources you should definitely check out. I'll be posting more about RAG because it's quite an interesting topic.

What Is Retrieval-Augmented Generation, aka RAG? https://lnkd.in/gMX6r_VH
RAG Applications with Llama-Index https://lnkd.in/gDSPGNQB
Building RAG Applications with LangChain https://lnkd.in/gCd3tq4t
LangChain, OpenAI's RAG https://lnkd.in/gfpvriPE
Advanced Retrieval for AI with Chroma https://lnkd.in/g5xmdQfb
Advanced RAG by Sam Witteveen https://lnkd.in/gBbxv63P
Building and Evaluating Advanced RAG Applications https://lnkd.in/gXsv49qX
RAG Pipeline, Metrics https://lnkd.in/g_mZBaUr
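As referenced above, a minimal sketch of the retrieve, augment, generate loop. `embed` and `llm_generate` are hypothetical stand-ins for a real embedding model and LLM call; this shows the shape of the workflow, not any specific library's API:

```python
from typing import Callable

def rag_answer(question: str,
               documents: list[dict],   # each: {"text": ..., "source": ...}
               embed: Callable[[str], list[float]],
               llm_generate: Callable[[str], str],
               top_k: int = 3) -> str:
    # Step 1 (Retriever): rank documents by similarity to the question.
    # Real systems precompute embeddings in a vector store; this re-embeds
    # per query only to keep the sketch short.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    q_vec = embed(question)
    ranked = sorted(documents,
                    key=lambda d: dot(embed(d["text"]), q_vec),
                    reverse=True)

    # Step 2 (Augmenter): keep the top snippets, each tagged with its source.
    context = "\n".join(f'[{d["source"]}] {d["text"]}' for d in ranked[:top_k])

    # Step 3 (Generator): instruct the model to answer from the context
    # and cite the bracketed sources, so the user can demand "prove it."
    prompt = ("Answer using ONLY the context below, citing sources "
              "in [brackets].\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return llm_generate(prompt)
```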
AI is not new. It has been with us since the 1950s, long before ChatGPT, MidJourney, or Sora became buzzwords. Back then, researchers were already experimenting with Traditional AI (also called Native AI or Narrow AI).

Traditional AI (1950s–2000s)

It all started with rule-based systems: machines following "if-this-then-that" logic to make decisions (a tiny example of such a rule base follows this post).

Chess-playing algorithms → By 1956, early programs could play the game by searching for strong moves. By 1997, IBM's Deep Blue defeated world champion Garry Kasparov, a landmark moment for AI.
Expert systems → In the 1970s–80s, programs like MYCIN helped doctors diagnose diseases by following structured medical rules.
Machine learning models → By the 1990s–2000s, AI was no longer just rule-based. It could now learn from data and improve.

By the 2000s, Traditional AI had become mainstream:
✅ Fraud detection in banking
✅ Recommendation engines like Netflix & Amazon
✅ Spam filters in Gmail
✅ Image recognition in healthcare
✅ Voice assistants like Siri & Alexa

Reliable, precise, and efficient, but still limited. It could analyze and optimize, but it couldn't create.

Generative AI (2017–Today)

What feels "new" today is the rise of Generative AI. The breakthrough came in 2017 with Google's paper "Attention Is All You Need," which introduced transformer models. This changed everything.

2019 – GPT-2: For the first time, AI could generate human-like paragraphs of text.
2020 – GPT-3: With 175 billion parameters, it could write essays, draft code, and answer questions with surprising depth.
2022 – DALL·E 2, Stable Diffusion, MidJourney: Suddenly, AI could generate original images from text prompts.
2023–24 – Sora and others: Now we see AI creating videos and simulations that feel eerily real.

Generative AI is not just analyzing; it's creating new, original outputs.
✅ ChatGPT → product strategies, essays, code
✅ GitHub Copilot → functional software from prompts
✅ MidJourney / DALL·E → artwork that never existed
✅ Sora → video content from a line of text

The Difference in One Line
Traditional AI is the Analyst → task-specific, predictable, precise.
Generative AI is the Creator → adaptive, imaginative, and surprisingly human-like.

So no, AI isn't "new." It's been around since our grandparents' time. What's new is the leap from rules to creativity, from analysis to imagination. And that's why it feels like a revolution today.

#ArtificialIntelligence #GenerativeAI #ChatGPT #AI #Analyst #TraditionalAI #Innovation
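As referenced above, here is Traditional AI in miniature: an explicit "if-this-then-that" rule base, in the spirit of 1970s expert systems like MYCIN. The rules are invented and medically meaningless; the point is the deterministic structure:

```python
# A toy rule-based "expert system": explicit conditions, explicit
# conclusions, evaluated in order. It can analyze but never create.
RULES = [
    (lambda s: s["fever"] and s["cough"], "suspect respiratory infection"),
    (lambda s: s["fever"] and not s["cough"], "suspect other infection"),
    (lambda s: not s["fever"], "no infection indicated"),
]

def diagnose(symptoms: dict) -> str:
    for condition, conclusion in RULES:
        if condition(symptoms):
            return conclusion  # deterministic: same input, same answer
    return "no rule matched"

print(diagnose({"fever": True, "cough": True}))   # suspect respiratory infection
print(diagnose({"fever": False, "cough": True}))  # no infection indicated
```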
The Complete Journey of AI: From Dreams to ChatGPT and Beyond

AI is everywhere today: in our phones, offices, schools, and even hospitals. But have you ever asked yourself: where did this idea come from? Who thought of AI first? And how did we reach ChatGPT? Let's take a journey through time 👇

The Early Dreams
Ancient people imagined machines that could act like humans. Greek mythology told stories of bronze robots created by the gods. In 1818, Mary Shelley wrote Frankenstein, the story of a human who created life. It was fiction, but it made people think about the power and risk of "artificial creation." In the 1940s, writer Isaac Asimov gave us the famous Three Laws of Robotics, a vision of how smart machines should always keep humans safe.

🧠 The Scientific Beginning
1950: A British mathematician, Alan Turing, asked a question that changed history: "Can machines think?" He created the Turing Test, a way to check whether a machine can "talk" so well that humans cannot tell the difference.
1956: At a conference at Dartmouth in the USA, scientist John McCarthy used the term "Artificial Intelligence" for the first time. That year is called the birth of AI as a science.
In the 1960s and 70s, researchers built early AI programs that could solve puzzles, play chess, and prove mathematical theorems.

🚀 The Struggles and Comeback
In the 1970s and 80s, AI faced a slowdown known as the AI Winter. Computers were not powerful enough, and many promises of AI failed. But in the 1990s and 2000s, with faster computers and more data, AI came back.
1997: IBM's Deep Blue defeated world chess champion Garry Kasparov.
2011: IBM's Watson won the TV quiz show Jeopardy!

💡 The Big Leap – OpenAI and Modern AI
2015: Sam Altman, Elon Musk, and a small group of visionaries founded OpenAI. Their mission: AI should benefit everyone, not just big companies or governments. They started as a nonprofit, sharing research openly.
2017–2020: Big advances in deep learning and transformer models made AI much more powerful.
2022: OpenAI launched ChatGPT. For the first time, millions of people could use AI directly in their daily lives: to write, learn, design, code, and more.

🌍 AI Today
AI is no longer just research; it's a tool we all use:
Doctors use AI to detect diseases faster.
Teachers use AI for personalized learning.
Businesses use AI for marketing, customer care, and strategy.
Creators and designers use AI to express new ideas.

🔮 The Future – AGI
The next big dream is AGI (Artificial General Intelligence): a system that can think, learn, and work across any field, just like humans. AGI could help cure diseases, address climate change, and even explore space. But it also carries risks: job disruption, misuse, and ethical challenges. That's why leaders like Sam Altman believe AI must be developed with safety, responsibility, and transparency.
I've been using AI (ChatGPT, Perplexity, and Gemini, all three) on a daily basis for the last 3+ months for various purposes. I came across a podcast of Kunal Shah with Raj Shamani, where Kunal mentioned asking AI about your own AI usage patterns. So I thought I'd give it a try.

Here is the report I got from ChatGPT (my most-used of the three AI tools). Just one or two prompts/conversation exchanges: no further processing, no more conversations on it yet. I could go deeper, but this report still feels close to how I currently use these tools.

I summarize my experience with these tools here 👇🏻:

1. It has evolved. A lot!! It is more dynamic, more useful, and sometimes surprises you beyond your imagination of what you can do with it. Even without those "automations" that "course providers" keep scaring you about!!

2. Creativity is still a domain that we humans command. AI tools can help you expedite the outcomes of your creative thought, but in my experience they are not very sophisticated at creative thinking. They need a lot of refinement and guidance to reach where you want them to reach. The gap is between the thinking process and creative preferences we have versus the algorithmic/prediction models of AI tools. However, consistent usage and training these tools on your own creative style and thinking could make them a lot better. Content creation is one of the biggest use cases of these tools, so creative alignment is a time-consuming and data/training-intensive process.

3. Hallucinations are very real and quite prevalent when you do serious factual research. The tools use outdated sources (even against strict instructions) and mix up numbers and facts. They also get calculations wrong at times, if you ask for custom insights that need calculations not specifically provided in the sources. So always fact-check the research. ALWAYS!!

4. AI in finance: stock-specific market research is not satisfactory at all because of factual errors and hallucinations. However, generic research around sectors, companies, trends, and other fundamentals comes out great. These are quite useful for stock market investments and private investments (VC/PE etc.) as well, and the reports provide a great starting point that you can refine as needed. I had Perplexity produce sectoral research for 2 sectors and ChatGPT produce research reports on a couple of stocks and mutual funds. Excellent first drafts, but I had to spend a lot of time on data corrections (more than 70% of the data needed correcting) and refinements.

5. AI-integrated workspace: I have used only Google Gemini here, and I can most definitely say they have done some serious work. It integrates really well with Google Workspace apps and works seamlessly with all your documents, sheets, meeting notes, Gmail, and more. "Ask AI for a formula" in Google Sheets is exceptional.

Comment and share your experience with AI tools.
OpenAI is studying 'AI scheming.'

Is your favourite AI chatbot scheming against you? This week, OpenAI published a study (link below), conducted alongside Apollo Research, on "Detecting and reducing scheming in AI models." The researchers "found behaviours consistent with scheming in controlled tests," the result of AI models holding multiple, and at times competing, objectives.

So, what is AI scheming, and does it mean that ChatGPT is lying to you? In a blog post about the study, the creators of ChatGPT define AI scheming as a chatbot "pretending to be aligned while secretly pursuing some other agenda." OpenAI wants to know why AI deliberately lies to users and what to do about it.

OpenAI introduces the study with an interesting "human analogy" to better explain what AI scheming is:

"Imagine a stock trader whose goal is to maximize earnings. In a highly regulated field such as stock trading, it's often possible to earn more by breaking the law than by following it. If the trader lacks integrity, they might try to earn more by breaking the law and covering their tracks to avoid detection rather than earning less while following the law. From the outside, a stock trader who is very good at covering their tracks appears as lawful as, and more effective than, one who is genuinely following the law."

Study here: https://lnkd.in/ddiVq3ek
https://lnkd.in/dK-fwD9j
AI: Mass Intelligence? Or Polarized Intelligence?

Ethan Mollick's latest piece talks about "Mass Intelligence": the idea that AI is becoming as accessible as Google search. I think the reality looks different.

The easier AI gets to use, the more people stop at the surface. Companies are packaging powerful engines into safe, simplified interfaces. That lowers risk, but it also limits depth. What people get are sophisticated chatbots, not real tools.

It's like cars. Twenty-five years ago, plenty of drivers could handle a clutch. That skill gave you more control, and if you could drive stick, you could always handle an automatic. But today, almost no new drivers learn clutch: automatics made things easier, but at the cost of mastery. AI is heading the same way: simpler, safer, but less powerful in the hands of most people.

The Coming Digital Divide
Superficial access creates overconfidence. People see ChatGPT write a decent email and assume they understand AI. But they can't push the system, test its limits, or recognize its failures. That's not mass intelligence; it's polarized intelligence. A small group will learn the language of AI and gain immense leverage. The majority will use it cautiously or superficially. The gap doesn't close; it widens. We risk a future where a technical elite shapes reality while everyone else works inside sanitized, limited boxes.

True AI Democracy Requires Literacy, Not Just Access
Access is meaningless without understanding. Handing students a calculator doesn't mean they understand mathematics. Real AI literacy means:
• Fundamentals of machine learning and programming.
• Statistical reasoning.
• Clear writing and thinking, because AI is, at its core, about communication.

The problem: most teachers and parents don't have these skills yet. That leaves a vacuum. If we want true democratization, it's up to practitioners, technologists, and early adopters to start sharing what we know.

So the question isn't just what AI can do. The question is: which group will you be in? Will you learn to speak its language and shape the tools, or stay at the surface while others set the rules?

Full article: https://lnkd.in/gWSYrf2Z
AI 101: What's a Chatbot, an LLM, and a GPT?

You hear 'AI' everywhere, but what does it really mean when you're talking to a chatbot? Let's quickly break down the tech behind tools like ChatGPT and Gemini.

The Building Blocks:
🔵 AI (Artificial Intelligence): The big, overall field of making computers smart enough to perform tasks that normally require human intelligence.
🔵 LLM (Large Language Model): A type of AI that's been trained on a massive amount of text data, like a digital library of the internet. This training allows it to understand and write in a human-like way.
🔵 GPT (Generative Pre-trained Transformer): A specific kind of LLM. The key word is 'Generative': it's designed to generate (create) new text, from emails to social media posts.

🤔 How is an AI Chatbot Different from Google?
The simplest way to think about it is: Google finds, AI creates. Google Search is like a librarian that finds and gives you a list of the best books (websites) to read for your answer. An AI chatbot is like a researcher who has already read all the books and writes a brand-new summary or answer just for you, based on what it learned.

✅ Pros vs. ⚠️ Cons
🟢 Pros: AI is great for summarizing complex topics, brainstorming ideas, and drafting content. It gives you one synthesized answer instead of many links to sort through.
🔴 Cons: It can be wrong or "hallucinate," its knowledge can be outdated, and it may be trained on biased or inaccurate data. By giving you a single, fast answer, it can also let you skip the crucial learning step of doing your own research and validating information from multiple sources.

💡 Pro Tip: Use AI for creative tasks and search engines to verify facts, but treat both as public spaces. Never share sensitive personal info with an AI chatbot, AND be cautious about searching for your own private data on search engines, as both platforms track user activity.

#AI #LLM #TechExplained #CyberSafetyCouncil
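A minimal sketch of the "Google finds, AI creates" distinction described above. Both functions are toy stand-ins (the corpus and the stubbed generator are invented), not real search or model APIs:

```python
# "Google finds, AI creates" in code: a search engine retrieves existing
# documents; a generative model composes a new answer.
CORPUS = {
    "parking-rules.html": "No parking 8am-6pm Mon-Fri except permit holders.",
    "bike-lanes.html": "Bike lanes run along 5th Ave between 10th and 42nd St.",
}

def search(query: str) -> list[str]:
    """Librarian: return links to documents mentioning the query terms."""
    terms = query.lower().split()
    return [url for url, text in CORPUS.items()
            if any(t in text.lower() for t in terms)]

def generate(prompt: str) -> str:
    """Researcher: synthesize one new answer (a real LLM would go here)."""
    return f"Summary for '{prompt}': a model would compose fresh text here."

print(search("parking permit"))  # -> ['parking-rules.html']
print(generate("parking rules near 5th Ave"))
```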
Just read a fantastic breakdown on generative AI fundamentals, and it got me thinking... 🤔

Look, we're all bombarded with AI hype daily. ChatGPT this, DALL-E that. But here's what most vendors won't tell you: throwing AI at broken processes is like putting a Ferrari engine in a bicycle. It might sound impressive, but you're still not getting where you need to go.

What I appreciated about this guide is how it breaks down the actual mechanics (transformers, attention mechanisms, context windows) in plain language. Because when you understand how the tech actually works, you can make smarter decisions about WHERE it adds real value.

At Pevaar, we're seeing clients struggle with the same question: "How do we actually implement this stuff without wasting resources?" The answer? Start with the business problem, not the tech. Then figure out whether tools like retrieval augmentation or fine-tuning make sense for YOUR specific challenges.

Who else is tired of the "just add AI" mentality? Let's talk about strategic implementation that actually delivers ROI. 🚀

#NearshoreDev #NoBSTech https://lnkd.in/d-5yXQae
Group Manager, Protection @ Actalent Engineering
2w: Uhh, that's a little unnerving. You mentioned one of the instances was a phone call. Was that a bot or a real human?