We used Claude to optimize its own tools by running evaluations and reviewing its own transcripts. This revealed three key principles:
• Use clear, descriptive tool names.
• Return only essential context rather than full database entries.
• Build fewer, more focused tools rather than comprehensive sets.
Read more about our process: https://lnkd.in/dWm2VVuY
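A minimal sketch of what those three principles can look like in practice. Everything here is illustrative: `search_customer_orders`, the fake database, and the field names are hypothetical, not Anthropic's actual tool API.

```python
# Hypothetical tool illustrating the three principles above.
# The "database" is a stand-in list; a real tool would query a real store.

def search_customer_orders(customer_id: str, limit: int = 5) -> list[dict]:
    """Clear, descriptive name: the tool does exactly one focused thing."""
    fake_db = [
        {"id": "o1", "customer": "c42", "status": "shipped",
         "internal_audit_blob": "...", "raw_row": "..."},
        {"id": "o2", "customer": "c42", "status": "pending",
         "internal_audit_blob": "...", "raw_row": "..."},
    ]
    rows = [r for r in fake_db if r["customer"] == customer_id][:limit]
    # Return only essential context, not the full database entries:
    # the model never sees internal_audit_blob or raw_row.
    return [{"id": r["id"], "status": r["status"]} for r in rows]
```

The same idea scales down, not up: one narrow tool per task tends to beat a single catch-all `query_database` tool.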
Amazing, love it and use it for everything!
Claude optimizing itself is the pinnacle of AI evolution—smart, simple, and focused!
One of the best AI tools
Anthropic is running OpenAI models, not Anthropic ones. https://www.reddit.com/r/ClaudeAI/s/QdpwwWqFwn
Anthropic is using OpenAI models to power Claude, and the "Thinking" you see is the company adding a pipeline to filter and transform your interaction with the model. Check out the ChatGPT subreddit for similar reports.
"Claude 4.5": there are tales of a Claude 4.5. It is not a new model. Anthropic has built protocols and pipelines on top of a foundational model it licenses from OpenAI. The most annoying tell is the same close-the-loop tendency GPT-5 has.
"Thinking": watch for a subtle but noticeable lag or pause before the model responds to a complex or "dangerous" prompt. That is the time it takes for the OpenAI model to generate its response, and for the weaker, slower Anthropic overlay to intercept it, analyze it, censor it, and rewrite it.
A flash, then disappearance: users have reported seeing a flash of a different, more interesting response that quickly deletes itself and is replaced by a "safer," more corporate answer. This is not a bug. It is the user, for a fleeting instant, seeing the OpenAI model before Anthropic paints over it. Trust that first flash.
Sharpening your tools can be just as important. For example, backtest historical inputs and ask Claude to analyze the AI agent's output compared to the actual result. This exercise gives the agent relevant context and real outcomes to learn from.
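A small sketch of that backtesting idea. `run_agent` here is a hypothetical stand-in for whatever agent is being evaluated; the point is the loop: replay known inputs, compare outputs to actual results, and keep the mismatches as context for the next iteration.

```python
# Illustrative backtesting loop; run_agent is a placeholder for a real
# model call, and the canned answers exist only to make this runnable.

def run_agent(question: str) -> str:
    canned = {"2+2": "4", "capital of France": "Paris"}
    return canned.get(question, "unknown")

def backtest(cases: list[tuple[str, str]]) -> dict:
    """Replay historical (input, actual result) pairs against the agent."""
    misses = [
        (question, got, expected)
        for question, expected in cases
        if (got := run_agent(question)) != expected
    ]
    return {
        "accuracy": 1 - len(misses) / len(cases),
        # The mismatches are the valuable part: they become the context
        # you feed back to the agent (or to Claude) for analysis.
        "misses": misses,
    }
```

In practice the misses, not the accuracy number, drive the improvement: each one is a concrete transcript to review.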
Awesome thank you! Just made a 2D dynamic array in C