As the number of companies mentioning AI in quarterly reports rises, fewer of them are expressing positive views about the technology now (about 5%) than they did in 2022 (about 40%), according to the Financial Times' analysis of 10-K corporate filings. The most commonly cited concern was cyber security, mentioned as a risk by more than half of the S&P 500 in 2024. Online dating group Match warned that "the use of AI has been known to result in, and may in the future result in, cyber security incidents that implicate the personal data of end users of AI-enhanced services."

While AI suppliers "such as Microsoft, Alphabet, Amazon and Meta have regularly extolled AI's benefits, pledging to invest $300bn this year alone to develop the infrastructure around large language models," other large companies such as Coca-Cola and sportswear maker Lululemon paint a "more sober picture of the technology's usefulness, expressing concern over cyber security, legal risks and the potential for it to fail."

A few users do claim successes: Paycom, a payroll services provider (an important differentiator for attracting and retaining clients); Huntington Ingalls, a military supplier (battlefield decisions); animal health group Zoetis (speeding up medical tests for horses); and manufacturer Dover Corporation (a new process for tracking hail-damaged vehicles through to their repair).

The FT concludes that "When it comes to AI adoption, many companies aren't guided by strategy but by Fomo. For some leaders, the question isn't 'What problem am I solving?' but 'What if my competitor solves it first?'" More than Fomo? For instance, "during an earnings call in February, Coca-Cola was excited about the #technology — even though the key use was in the production of a TV commercial." "Most of the anticipated benefits, such as increased productivity, were vaguely stated and harder to categorise than the risks."

"Filings do reveal that the companies able to give clear AI upsides include those that serve the rising AI-driven data centre boom. Energy companies First Solar and Entergy cited AI as a demand driver. Freeport-McMoRan, which has a stockpile of copper, stated that 'data centres and artificial intelligence developments' would support the metal's price."

In summary, "The biggest US-listed companies keep talking about artificial intelligence. But other than the 'fear of missing out' few appear to be able to describe how the technology is changing their businesses for the better," particularly among those purporting to use #AI. #innovation #artificialintelligence #hype
I wonder what the next earnings call will bring. (Not really - more of the same: "solution looking for a problem".) Still, it should be enough to tide us over for the balance of the 40 weeks to the blow-off top. Unless the crypto bubble really takes off (that buys another 50 weeks). Waiting is the hardest part.
Imagine what that money could otherwise do to lift us all up. Imagine if we weren't doubling down on poisoning our precious green and blue planet, and laying waste to the abundance of life it nurtures, while driving our costs of living to unsustainable levels. Imagine if our leaders and the financially fortunate weren't obsessed with enthralling all of us with trivial pursuits and silly toys to protect their privilege, and preserve their power. It doesn't have to be like this, day after day. We don't have to go along with this misanthropic mission that is anti-life. #SystemicInnovationIntelligence
Dr. Jeffrey Funk GenAI rarely delivers the autonomous transformation promised. In almost every practical application, we still need significant human oversight of outputs - whether that's reviewing AI-generated content, validating recommendations, or quality-checking automated processes. The Pareto principle emerges consistently: GenAI might handle 80% of a task efficiently, in a single-digit percentage of the time it previously took, but the remaining 20% - the edge cases, nuanced decisions, and quality control - still requires human intervention and may take more like 80% of the time. This means the productivity gains are incremental, not revolutionary. Furthermore, to the humans involved this may come across as an unbearable cognitive load, because the AI can spit out many instances of the "80%" in a short time. The parts of the work that the human worker used to do on autopilot might be mostly gone.
Most of the discussion frames this as AI FOMO vs. AI payoff. But I’d argue the real divide is between AI experiments that live in marketing decks and AI systems that are welded into operations. When you look at Coca-Cola talking about an AI-made TV commercial, you’re seeing AI as a novelty—cheap experimentation. But when you look at Zoetis running faster animal health tests or Dover tracking damaged vehicles, that’s AI as process re-engineering. One is “buzz,” the other is “plumbing.” The irony is that companies worry about cyber risk while still chasing hype—when in reality, the best defense against AI risk is the same as the best argument for its ROI: fit it to real business problems you already own. If your data, workflows, and customers aren’t part of the design, no $300bn infrastructure pledge will make AI useful for you. So maybe the right question isn’t “What if my competitor solves it first?” but “If I strip out the hype, does AI still solve a problem worth solving?”
Cared and shared. The whole thing is a cross between a US PR exercise for control and the maintenance of a scheme to grift what is left of the real-world economy.
Appreciate the tag Dr. Jeffrey Funk, great points raised! In addition, I came across a great video this morning by the artist who created #AfterSkool (on YouTube), which highlights some important considerations we [all] ought to weigh regarding #CorporateOperations #CapEx #investment & the true impact(s) on #stockholder & #stakeholder VALUE, in several contexts, over short- and long-term durations moving forward. I've linked the ~8 minute clip below; it is worth taking a break to watch, IMO. About: Tristan Harris is an American technology ethicist. He is the executive director and co-founder of the Center for Humane Technology. Harris has appeared in the Netflix documentary #TheSocialDilemma. The film features Harris and other former tech employees explaining how the design of social media platforms nurtures addiction to maximize profit and manipulates people's views, emotions, and behavior. The film also examines social media's effect on mental health, particularly of adolescents. https://youtu.be/eju_HbzpIjQ?si=FEmtNx1huY3d-uRd
"Most of the anticipated benefits, such as increased productivity, were vaguely stated and harder to categorise than the risks.” - If risks are easier to articulate than benefits then the solution is not fit for purpose.
Ivo Koutsaroff (掘露 伊保龍, KUTSURO Iboryu) StJohn "Singe" Deakins Lewis Binns MSc CEng MIET Libor Bešenyi Borislav Boyanov Michael Fulk Sina Jafarian Stanisław Kowalik, PhD Eng Kristen Imperatore Boris Shubin Wendy Sowinski Catriona Kennedy Sanjeev B Ahuja Ivan Snopyk Lewin Wanzer Peter Meijer ChandraKumar R Pillai Haihao Liu 刘海昊 Dr. Alberto Chierici Frederick Gibson Supriyo SB Chatterjee Rajeev M A Aris-Konstantinos Stamatis Moshe Vardi Albert Sabater Coll Giles Crouch Jean-Jacques Urban-Galindo Al Jones