Elon Musk’s Grok chatbot spewed election disinformation
[Photo: Anlomaja/Adobe Stock; Mariia Shalabaieva/Unsplash; Donghun Shin/Unsplash]

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.

This week, I’m focusing on news that xAI’s Grok chatbot gave out incorrect information that led to viral disinformation on X and elsewhere. I also look at AI’s role in the recent market sell-off, and at the fight over California’s AI bill.

Sign up to receive this newsletter every week via email here. If you have comments on this issue or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.


Elon Musk’s Grok spewed election disinformation for more than a week

The result of this year’s hotly contested presidential election will likely be decided by a narrow slice of independent and undecided voters in a handful of swing states. It’s against this backdrop that X’s AI chatbot, Grok, spewed a heavy dose of election misinformation about Joe Biden’s decision not to run. 

Shortly after President Biden announced on July 21, 2024, that he would not seek reelection, Grok began stating that it was already too late to switch out Biden’s name for Harris’s at the top of the ticket in nine states. The chatbot continued to propagate this false claim until July 31. Although only Premium users can access Grok, and the chatbot carries a disclaimer that it may get things wrong, MAGA accounts promptly amplified the claim across X and other social platforms, giving fodder to conspiracy theories.

As a result, five secretaries of state sent a letter to Elon Musk this week demanding that he “immediately implement changes to X’s AI search assistant, Grok, to ensure voters have accurate information in this critical election year.” Rather than feeding information to users, the letter says, Grok should direct users to the neutral voter information site CanIVote.org (as OpenAI’s ChatGPT does). 

Click here to read more about the secretaries of state’s demand that X implement changes to Grok.


Did Wall Street just lose its religion on AI?

The so-called Magnificent Seven stocks are having a rough go. The group, which comprises Apple, Tesla, Alphabet (Google), Nvidia, Microsoft, Amazon, and Meta, has pushed the market to multiple all-time highs this year, but the seven companies lost a combined $1.3 trillion in market capitalization from July 31 through August 5, including $650 billion wiped out on Monday alone. The stocks, along with the rest of the market, have since stabilized somewhat but remain well short of a full recovery.

The sell-off was likely driven by factors unrelated to the companies and their products: the Fed hasn’t lowered interest rates, and unemployment is rising, to name two. But the Magnificent Seven companies are among the most active in developing and productizing new AI technology, which has fueled speculation that the sell-off partly reflects a growing belief among investors that the value created by new AI has been overstated, inflating an “AI bubble.”

Click here to read more about how Wall Street drama is affecting AI companies.


AI companies object to California AI bill’s ‘pre-harm’ sanctions

A bill that would regulate the development and use of AI has been moving through the California statehouse for months, and has drawn increasing debate as it nears passage. Senate Bill 1047 would require companies to conduct safety testing on advanced AI systems and shoulder liability for any catastrophic harm those systems cause. Many across the AI community have said that the bill overreaches, and would stifle new investment in AI as well as slow the race toward artificial general intelligence.

Anthropic and others oppose the bill because it lets the state sue AI companies for failing to maintain a compliant, transparent safety framework. Opponents argue the bill should instead incentivize effective safety practices through the threat of enforcement action, including penalties, if and when an AI system actually causes harm (such as devising a cyberattack or helping to create a bioweapon).

Click here to read more about AI companies’ concerns around California’s new AI bill.


More AI coverage from Fast Company:


Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
