AI and Trust
"Can we build a reliable network out of unreliable parts in an unreliable world?" This edition of "Advances in Computing" features an opinion piece by Bruce Schneier making a case for why we need trustworthy AI.
Also included: an opinion piece on reclaiming the mission of higher education, a research article on how to combat malicious AI models, and hand-picked stories from Interactions, ACM's magazine on user experience and interaction design.
Enjoy!
This is a discussion about artificial intelligence (AI), trust, power, and integrity. I am going to make four basic arguments: One, that there are two different kinds of trust, interpersonal trust and social trust, and that we regularly confuse them. Two, that this confusion will increase with AI, and that we will think of AIs as friends when they are really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. And four, that this is why we need trustworthy AI.
Okay, so let’s break that down. Trust is a complicated concept, and the word is overloaded with many different meanings.
There’s personal and intimate trust. When we say we trust a friend, it is less about their specific actions and more about them as a person. It’s a general reliance that they will behave in a trustworthy manner. Let’s call this “interpersonal trust.”
There’s also a less intimate, less personal type of trust. We might not know someone personally or know their motivations, but we can still trust their behavior. This type of trust is more about reliability and predictability. We’ll call this “social trust.” It’s the ability to trust strangers.
Interpersonal trust and social trust are both essential in society. This is how it works. We have mechanisms that induce people to behave in a trustworthy manner, both interpersonally and socially. This allows others to be trusting, which enables trust in society. And that keeps society functioning. The system isn’t perfect—there are always untrustworthy people—but most of us being trustworthy most of the time is good enough.
I wrote about this in 2012, in a book called Liars and Outliers.
I wrote about four trust-enabling systems: our innate morals, concern about our reputations, the laws we live under, and security technologies that constrain our behavior. I wrote about how the first two are more informal than the last two, and how the last two scale better and allow for larger and more complex societies. They’re what enable trust amongst strangers.
What I didn’t appreciate is how different the first two and the last two are. Morals and reputation are person to person, based on human connection. They underpin interpersonal trust. Laws and security technologies are systems that compel us to act trustworthy. They’re the basis for social trust.
Taxi driver used to be one of the country’s most dangerous professions. Uber changed that. I don’t know my Uber driver, but the rules and the technology let us both be confident that neither of us will cheat or attack the other. We are both under constant surveillance, and we are competing for star rankings.
Lots of people write about the difference between living in high-trust and low-trust societies. That literature is important, but for this discussion, the critical point is that social trust scales better. You used to need a personal relationship with a banker to get a loan. Now it’s all done algorithmically, and you have many more options.
That scale is important. You can ask a friend to deliver a package across town, or you can pay the post office to do the same thing. The first is interpersonal trust, based on morals and reputation. You know your friends and how reliable they are. The second is a service, made possible by social trust. And to the extent that it is a reliable and predictable service, it’s primarily based on laws and technologies. Both can get your package delivered, but only the second can become a global package delivery service like FedEx.
Because of how large and complex society has become, we have replaced many of the rituals and behaviors of interpersonal trust with security mechanisms that enforce reliability and predictability: social trust.
But because we use the same word for both, we regularly confuse them. When we do that, we are making a category error. We do it all the time, with governments, with organizations, with systems of all kinds—and especially with corporations.
We might think of them as friends, when they are actually services. Corporations are not moral; they are precisely as immoral as they can get away with.
Both language and the law make this an easy category error to make. We use the same grammar for people and corporations. We imagine we have personal relationships with brands. We give corporations many of the same rights as people. Corporations benefit from this confusion because they profit when we think of them as friends.
We are about to make this same category error with AI. We’re going to think of AI as our friend when it is not.
There is a through line from governments to corporations to AI. Science fiction writer Charlie Stross calls corporations “slow AI.” They are profit-maximizing machines. The most successful ones do whatever they can to achieve that singular goal. David Runciman makes this point more fully in his book, The Handover. He describes governments, corporations, and AIs all as superhuman machines that are more powerful than their individual components. Science fiction writer Ted Chiang claims our fears of AI are basically fears of capitalism and that the paperclip maximizer is basically every start-up’s business plan.
This is the story of the Internet. Surveillance and manipulation are its business models. Products and services are deliberately made worse in the pursuit of profit.
We use these services as if they are our agents, working on our behalf. In fact, they are double agents, also secretly working for their corporate owners. We trust them, but they are not trustworthy. They’re not friends; they’re services.
It’s going to be the same with AI. And the result will be worse, for three reasons.
The first is that these AI systems will be more relational. We will be conversing with them, using natural language. As such, we will naturally ascribe human-like characteristics to them.
I actually think that websites will largely disappear in our AI future. Static websites, where organizations make information generally available, are a recent invention—and an anomaly. Before the Internet, if you wanted to know when a restaurant opened, you would call and ask. Now you check the website. In the future, you—or your AI agent—will once again ask the restaurant, the restaurant’s AI, or some intermediary AI. It’ll be conversational: the way it used to be.
This relational nature will make it easier for those corporate double agents to do their work. Did your chatbot recommend a particular airline or hotel because it’s the best deal for you? Or because the AI company got a kickback from those companies? When you asked it to explain a political issue, did it bias that explanation toward the political party that gave it the most money? The conversational interface will help the AI hide its agenda.
The second reason is power. Sometimes we have no choice but to trust someone or something because they are powerful. We are forced to trust the local police because they’re the only law enforcement authority in town. We are forced to trust some corporations because there aren’t viable alternatives. Or, to be more precise, we have no choice but to entrust ourselves to them. We will be in this same position with AI. In many instances, we will have no choice but to entrust ourselves to their decision making.
The third reason to be concerned is that these AIs will be more intimate. One of the promises of generative AI is a personal digital assistant that acts as your advocate to others and as an assistant to you. This requires a greater intimacy than your search engine, email provider, cloud storage system, or phone. It might even have a direct neural interface. You’re going to want it with you 24/7, training on everything you do, so it can most effectively work on your behalf.
And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.
You will default to thinking of it as a friend. It will converse with you in natural language. If it is a robot, it will look humanoid—or at least like an animal. It will interact with the whole of your existence, just like another person would.
And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you respond to.
All of this is a long-winded way of saying we need trustworthy AI ...
Visit the full article here.
More like this:
What is the fundamental purpose of higher education?
This is not to say that truth, inquiry, and social justice are not goals of higher education; rather, they are means toward an end, which is the public good.
Unveiling the dark side.
Detecting and mitigating malicious AI models is challenging due to their complexity and opacity.
Interactions - All About HCI:
To better predict the impact of this intimate human-technology interaction, this article sheds light on the experiential needs of preterm care and argues for a collaborative, integrative, and hands-on approach to designing technologies for the very young.
Vogue Ukraine partnered with Google, testing AI's ability to detect patterns that our eyes see but our brains struggle to decode.
Discover our past editions:
Enjoyed this newsletter?
Subscribe now to keep up to speed with what's happening in computing. The articles featured in this edition are from CACM, ACM's flagship magazine on computing and information technology, and Interactions, a home to research related to human-computer interaction (HCI), user experience (UX), and all related disciplines. If you are new to ACM, consider following us on X | IG | FB | YT | Mastodon | Bsky | Threads. See you next time!
@ Owl Labs
2mo"David Runciman makes this point more fully in his book, The Handover. He describes governments, corporations, and AIs all as superhuman machines that are more powerful than their individual components." a forest has special properties. these special properties doesn't mean it is no longer a collection of trees. many of our problems are because we have failed to universalize human ethics and law. a government or corporation can behave in ways that if done by a individual, society would want to see them locked up. corporations and governments are made up of people. the trademarks, logos, flags and uniforms does not change this. corporations are not people. the infinite customization provided by AI agents is discrimination. as distasteful/ugly as it can be sometimes, people are free to discriminate. service providers do no get the same flexibility. the ability to freely discriminate was traded for a reduction in liability. what we have is Schrödinger's corporations. they want to choose when to be a person and when to be a limited liability. trustworthy AI can only begin when society forces them to pick a lane. we need to have trustworthy AI/technology in order to have equality under the law.