Generative AI's reflection of power & culture, and what to do about it
As people & companies experiment with generative AI, it's more important than ever for us to consider how the technologies we interact with shape our world. If we've learned anything from social media, it's that what we create and consume influences us in ways we are just beginning to understand.
It’s why policy innovation—creating the rules for how tech is integrated into society—is just as important as technical innovation. Napster, Kazaa, iTunes, & Spotify all fundamentally enable playing music from a device, but the policy shifts that underpinned their products were what differentiated them & enabled improvements in quality. What are the creative policy solutions we need to unlock better, more equitable outcomes from AI?
There are untold benefits that we'll uncover with generative algorithms. Already I've been in touch with companies that look to enable artists & designers to co-create with AI, using its capabilities to broaden the scope of their creativity. In addition, imagine the positive impact for people with learning disabilities who might find certain parts of language to be a challenging medium for communication.
But historical exclusion has resulted in data gaps, making under- and misrepresentation a significant challenge. In Celeste Ng's speculative novel 'Our Missing Hearts,' a Chinese-American child struggles to understand his identity when he grows up surrounded by anti-Chinese rhetoric. What happens if historically excluded stories aren't incorporated into generative AI—or worse: if they are narrated by people outside of excluded communities?
Whose responsibility is it to ensure the kind of inclusion that enables grounded, accurate, and fair narratives about historically excluded communities? Companies are increasingly aware of these gaps—should they aim to fund and fill them? Or should governments fill them, as they consider how best to ensure 'data readiness' for AI uptake? And what's the role of civil society in all of this?
Traditionally, scientists & companies have focused on building technology while policymakers regulate to ensure a fair and equitable distribution of benefits. But for many reasons, the center of gravity (and responsibility) for moral labor can't sit solely with governments. First, public funding used to be the major vehicle for science & technology; that's no longer true today. Second, cutting-edge technology is moving so quickly that the few building it are often the only ones with the expertise to fully understand it and make appropriate adjustments. Third, regulation often takes too long—already there are signs that regulatory proposals might be obsolete by the time they're implemented. And finally, inventors do bear a moral responsibility for their creations' use & outcomes—especially if their inputs perpetuate or exacerbate existing injustice.
So how do we redistribute the burden of moral labor in the case of generative AI? Many companies are building generative AI as a step toward general purpose AI. Can any technology truly be general unless it serves the margins—or does 'general' really mean the majority, or those in power?
We have to start with the levers for incentivizing the equitable distribution of technology's benefits. To borrow from Larry Lessig, in a capitalist environment that means starting with markets, which shape norms of corporate behavior, and then scaling through laws.
Markets: What are venture capitalists requiring for the companies they fund? Are accuracy, groundedness, and minimizing harm core requirements for investments? How can public boards exercise their responsibility to govern companies that impact how we live & hold them to account as part of their fiduciary responsibility?
Norms: Are companies releasing technologies in ways that will lead to equitable outcomes or the replication of existing bias? What are the technical practices & standards we can invest in as an industry to resolve challenges—like responsible disclosure in security or bug bounties, exposing bugs/biases and then rectifying them with good reference datasets that offer contrasting views?
Laws: What are some future-proof or technology-neutral regulations that would stand the test of time—pushing toward outcomes that serve the public interest rather than prescribing practices that will be ignored or become outdated? Policymakers are the only part of the incentive chain empowered with the democratic mandate to reflect what society wants to see; they must help scale behaviors that are in line with societal goals.
It's incumbent on us all to push toward better models that more accurately reflect our realities—with first person narratives that enable communities to tell their own stories about their lived experiences. What is that practice at an individual level for us all as we engage with generative AI? I'd start there.