Ariel Guersenzvaig’s Post

Technology Ethicist. Professor at ELISAVA Barcelona School of Design and Engineering | Author of ‘The Goods of Design’ (Rowman & Littlefield, 2021) - CHOICE 2022 Outstanding Academic Title.

This by Aaron Benanav is really, really good: “The real threat posed by generative AI is not that it will eliminate work on a mass scale, rendering human labour obsolete. It is that, left unchecked, it will continue to transform work in ways that deepen precarity, intensify surveillance, and widen existing inequalities. Technological change is not an external force to which societies must simply adapt; it is a socially and politically mediated process. Legal frameworks, collective bargaining, public investment, and democratic regulation all play decisive roles in shaping how technologies are developed and deployed, and to what ends.” This comes from the preface to the Brazilian edition of “Automation and the Future of Work”. The original book is from early 2022 and while for obvious reasons it doesn’t discuss Generative AI, it’s still very much worth reading. You can read the whole preface here: https://lnkd.in/dfkXPruf

  • Cover of “Automation and the Future of Work”
by Aaron Benanav
Dr. Martin Schiele

AI architect | GDPR-compliant AI systems, independent of big tech - directly in your IT infrastructure | AI | data protection | automation | infrastructure | B2B


Sounds like wishful thinking to me. Technology doesn't care about our political frameworks... it just advances. And the countries that embrace it fastest will outcompete those trying to regulate it into submission. We should focus on adapting rather than pretending we can control the inevitable.


That’s a great recommendation! It reminded me of last week’s episode of The Diary Of A CEO with Dr. Roman Yampolskiy, one of the leading voices on AI safety. It’s been the most popular episode this month! For me, AI gives all of us the opportunity to expand human capabilities and to tackle humanity’s toughest challenges, like climate change. https://youtu.be/UclrVWafRAI?si=pNI6drTrcNYJ2JD9

Marta Bieńkiewicz

Connecting the dots for responsible innovation


For the like-minded and not like-minded, I also recommend a recent interview with Aaron: https://techwontsave.us/episode/293_a_new_economy_will_deliver_better_technology_w_aaron_benanav

The book "The Second Machine Age" set the stage, and automating with AI raises the risks discussed in this post to another level. One consulting firm almost bankrupted a $6 billion company in Ohio by trying to make everything more efficient. Degenerative AI pushers are clueless about the kinds of business risks they are unknowingly creating. Cybersecurity risks aside, the unbridled push to automate with AI has a cautionary example in a piece Spectrum published in its August edition: "Move Too Fast, Risk Systemic Blow Back." When speed is everything, people pay the price. Developers should be aware of the term "unintended consequences": plant automation turned reinvention into destruction. The magazine also discusses the push by George W. Bush to implement Electronic Health Record systems in the US.

Larry McGinity

Creator of the Art as a Derivative concept. Uniquely, I make art about the people and structures shaping financial markets.


How could all this have become a societally empowering epoch of technical advance rather than a semi-permanent hit-job on collective stability? Were we to begin all over again, we would surely adopt a form of trial by jury and apply it to technical advances that have the power of societal intrusion. The unbridgeable gulf in understanding between the engineers and architects who make our tech and the user who, willingly or otherwise, is caught up in exploiting it requires that the user be afforded the opportunity to understand its processes. This, generally speaking, is not the case. If advances come into existence that cannot be explained, at least in principle, to a reasonably adept human, then we enter the danger zone. Tyranny lies in this insouciant separation between our understanding of something and our dependency on it. When a human cannot decide, through understanding and reflection, whether it is in their moral interest to employ a tool or not, then advancing technique at all costs becomes a fetish. That, of course, is an uncomfortable place to be and inevitably carries deleterious outcomes.
