Switzerland's National AI Model · Albania's AI Minister · And More
My weekly curation of news, papers, and ideas that will help you understand AI's legal and ethical challenges, emerging trends, and potential paths forward | Edition #234
👋 Hi everyone, Luiza Jarovsky here.
Welcome to the 234th edition of my newsletter, trusted by over 77,900 subscribers interested in AI governance, AI literacy, the future of work, and more.
It is great to have you here!
🎓 Expand your learning and upskilling journey with these resources:
Upgrade to access all editions (20% off for LinkedIn subscribers)
Join my AI Governance Training (yearly subscribers save $145)
Register for our job alerts for open roles in AI governance and privacy
Sign up for weekly educational resources in our Learning Center
Discover your next read in AI and beyond with our AI Book Club
🔥 Join the last cohort of 2025
If you are looking to upskill and explore the legal and ethical challenges of AI, as well as the EU AI Act, join the 25th cohort of my 16-hour live online AI Governance Training in November (the final cohort of the year).
Each cohort is limited to 30 people, and more than 1,300 professionals have taken part. Many described the experience as transformative and an important step in their career growth. *Yearly subscribers save $145.
1. The news you cannot miss:
Switzerland's national AI model “Apertus” (Latin for “open”) is now available. According to the official release, it is fully open, transparent, and multilingual (40% of the data is non-English), built in compliance with Swiss data protection and copyright laws, as well as the EU AI Act. As I have written a few times in this newsletter, the new AI nationalism is growing, and Switzerland's prioritization of transparency and legal compliance may set a new standard, especially for EU countries aiming to protect fundamental rights. Read more about the Swiss model here.
Albania became the first country to appoint an AI system as a government minister. The system, called “Diella,” will be in charge of all public procurement: according to Prime Minister Edi Rama, all decisions on tenders will be taken out of the ministries and handed to Diella, with the goal of reducing corruption. Will this AI use case be successful? Read more about it and see the official avatar here.
AI chatbots are dangerous, and the U.S. is finally taking action. The FTC issued 6(b) orders to Google, OpenAI, Meta, xAI, CharacterAI, Snap, and Instagram, seeking to understand what steps these seven companies have taken to prevent the negative impacts their AI chatbots can have on children. Depending on how these inquiries go, we might see more targeted enforcement actions soon. Learn more about the FTC orders here.
The 257-year-old Encyclopaedia Britannica is suing Perplexity for copyright infringement, showing what happens when traditional publishers, legal uncertainty, and aggressive AI players clash. Read a few selected quotes from the lawsuit.
The 2024 National Assessment of Educational Progress (NAEP) reading evaluation of a nationally representative sample of U.S. students found that the average grade 12 reading score was 3 points lower than in 2019 and 10 points lower than in the first reading assessment in 1992. These statistics are worrying, especially given the growing deployment of AI chatbots in the educational system and their uncertain impact on reading skills, which likely compounds the negative impact social media has had over the past two decades.
Really Simple Licensing (RSL) is a new non-profit collective rights initiative that helps online publishers and creators protect their rights and negotiate compensation from AI companies. The platform enables publishers and creators to receive compensation when their content is used to generate an AI result. Read more here.
In a recent interview, OpenAI's co-founder and CEO Sam Altman said: "I actually don't worry about us getting the big moral decisions wrong... Maybe we'll get those wrong, too." To understand the current state of AI, watch this strange exchange between Sam Altman and Tucker Carlson on deciding the future of the world and believing in a higher power.
Mira Murati is one of the few leading women in the AI industry. Many of us instinctively root for her and expect her to drive change in AI. After leaving her position as the CTO of OpenAI, she founded Thinking Machines, which has recently raised $2 billion. Although the company has not launched any products yet, it recently published an interesting blog post titled "Defeating Nondeterminism in LLM Inference." Read my first impressions.
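For readers curious why LLM inference is nondeterministic in the first place: the post traces much of it to floating-point non-associativity, where the order in which values are summed (which can shift with batch size or kernel scheduling) changes the result. Here is a minimal Python sketch of that effect; the specific values are illustrative and not taken from the post:

```python
# Floating-point addition is not associative: summing the same numbers
# in a different order can yield different results. In LLM inference,
# reduction order can change with batch size or kernel scheduling,
# which is one source of nondeterministic outputs.
a, b, c = 0.1, 1e16, -1e16

print((a + b) + c)  # 0.0 -- a is absorbed by the huge intermediate sum
print(a + (b + c))  # 0.1 -- b and c cancel first, so a survives
```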
"Supremacy: AI, ChatGPT, and the Race that Will Change the World," by Parmy Olson, is a great read for everyone interested in AI, and it is the 28th recommended book of my AI Book Club. Read about the book here and join the club here (it is free).
I launched a new three-part series on “Becoming Future-Proof,” available in full to paid subscribers. Read the first essay here or upgrade.
*If you would like to share a specific ethical or legal development in AI or your thoughts on a specific event, reply to this email or use this form.
2. Interesting papers to download and read:
I. “AI Openness: A Primer for Policymakers” by the OECD (link):
“Decisions to release model weights should carefully consider potential benefits and risks. Falling compute costs and more accessible fine-tuning methods lower the barriers to both use and misuse, enhancing the potential advantages of open-weight models while also increasing the risk of harmful applications.”
II. “The Impact of LLM Adoption on User Behavior” by Nicolas Padilla et al. (link):
“Our primary results suggest that concerns about LLMs substituting for web browsing may be well-founded, at least for a subset of online content providers. In particular, we find that after adopting LLMs, users make fewer searches in traditional search engines, including for question searches and both short and longer queries.”
III. “How People Use ChatGPT” by Aaron Chatterji et al. (link):
“(…) the three most common ChatGPT conversation topics are Practical Guidance, Writing, and Seeking Information, collectively accounting for nearly 78% of all messages. Computer Programming and Relationships and Personal Reflection account for only 4.2% and 1.9% of messages respectively.”
*If you are a researcher in AI ethics or AI law and would like to have your recently published paper featured here, reply to this email or use this form.
3. Ideas to think about and act on:
AI chatbots require a radically different approach to AI policy (and this will not be easy).
I wrote my first article warning against the dangers of AI chatbots in February 2023, covering ‘AI companions’ with a specific focus on Replika.
Those were the early months of the generative AI wave, but even then it was already clear that:
AI anthropomorphism is dangerous, leading to potentially harmful emotional dependence and attachment (in 2023, the company behind Replika had to send users suicide prevention information after the Italian Data Protection Authority ordered it to restrict personal data processing; read more about it in this paper by Daniella DiPaola and Ryan Calo).
Companies would deploy all sorts of unethical practices to get people attached to chatbots, as emotional AI manipulation is lucrative (you can read my 2023 article about CharacterAI and my recent article on unethical AI marketing).
What was not clear yet in 2023 and is much clearer now is that: (...CONTINUES...)
👉 This is a free preview. To read the full edition (available on Substack), upgrade. Benefits:
Full access to all editions (never miss my insights!);
Learn, upskill, understand AI's legal and ethical implications, and future-proof your career;
Yearly subscribers only: 15% off my live online AI Governance Training (save $145) and free access to my on-demand course (upcoming).
👉 LinkedIn subscribers save 20% on the annual subscription. Upgrade here.