
Partnership on AI

Research Services

San Francisco, California · 25,557 followers

Advancing Responsible AI

About us

Partnership on AI (PAI) is a non-profit partnership of academic, civil society, industry, and media organizations creating solutions so that AI advances positive outcomes for people and society. By convening diverse, international stakeholders, we seek to pool collective wisdom to make change. We are not a trade group or advocacy organization. We develop tools, recommendations, and other resources by inviting voices from across the AI community and beyond to share insights that can be synthesized into actionable guidance. We then work to drive adoption in practice, inform public policy, and advance public understanding. Through dialogue, research, and education, PAI is addressing the most important and difficult questions concerning the future of AI. Our mission is to bring diverse voices together across global sectors, disciplines, and demographics so developments in AI advance positive outcomes for people and society.

Website
https://www.partnershiponai.org/
Industry
Research Services
Company size
11-50 employees
Headquarters
San Francisco, California
Type
Nonprofit

Locations

  • Primary

    2261 Market Street #4537

    San Francisco, California 94114, US

Updates

  • The EU has approved the General-Purpose AI Code of Practice -- just one day before the obligations for providers of GPAI models become applicable. This voluntary Code, developed with input from nearly 1,000 stakeholders, including PAI, offers a clear path for developers to align with the AI Act’s requirements on transparency, copyright, and safety & security. With 26 signatories already on board (including OpenAI, Anthropic, Google, and Microsoft), the Code is set to shape how general-purpose AI is built and used in Europe and beyond. Read our reflections and what this means for the future of AI governance:

  • Democracies have a critical role to play in upholding sovereignty, self-determination, and human dignity as global proposals for AI governance, like the U.S. AI Action Plan, gain momentum. A new Zero Draft paper by Stephanie Ifayemi (Partnership on AI), Amanda Craig Deckard (Microsoft), and Elham Tabassi (The Brookings Institution) introduces three practical tools to help policymakers, developers, and advocates navigate today’s evolving AI landscape:

    ☰ Governance Stack: Organizes existing AI laws, standards, and principles by their level of abstraction, from high-level values to technical implementation.

    📍 Governance Map: Charts the Stack across AI topics like transparency and safety, revealing where efforts are concentrated and where more work is needed.

    🪝 Anchors & Hooks: Connects the dots between governance efforts, helping stakeholders trace how different instruments relate and build on one another.

    Together, these tools offer a shared language to make sense of a crowded field and identify where collaboration can make the biggest difference. We’re inviting public feedback on this Zero Draft: download the paper and share feedback below.

  • On July 23rd, the US Administration released its highly anticipated AI Action Plan, which has received a range of responses from different organizations. As a multistakeholder partnership, we believe that a core feature of any policy strategy that benefits people and society is the involvement of the organizations and people it will affect. We see promising opportunities to build upon the Plan’s initiatives by furthering stakeholder collaboration, especially in these key areas:

    1. Labor & the Workforce
    We applaud the Plan’s recognition of AI’s impact on American workers and its proposal to create a DOL AI Workforce Research Hub. But measuring impact is not enough. Workers must be meaningfully involved in shaping workforce policies and training strategies. Our research underscores the importance of centering worker voices to foster high-quality jobs and just transitions.

    2. Foundational & Open Research
    We’re encouraged by the Plan’s focus on fostering open AI models built on American values, including through the NSF R&D Plan and NAIRR. To make this vision real, public investments must support technical and socio-technical research, prioritize multistakeholder collaboration, and reinforce open-source ecosystems with shared accountability across the AI value chain.

    3. Independent Assurance Ecosystem
    The Plan calls for more robust AI evaluations, including through NIST and CAISI. But voluntary best practices alone are not enough. We must build an independent AI assurance ecosystem that includes testing, validation, and interpretability frameworks across public and private deployments. PAI’s Safe Foundation Model Deployment Guidance offers a roadmap.

    4. Guardrails for Safety & Trust
    While the Plan touches on incident response and vulnerability sharing, it pays less attention to broader safety risks like hallucinations and human-AI misalignment. Our work on trustworthy deployment and the emerging risks of personlike AI systems shows the need for proactive governance across emerging capabilities.

    5. International Engagement with Global Values
    The Plan’s call for U.S. leadership in global AI governance is a crucial step, but true cooperation means listening to and accommodating global values. As new international frameworks emerge (including China’s Global AI Governance Action Plan), the U.S. must engage collaboratively and uphold principles of sovereignty, self-determination, and human dignity.

    We encourage the Administration to consider strengthening the involvement of affected people and societal groups, and our Partners to continue carrying out technical and socio-technical research, as well as participating in the development of an independent AI assurance ecosystem. More in our blog below.

  • To make AI transparency meaningful, we must go beyond documentation. A new report from PAI and Feng Kung (UC Berkeley Labor Center) highlights how workers are too often left out of transparency efforts, even as AI transforms labor and the workplace. The report features real-world examples, including union collective bargaining over workplace technologies, as a model for what inclusive, accountable transparency can look like.

    🗓️ Join us tomorrow for a panel conversation on putting these ideas into practice: https://buff.ly/IkFJLUE

    📥 Download the report: https://buff.ly/0tQ8QXd

  • 📢 We’re #hiring! Partnership on AI is looking for a Head of Corporate Governance, Risk, and Responsible Practice to join our team. This senior role will:

    🔹 Lead our Investor Disclosures program and guide development of standards for AI-related transparency
    🔹 Help shape responsible Enterprise AI adoption frameworks and best practices
    🔹 Engage with stakeholders across industry, finance, academia, and civil society
    🔹 Serve as a key thought leader on AI-related risk, opportunity, and governance

    This is a #remote role based in the US or Canada with a salary range of $150,000–$165,000 USD. Learn more and apply: https://buff.ly/s5BBheQ

  • UNGA week is a key moment for global conversations on AI, policy, and impact. The PAI team will be on the ground, and we’d love to know who from our Partner community will be in town too. Planning to be there? Let us know 👇 #UNGA80

  • Workers are on the frontlines of AI’s impact, but are often left out of conversations on how these systems are governed. On July 29, join Partnership on AI for a conversation on how worker participation can reshape transparency from a checklist of disclosures into a shared process for governance. We’ll launch a new report authored by Feng Kung (UC Berkeley Labor Center), followed by a panel discussion with:

    • Aiha N. (Data & Society Research Institute)
    • Elizabeth Anne Watkins, PhD (Intel Labs)
    • Michelle Miller (Center for Labor and a Just Economy at Harvard Law School)

    Moderated by Eliza McCullough (PAI).

    📅 July 29 | 12pm ET / 9am PT
    🔗 Register: https://buff.ly/IkFJLUE

  • How can we build AI systems that reflect the needs of the people they impact most? Earlier this year, we hosted a conversation on the value of participatory public engagement in AI development with Tina M. Park and members of our Global Task Force for Inclusive AI. Grounded in insights from our Guidance for Inclusive AI, this discussion explores why and how developers and deployers should engage directly with the public and what’s at stake if they don’t.

    📺 Watch the full recording here: https://buff.ly/fy9gj2q

  • Partnership on AI reposted this

    Aimee Louise Bataclan

    Head of Communications at Partnership on AI

    #BayArea friends! Do you know an amazing #event producer who could support Partnership on AI in planning our annual Partner Forum? My team is looking for support to deliver a top-notch experience for our community of #AI leaders from academia, civil society, and industry. Accepting applications/proposals through July 22; please share with relevant folks in your network!

Funding

Partnership on AI: 1 total round

Last round: Grant, US$600.0K
