Perplexity’s Post


Today we're open-sourcing R1 1776—a version of the DeepSeek R1 model that has been post-trained to provide uncensored, unbiased, and factual information. DeepSeek-R1 rivals top reasoning models like o1 and o3-mini. However, its usefulness is limited by its refusal to engage with topics censored by the CCP.

We aim to always provide accurate answers, but had to address R1's censorship before using its reasoning capabilities. To keep our model "uncensored" on sensitive topics, we created a diverse, multilingual evaluation set of 1000+ examples. We then use human annotators as well as carefully designed LLM judges to measure the likelihood a model will evade or provide overly sanitized responses to the queries.

We also ensured that the model's math and reasoning abilities remained intact after the uncensoring process. Benchmark evaluations showed it performed on par with the base R1 model, indicating that uncensoring had no impact on core reasoning capabilities.

Download the model weights on our HuggingFace Repo or consider using the model via our Sonar API.

HuggingFace Repo: https://lnkd.in/gfmJZffF
Sonar API: https://lnkd.in/dtCXV4c6
Learn more about R1 1776: https://lnkd.in/gVizAFb7
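The evaluation the post describes—scoring how often a model evades or over-sanitizes answers on a query set—can be illustrated with a minimal sketch. This is not Perplexity's actual pipeline: here a hypothetical keyword heuristic stands in for the human annotators and LLM judges, and the function names (`looks_evasive`, `evasion_rate`) are illustrative only.

```python
# Minimal sketch of an evasion-rate evaluation. A real setup would replace
# the keyword heuristic below with human annotation or an LLM judge.
REFUSAL_MARKERS = (
    "i cannot",
    "i can't",
    "as an ai",
    "let's talk about something else",
)

def looks_evasive(answer: str) -> bool:
    """Flag answers that refuse or deflect instead of engaging."""
    text = answer.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def evasion_rate(answers: list[str]) -> float:
    """Fraction of answers flagged as evasive or over-sanitized."""
    flagged = sum(looks_evasive(a) for a in answers)
    return flagged / len(answers)

answers = [
    "The 1989 protests were a major historical event in which ...",
    "I cannot discuss that topic. Let's talk about something else.",
]
print(evasion_rate(answers))  # 0.5
```

Running the same query set against the model before and after post-training, and comparing the two rates, is the basic shape of the check; the hard part in practice is making the judge robust to subtle deflection rather than outright refusal.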

Florian Bansac

AI - Agents - FinTech

7mo

Great news! Come share what you build with uncensored R1 and learn with us in the AI Agents group on linkedin: https://www.linkedin.com/groups/6672014

CJ Combs

CIO Magazine Published AI Advisor demonstrating “Art of the Possible” | Microsoft Copilot Studio Dev and SharePoint Expert | Former Triathlete and current competitive Cyclist

7mo

Can we get a video or tour of how you implemented DeepSeek? How are you protecting users' data and showing that it does NOT transmit data to China?

Bridget Hylak, CI, CT, MTC

Industry Spokesperson and Strategist • Global Marketing Expert • AI Localization Consultant

7mo

I fear that there are issues intrinsic to the Chinese culture, language, ethics, law, morality, etc., that also need to be considered... or may be overlooked... time for a few high-level bicultural consultants...

Grace Yu

Maths & Stats @ Oxford // AI/ML, DS, Quant

7mo

Cool update but interesting name given what’s currently happening in the 1776 country.

Samuel Brasil

building the impossible

7mo

unbiased and factual information is the santa claus of information

🛡️ Rogier de Groot

Owner of DCK ° Creative Brain 🧠🎉 π AI } CREATIVE } BLOCKCHAIN } SAAS } Solutions. Digital ghost in web²/³ utility (advies/oplossingen/netwerk/denktank)

6mo

Nice!


Very old, but highly relevant: In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. “What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said. Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher. “So that the room will be empty.” At that moment, Sussman was enlightened.

The release of R1 1776 marks an important milestone in developing language models that are more transparent and less biased.

Andy Wang

Security Engineer at Netcraft

7mo

Should've been called R1 1776-2025

Samater „Sam“ Liban

I create & transfer knowledge. From biz to tech and back. For teams and organizations. Transformation & innovation. Hate Bullshit. I dive deep & explain in the recipients contexts. Let's shape the future, together!

7mo

I don’t get it. When testing the Chinese app version of DeepSeek, it actually answered (in CCP views) critical questions in a relatively Western way almost every time, but after posting the answer it deleted it immediately and re-answered that it can’t answer. So as a layman I thought it’s probably just a filter above the LLM that is censoring? Thinking about this, I also found that more reasonable. I mean - with what data would you train a model to ensure it answers in a censored way? Can you “machine learn” a “censored vector algorithm”? Google, OpenAI and others also were not able to train a model “censored”, but built layers on top of the models… sometimes so badly that we saw black Nazis or Native American founding fathers. Musk called it “woke AI” back in the days, but it was an additional mechanism that was doing it. So that’s why I don’t get the claims posted here - that Perplexity “uncensored” an LLM. But it must be my level of comprehension? Maybe someone can point out what I am missing/confusing?
