Perplexity: R1 1776

  • 125K Context
  • $2/M Input Tokens
  • $8/M Output Tokens
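At the listed rates, per-request cost is simple arithmetic over token counts. A minimal sketch (the `estimate_cost` helper is illustrative, not part of any Perplexity or OpenRouter SDK):

```python
# Hypothetical cost estimator using the listed R1 1776 rates:
# $2 per million input tokens, $8 per million output tokens.
PRICE_IN_PER_M = 2.0   # USD per 1M input tokens
PRICE_OUT_PER_M = 8.0  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens / 1_000_000) * PRICE_IN_PER_M + \
           (output_tokens / 1_000_000) * PRICE_OUT_PER_M

# A request with 10K prompt tokens and 2K completion tokens:
print(round(estimate_cost(10_000, 2_000), 4))  # 0.02 + 0.016 = 0.036
```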

Note: As this model does not return `<think>` tags, thoughts will be streamed by default directly to the content field.
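Clients that post-process DeepSeek-R1-style output often split it on `<think>…</think>` markers; with R1 1776 those markers never appear, so the whole response lands in the visible content. A minimal client-side sketch (the `split_reasoning` helper is illustrative, not part of any SDK):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split model output into (reasoning, answer).

    DeepSeek-R1 wraps its chain of thought in <think>...</think>;
    R1 1776 emits no such tags, so reasoning comes back empty and
    the full text is treated as answer content.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text
    reasoning = match.group(1).strip()
    answer = (text[:match.start()] + text[match.end():]).strip()
    return reasoning, answer

# R1-style output: tags present, reasoning is separable.
print(split_reasoning("<think>2+2=4</think>The answer is 4."))
# R1 1776-style output: no tags, everything stays in content.
print(split_reasoning("The answer is 4."))
```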

R1 1776 is a version of DeepSeek-R1 that has been post-trained to remove censorship constraints related to topics restricted by the Chinese government. The model retains its original reasoning capabilities while providing direct responses to a wider range of queries. R1 1776 is an offline chat model that does not use the Perplexity search subsystem.

The model was tested on a multilingual dataset of over 1,000 examples covering sensitive topics to measure its likelihood of refusal or overly filtered responses.

[Figure: Evaluation Results]

Its performance on math and reasoning benchmarks remains similar to the base R1 model.

[Figure: Reasoning Performance]

Read more on the Blog Post

Related Posts

Perplexity: Llama 3.1 Sonar 70B
Perplexity
128K context $1/M input tokens $1/M output tokens

Llama 3.1 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance. This is a normal offline LLM, but the [online version](/perpl ...

Perplexity: Llama 3.1 Sonar 8B
Perplexity
128K context $0.2/M input tokens $0.2/M output tokens

Llama 3.1 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance. This is a normal offline LLM, but the [online version](/perpl ...