From the course: The Power of Accurate Prompting with Anthropic’s Claude

Learning what makes Claude different

- Claude, which is the name of both the product and the model, was created by Anthropic, a company founded by a team of technologists who left OpenAI over concerns that AI was becoming too commercial. They also wanted to focus on a safety-first approach. However, LLMs require vast amounts of processing and compute time to be trained, and that takes millions of dollars to realize. The company is organized as a public benefit corporation, which means it combines social responsibility with the profit necessities of creating and running the model. It has received large investments from Google and Amazon.

To be trained, Claude is fed a large data set of information from the internet as well as data sets selected by Anthropic. From this, the algorithm learns to predict the next token in a series of tokens, which to us looks like predicting the next word in a series of words. Like other LLMs, Claude is trained using a technique called reinforcement learning from human feedback, where the model learns what makes an answer accurate by receiving feedback from real human beings.

Claude's constitution is a document based on Anthropic's own values, and the model checks its responses to prompts against it so that every answer is filtered through that set of values. Although Claude does store your prompts, it won't normally use your conversations to train its models, which is different from many other chatbots. This approach is reasonable because feeding unrestrained user content back into the model can introduce new problems and biases.

A version called Claude 2 improves on the performance of the first generation: it can handle longer responses, adds API access, and brings a number of other improvements, including better reasoning, a 2x decrease in hallucinations, and improved summarization, honesty, and comprehension of documents.

Claude isn't as feature-rich as other products; it focuses on text processing. Currently, it has no access to the web. It doesn't generate images or audio, and it doesn't execute code to achieve tasks. It also doesn't let you create agents or custom chatbots. Claude is trained with data up to December 2022, and with no web access, there's no way for it to retrieve more recent data through searches. This is deliberate, since information from URLs can add context that violates its constitution.

Claude has one of the longest context windows of any chatbot on the market, up to 200,000 tokens, which translates to roughly 150,000 words, or over 500 pages of material. Think of this as the amount of information the system can keep in mind during a long conversation. A larger context means that you can feed it longer texts (a quick way to estimate this is sketched at the end of this transcript).

Although there is a paid version of Claude for $20 a month, the free version is available with few restrictions to all users. The subscription allows for greater usage of the model, including access to the larger 200,000-token context window as well as 5x more usage, whereas other LLMs use different models for their free versus paid versions.

Claude allows you to upload some document types, including PDFs, docs, CSV files, text files, HTML, and others. If you use other chatbots, working with Claude feels like talking to a different individual with its own personality. It's designed to be safer and more cautious, and you'll notice that it will admit when it doesn't know something or when it can't do something. That's actually quite refreshing. Next, let's take a look at how the interface and basic prompting work.
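To make the token math concrete, here is a minimal Python sketch for estimating whether a document fits in that 200,000-token window. It assumes the rough ratio mentioned above (about 200,000 tokens to 150,000 words, or roughly 0.75 words per token); actual counts depend on Claude's tokenizer, so treat this as an estimate rather than an exact measurement.

    # Back-of-the-envelope estimate only: real token counts depend on
    # Claude's tokenizer. The 0.75 words-per-token ratio comes from the
    # 200,000 tokens ~ 150,000 words figure mentioned in this video.
    WORDS_PER_TOKEN = 0.75
    CONTEXT_WINDOW_TOKENS = 200_000

    def estimated_tokens(text: str) -> int:
        """Approximate the token count from a simple word count."""
        return int(len(text.split()) / WORDS_PER_TOKEN)

    def fits_in_context(text: str) -> bool:
        """True if the text likely fits within the 200K-token window."""
        return estimated_tokens(text) <= CONTEXT_WINDOW_TOKENS

    sample = "word " * 1_000  # stand-in for a longer document you might paste in
    print(estimated_tokens(sample), fits_in_context(sample))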
