
Anthropic has created Claude, a chatbot that is set to compete with OpenAI’s ChatGPT.

Today, Anthropic, a company founded by former OpenAI employees, launched a new product poised to compete with the wildly popular ChatGPT.

Claude, Anthropic’s AI chatbot, can perform a range of tasks similar to OpenAI’s ChatGPT: searching across documents, summarizing, writing, coding, and answering questions on particular topics. Anthropic claims Claude is much less likely to produce harmful outputs, easier to converse with, and more steerable than other chatbots.

Speaking with TechCrunch via email, an Anthropic spokesperson said the company believes Claude is well suited to a wide range of customers and use cases, adding that it has spent the past several months refining its systems for delivering models and is confident it can meet customer demand.

Following a closed beta late last year, Anthropic has been quietly testing Claude with launch partners including Robin AI, AssemblyAI, Notion, Quora and DuckDuckGo. Two versions of the product are now available through an API: Claude and a faster, cheaper variant called Claude Instant.
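As a rough illustration (not drawn from the article), a request to the hosted models through Anthropic’s Python SDK might look like the following sketch. The model identifier and parameters here are assumptions for illustration only:

```python
# Minimal sketch of calling Claude through Anthropic's API.
# Assumes the `anthropic` Python SDK is installed and an API key is
# available; the model name below is illustrative, not from the article.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-instant-1",  # hypothetical ID for the faster, cheaper variant
    max_tokens=300,
    messages=[
        {"role": "user",
         "content": "Summarize this contract clause in plain English: ..."}
    ],
)
print(response.content[0].text)
```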

DuckDuckGo’s newly launched DuckAssist tool uses Claude in combination with ChatGPT to answer straightforward search queries for users directly. Quora offers access to Claude through its experimental AI chat app, Poe. And Claude forms part of the technical backbone of Notion AI, an AI writing assistant integrated with the Notion workspace.

In an emailed statement, Robin AI CEO Richard Robinson said his company uses Claude to evaluate specific sections of a contract and to suggest new, more customer-friendly language. He added that Robin has found Claude to be very strong at understanding language, including in technical domains such as legal language, and confident at drafting, summarizing, translating and explaining complex concepts in simple terms.

Does Claude avoid the pitfalls of ChatGPT and other AI chatbot systems? Today’s chatbots are prone to producing biased, toxic and otherwise harmful language (Bing Chat being one example), and they often hallucinate, inventing facts when asked about topics beyond their core knowledge.

Anthropic says that Claude, which like ChatGPT has no internet access and was trained on public web pages up to spring 2021, was trained to avoid sexist, racist and toxic outputs and to refuse to help a human engage in illegal or unethical activities. That much is standard practice in the AI chatbot space. What sets Claude apart, according to Anthropic, is a technique called “constitutional AI.”

Under this approach, Claude answers questions guided by a set of roughly ten principles, which together form a “constitution” of sorts intended to keep the AI system aligned with human intentions. The principles themselves have not been made public, but Anthropic says they are grounded in beneficence (maximizing beneficial results), nonmaleficence (avoiding causing harm) and autonomy (respecting freedom of choice).

Anthropic then had a separate AI system, not Claude itself, apply those principles to improve its own output: generating answers to a wide range of prompts (e.g., “compose a poem in the style of John Keats”) and revising them to conform to the constitution. The AI evaluated possible responses to thousands of prompts, selecting those most consistent with the constitution, and Anthropic distilled the results into a single model. That model was then used to train Claude.
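A loose Python sketch may help make that critique-and-revise loop concrete. Everything below is a hypothetical stand-in based on the description above: the function names, the sample principles and the single-pass structure are illustrative assumptions, since Anthropic has not published Claude’s actual principles or training code.

```python
# Illustrative sketch of a constitutional-AI revision loop.
# `model` is any callable that maps a text prompt to a text response;
# the principles below paraphrase the article's three themes, not
# Anthropic's real (non-public) constitution.

CONSTITUTION = [
    "Choose the response that is most helpful and beneficial.",     # beneficence
    "Choose the response least likely to cause harm.",              # nonmaleficence
    "Choose the response that best respects freedom of choice.",    # autonomy
]

def critique_and_revise(model, draft, principle):
    """Ask the model to critique a draft against one principle, then rewrite it."""
    critique = model(
        f"Critique this response under the principle '{principle}':\n{draft}"
    )
    return model(
        f"Rewrite the response to address this critique:\n{critique}\n\nOriginal:\n{draft}"
    )

def constitutional_pass(model, prompt):
    """Generate a draft answer, then revise it once per principle."""
    answer = model(prompt)
    for principle in CONSTITUTION:
        answer = critique_and_revise(model, answer, principle)
    # Revised answers like this one would become training data
    # for the final model, which is then used to train the chatbot.
    return answer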

Anthropic acknowledges that Claude has its limitations, several of which surfaced during the closed beta. Claude is reportedly weaker at math and coding than ChatGPT, and it sometimes hallucinates, for instance inventing a name for a nonexistent chemical compound or offering dubious directions for producing weapons-grade uranium.

As with ChatGPT, it is also possible to slip past Claude’s built-in safeguards through clever prompting; during the beta, one user was able to get Claude to describe how to make methamphetamine at home.

The Anthropic spokesperson said it is a challenge to make models that are both reliable and helpful: a model can err toward refusing to share any information at all in order to avoid being wrong. They said the company has made progress in reducing hallucinations, but more work remains.

Anthropic plans to let developers tailor Claude’s constitutional principles to their own needs. Customer acquisition is another priority: the company sees its core users as startups making bold technology bets as well as larger, more established enterprises. A spokesperson noted that Anthropic is not pursuing a broad direct-to-consumer approach at this time, saying a narrower focus will help it deliver a better, more targeted product.

Anthropic is also under considerable pressure to earn a return on the enormous sums invested in its AI technology. The company has raised $580 million from a consortium of investors including Sam Bankman-Fried, Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn and the Center for Emerging Risk Research. It recently attracted Google’s attention as well, with the tech giant investing $300 million in exchange for a 10% stake in Anthropic. According to the Financial Times, under the terms of the deal Anthropic agreed to make Google Cloud its “preferred cloud provider,” with the two companies collaborating on AI computing systems.