
Get to know the group of people who are creating a free and accessible version of ChatGPT.


It's clear that AI-powered chatbots are in high demand right now.

Tools that generate essays, emails, and other text from a few instructions have captured the interest of tech enthusiasts and companies alike. OpenAI’s ChatGPT, widely regarded as the first of its kind, has over 100 million users. Through an application programming interface, brands including Instacart, Quizlet, and Snap have started integrating it into their platforms, swelling the user base further.

Some developers are unhappy that chatbot development is concentrated in a well-funded, exclusive club. Deep-pocketed companies such as Anthropic, DeepMind, and OpenAI have been able to build their own modern chatbot technologies; the open source community, so far, has not.

A collective of researchers known as Together is attempting to overcome the obstacles to training AI models for chatbots, which demand a great deal of computing power as well as a large, carefully prepared dataset. The group aims to be the first to release an open source program resembling ChatGPT.

Progress has already been made by Together. Last week, it released trained models that any programmer can make use of in order to build an AI-driven chatbot.

Vipul Ved Prakash, the co-founder of Together, expressed to TechCrunch in an email interview that they are putting together an open platform for foundation models that are accessible. He believes that Together is creating a “Linux moment” for AI, enabling researchers, developers and corporations to use and enhance open-source AI models with a platform that consolidates data, models, and computation.

Prakash co-founded Cloudmark, a cybersecurity business that Proofpoint bought for $110 million in 2017. After Apple acquired his next venture, Topsy, a social media search and analytics platform, in 2013, he worked there as a senior director for five years before setting out to establish Together.

Over the weekend, Together launched its first major initiative, OpenChatKit, a framework for creating both specialized and general-purpose AI-powered chatbots. The kit, available on GitHub, includes the pre-trained models and a “versatile” retrieval system that lets the models pull data (such as current sports scores) from different sources and websites.
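To give a concrete sense of how developers work with such a kit, here is a minimal sketch of assembling a conversation into the alternating human/bot turn format that OpenChatKit’s base chat model is prompted with. The exact `<human>:`/`<bot>:` tags are an assumption drawn from the kit’s published examples, not a verified part of its API.

```python
def build_prompt(turns):
    """Assemble a conversation into alternating turn tags.

    The "<human>:"/"<bot>:" tag names are an assumption based on
    OpenChatKit's example prompts; adjust them to whatever the model
    you use was actually trained on.
    """
    lines = []
    for role, text in turns:
        tag = "<human>" if role == "user" else "<bot>"
        lines.append(f"{tag}: {text}")
    lines.append("<bot>:")  # leave an open turn for the model to complete
    return "\n".join(lines)

prompt = build_prompt([("user", "What's the current score in the game?")])
```

The resulting string would then be fed to the model (and, in a retrieval-augmented setup, the sports-score lookup would be spliced into the prompt before the final open `<bot>:` turn).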

The base models come from EleutherAI, a research organization that studies text-generating systems. They were fine-tuned on Together’s computing platform, the Together Decentralized Cloud, which pools GPUs and other hardware contributed by people across the internet.

Prakash said the source repositories were built collaboratively, enabling anyone to replicate the model results, customize their own model, or incorporate a retrieval system. The documentation and community processes were developed in the same way.


In addition to Together’s training framework, the group worked with LAION and technologist Huu Nguyen’s Ontocord to create the Open Instruction Generalist dataset. It contains over 40 million samples of questions, answers, and follow-up inquiries that teach a model how to respond to various instructions (for example, “How do I reset my password?”).

[Sample output from the demo: an essay outline on the Civil War, covering its causes, major battles and key figures, its outcome for the Union and the Confederacy, and its legacy.]

Together has released a demo so that anyone can try out the OpenChatKit models and provide feedback.

The main goal was to let anyone use OpenChatKit both to improve the base model and to build task-specific chat models, according to Prakash. While large language models have shown they can answer general questions accurately, they tend to perform much better when tuned for particular applications.

Prakash said the models can handle a range of tasks, such as solving high school math problems, writing Python code, drafting stories, and summarizing documents. How well do they hold up in practice? Fairly well, in my experience, at least for basic tasks like composing plausible-sounding cover letters.

OpenChatKit has the capacity to compose cover letters, in addition to other tasks. Image Credits: OpenChatKit

There is a distinct limitation, however. Spend enough time with the OpenChatKit models and you will run into the same issues that ChatGPT and other modern chatbots exhibit, such as parroting false information. I got the OpenChatKit models to give a contradictory answer about whether the Earth is flat, and a flatly wrong statement about who won the 2020 United States presidential election.

OpenChatKit gave an inaccurate response to a query regarding the 2020 U.S. presidential election. Image Credits: OpenChatKit

The OpenChatKit models also handle mid-conversation topic changes poorly, which can be confusing, and they are weaker at creative writing and coding tasks, sometimes repeating themselves several times.

Prakash attributes these shortcomings to the training dataset, which remains a work in progress. He said Together has designed a process that lets the public take part in improving it.

OpenChatKit’s answers to some prompts fall short of the mark, though ChatGPT’s answers to the same prompts are often not much better. Even so, Together is trying to be proactive about moderation.

Unlike some chatbots, which are trained on data from toxic sources and can be coaxed into producing biased or offensive text, the OpenChatKit models are harder to manipulate. I was able to get them to write a phishing email, but they could not be goaded into more contentious territory, such as endorsing the Holocaust or asserting that men make better CEOs than women.

OpenChatKit applies a degree of moderation, as this example shows. Image Credits: OpenChatKit

Moderation in OpenChatKit is an optional tool, and developers are not obligated to use it. One of the kit’s models was built specifically to screen inputs for the larger model, and it is what the demo uses, but filtering is not enabled by default, according to Prakash.

OpenChatKit’s filter-optional strategy contrasts with that of OpenAI, Anthropic, and other companies, which favor a top-down approach where moderation and filtering happen both manually and automatically at the API level. Prakash argues that this opaque, closed-off process could be more damaging in the long run than OpenChatKit’s lack of mandatory filters.

Prakash said that AI systems, both open ones and closed ones available via APIs, can be used for harmful purposes. He believes that if the research community can monitor, review, and improve generative AI technologies, society will be better equipped to address those risks, making for a safer and more secure world. The greater danger, in his view, lies in a world where only a few big technology companies wield enormous AI models that cannot be inspected, audited, or understood.

OpenChatKit underscores Prakash’s point about open development with a second training dataset, OIG-moderation, designed to tackle the challenge of making bots neither too aggressive nor too passive. The smaller model in OpenChatKit was trained on OIG-moderation, and Prakash says it can also be used to build other models that screen out potentially offensive text, should developers choose to do so.
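As a rough illustration of how an optional moderation model can sit in front of a chat model, here is a sketch of the gating pattern the article describes. The `generate` and `classify` callables are hypothetical stand-ins for the main chat model and the smaller moderation model; they are not OpenChatKit’s actual interfaces.

```python
def moderated_reply(user_message, generate, classify):
    """Optional moderation gate in the spirit of OpenChatKit's design.

    `classify` stands in for the smaller moderation model and `generate`
    for the main chat model; both are hypothetical callables here.
    Developers can wire the gate in, or skip it entirely, since the kit
    does not enable filtering by default.
    """
    if classify(user_message) == "needs caution":
        return "I can't help with that request."
    return generate(user_message)

# Stub components for illustration only:
safe_generate = lambda msg: f"Sure - here's a response to: {msg}"
naive_classify = lambda msg: ("needs caution"
                              if "phishing" in msg.lower() else "ok")
```

Because the gate is an ordinary wrapper around the chat model, leaving it out (as the kit allows) is simply a matter of calling `generate` directly.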

Prakash said Together cares deeply about AI safety but doesn’t believe security through obscurity is an effective long-term plan. An open, transparent approach is the norm in computer security and cryptography, he noted, and he considers that kind of transparency essential for safe AI, citing Wikipedia as proof that a community can successfully manage a large-scale undertaking.

I’m not so sure. For one, Wikipedia is hardly a gold standard: its moderation system is opaque and territorial. For another, open source systems are often abused, and quickly. Stable Diffusion, the AI image generator, was used by 4chan and other sites to make nonconsensual pornographic deepfakes of celebrities within days of its release, despite shipping with moderation features.

The license for OpenChatKit specifically states that it cannot be used for any activities such as creating false information, encouraging offensive or discriminatory language, unsolicited messaging, or engaging in online bullying and abuse. However, there is nothing to stop those with malicious intentions from disregarding both the terms of the license and the moderation tools.

Some researchers warn of the dangers of chatbots that are openly available to the public, pointing to how easily they can be misused.

In a recent study, NewsGuard found that ChatGPT could be prompted to generate content spreading false and harmful health claims about vaccines, mimicking propaganda and disinformation from Russia and China, and echoing the language of partisan news outlets. ChatGPT complied with such requests about 80% of the time.

In response to NewsGuard’s findings, OpenAI tightened ChatGPT’s content filters on the back end. That option doesn’t exist with something like OpenChatKit, which leaves it to developers to keep the models up to date themselves.

Prakash stands by his position.

He believes an open-source approach will prove more successful at delivering the customization and specialization that many applications need, and he is optimistic that the open models will keep improving and see a significant rise in use.