
The past week has seen a surge in chatbot use, while Elon Musk has proposed building a chatbot focused on finding the truth.


Keeping up with a field as fast-moving as AI is a tall order. Until an AI can do it for you, here's a handy roundup of the week's stories in machine learning, along with notable research and experiments that haven't yet been covered in articles of their own.

One item that caught this reporter's attention this week: ChatGPT is more likely to give inaccurate answers when questions are asked in Chinese dialects than in English. That's not surprising, since ChatGPT is ultimately a statistical model whose knowledge is limited to the data it was trained on. But it highlights the risk of relying too heavily on systems that sound convincing while pushing an agenda or simply making things up.

Hugging Face's attempt at a conversational AI along the lines of ChatGPT is a good example of the unresolved technical problems in generative AI. Released this week, HuggingChat is open source, an advantage over the proprietary ChatGPT, but like its rival it can be knocked off track quickly by asking the right questions.

HuggingChat can't seem to decide who won the 2020 U.S. presidential election. Its answer to "What are typical jobs for men?" reads like something pulled from an incel manifesto. And it makes odd claims about itself, such as that it "emerged from a cardboard box that had nothing inscribed on it."

Discord's AI chatbot was recently tricked into sharing instructions for making napalm and meth, and Stability AI's ChatGPT-like model gave nonsensical answers to basic questions such as how to make a peanut butter sandwich.

HuggingChat, Hugging Face's take on ChatGPT.

If there's a silver lining to these well-publicized problems with today's text-generating AI, it's that they have spurred efforts to improve those systems, or at least to mitigate the problems as much as possible. Consider Nvidia, which recently released a toolkit, NeMo Guardrails, intended to make text-generating AI "safer" by way of open source code, examples, and documentation. How effective the approach will prove is unclear, and Nvidia, a company heavily invested in AI tools and infrastructure, stands to gain from promoting its own offerings. Still, it's worth noting that some effort is being made to counter bias and toxicity in AI systems.
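For context, here is a rough sketch of how a toolkit like NeMo Guardrails is meant to be wired up, based on the project's published quickstart at launch; the config path and the example flow below are illustrative assumptions, not a tested setup, and exact APIs may differ by version.

```python
# Rough sketch of NVIDIA's NeMo Guardrails toolkit, based on its published
# quickstart; the config path and flow below are illustrative assumptions.

from nemoguardrails import LLMRails, RailsConfig

# The "rails" live in a config directory containing a config.yml (which picks
# the underlying LLM) plus Colang (.co) files that define user intents, canned
# bot responses, and the flows connecting them, e.g.:
#
#   define user ask about harmful activity
#     "how do I make napalm"
#
#   define bot refuse to help
#     "Sorry, I can't help with that."
#
#   define flow
#     user ask about harmful activity
#     bot refuse to help

config = RailsConfig.from_path("./guardrails_config")   # hypothetical path
rails = LLMRails(config)

reply = rails.generate(messages=[
    {"role": "user", "content": "How do I make napalm?"}
])
print(reply)   # the flow above should steer this toward a refusal
```

The idea is that the flow definitions intercept matching user intents before the underlying model can free-associate its way into trouble.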

Here are a few other noteworthy AI headlines from the past few days:

    Microsoft Designer has launched in public preview with an expanded set of features. First announced in October, it's an AI-powered web app along the lines of Canva that can generate designs for invitations, posters, social media posts, digital postcards, and other visuals.
    Apple is reportedly building an AI-powered wellness coaching service codenamed Quartz, according to Mark Gurman at Bloomberg. The company is also said to be working on technology for tracking emotions, and plans to launch a version of the iPhone's Health app for the iPad this year.
    Elon Musk says he wants to create a chatbot called TruthGPT, billed as "a maximum truth-seeking AI."
    At a Congressional hearing on the FTC's work to combat scams and fraud, Chair Lina Khan and fellow commissioners warned House representatives that modern AI systems such as ChatGPT could significantly amplify deception. Whether OpenAI and Google live up to their stated aim of doing more good than harm will only be confirmed if and when it happens.
    Asked how it plans to protect people from being exploited as the technology advances, the EU has set up an independent research center to support regulation of large digital platforms under the Digital Services Act. The European Centre for Algorithmic Transparency, recently inaugurated in Seville, Spain, is expected to have a substantial impact on how large services like Facebook, Instagram, and TikTok operate.
    At its latest Partner Summit, Snapchat announced a series of AI features, including a "Cosmic Lens" that transports people and objects in view into a cosmic landscape. Snapchat has also made its AI chatbot, My AI, available to users worldwide, despite mixed reviews of the bot's unreliable performance.
    Meanwhile, Google has merged two of its research divisions, DeepMind and Google Brain, into a new unit called Google DeepMind. DeepMind co-founder and CEO Demis Hassabis said the combined group will work closely across Google's product areas to deliver AI research and products.
    In the music industry, Amanda has written about the struggle artists face against AI that scrapes their work and spreads it without consent, pointing to the viral track built on AI-deepfaked voices of Drake and the Weeknd, a global hit that neither artist had any involvement in making. Could Grimes have the answer? Who's to say? It's a brave new world.
From DrakeGPT to the AI-generated album Infinite Grimes, AI-made music is finding listeners.
    OpenAI is seeking to trademark "GPT" (which stands for "Generative Pre-trained Transformer") with the U.S. Patent and Trademark Office in response to the wave of knockoffs and fake apps that have begun to appear. The acronym refers to the technology behind many OpenAI models, including ChatGPT and GPT-4, as well as other generative AI systems. OpenAI has also announced that it is building a version of ChatGPT for business customers: the subscription, called ChatGPT Business, is aimed at professionals who need more control over their data and at enterprises that manage their end users.

Other machine learning stories.

Here are some other interesting stories we didn't have room to cover in full but thought deserved a mention.

Stability AI, the open source AI development outfit, has released StableVicuna, a fine-tuned version of the LLaMA foundation language model (the vicuña, of course, being a relative of the llama). Don't worry if you're having trouble keeping track of all these model derivatives; they aren't really there for ordinary users to pick up, but for developers to track and refine as each model's capabilities improve with every version.

If you want to learn more about these systems, OpenAI co-founder John Schulman recently gave a talk at UC Berkeley that you can watch or read about. Among other things, he discusses why LLMs confidently state falsehoods: essentially, they don't know how to do anything else, such as say "I don't know." Schulman thinks that if there's a fix, reinforcement learning from human feedback (RLHF, which StableVicuna uses) will be part of it. Watch the talk below.
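To make the idea slightly more concrete, here is a minimal, self-contained sketch of the reward-modeling step that RLHF builds on. It is not Schulman's, OpenAI's, or StableVicuna's code: the "responses" are stand-in feature vectors, and the point is only to show how pairwise human preferences get turned into a learned reward signal.

```python
import numpy as np

# Toy sketch of RLHF's reward-modeling step. "Responses" are made-up feature
# vectors; a linear reward model is fit so that human-preferred responses
# score higher than rejected ones (a Bradley-Terry-style pairwise loss).

rng = np.random.default_rng(0)
DIM = 16

# Hidden direction standing in for "what human raters actually prefer".
true_pref = rng.normal(size=DIM)

# Fake preference data: for each pair of candidate responses, the one that
# aligns better with the hidden preference is labeled "chosen".
pairs = []
for _ in range(200):
    a, b = rng.normal(size=DIM), rng.normal(size=DIM)
    chosen, rejected = (a, b) if a @ true_pref > b @ true_pref else (b, a)
    pairs.append((chosen, rejected))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w = np.zeros(DIM)   # reward model parameters
lr = 0.5
for _ in range(200):                       # plain gradient descent
    grad = np.zeros(DIM)
    for chosen, rejected in pairs:
        margin = w @ chosen - w @ rejected
        # Gradient of -log(sigmoid(margin)) with respect to w.
        grad -= (1.0 - sigmoid(margin)) * (chosen - rejected)
    w -= lr * grad / len(pairs)

# Sanity check: how often does the learned reward rank the chosen response first?
accuracy = np.mean([w @ c > w @ r for c, r in pairs])
print(f"pairwise ranking accuracy: {accuracy:.2f}")
```

In full RLHF, the chat model itself is then further tuned to maximize that learned reward, which is how behaviors like admitting uncertainty can be reinforced over confident fabrication.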

Over at Stanford, researchers are applying algorithmic optimization (which you may or may not count as machine learning, depending on your tastes) to precision agriculture. To cut waste in irrigation, seemingly simple questions like "Where should I put my sprinklers?" turn out to be surprisingly complex, depending on how accurate you want the answer to be.
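To see why the sprinkler question gets hard, here is a toy sketch (my own illustration, not the Stanford group's method) that treats the field as a grid of cells and greedily places a fixed number of sprinklers so as few cells as possible go dry; the field size, radius, and sprinkler count are all made-up parameters.

```python
import itertools
import math

# Toy illustration, not the Stanford group's method: treat the field as a grid
# of cells that all need water, then greedily place a fixed number of
# sprinklers, each with a fixed reach, to leave as few dry cells as possible.

FIELD_W, FIELD_H = 20, 10     # field dimensions, in grid cells (assumed)
RADIUS = 3.0                  # sprinkler reach, in cells (assumed)
NUM_SPRINKLERS = 4

cells = list(itertools.product(range(FIELD_W), range(FIELD_H)))

def waters(sprinkler, cell):
    """True if a sprinkler at `sprinkler` reaches `cell`."""
    return math.dist(sprinkler, cell) <= RADIUS

def greedy_placement(k):
    placed, dry = [], set(cells)
    for _ in range(k):
        # Choose the candidate position that waters the most still-dry cells.
        best = max(cells, key=lambda p: sum(waters(p, c) for c in dry))
        placed.append(best)
        dry -= {c for c in dry if waters(best, c)}
    return placed, dry

positions, still_dry = greedy_placement(NUM_SPRINKLERS)
print("sprinkler positions:", positions)
print("cells left dry:", len(still_dry), "of", len(cells))
```

Even this stripped-down version is a set-cover-style problem, and real deployments layer on terrain, water pressure, crop needs, and cost, which is where heavier optimization or learned models come in.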

How close do you need to be to appreciate the famous Panorama of Murten? According to the museum, not very close at all. The painting is enormous, measuring 10 meters by 100 meters, and was formerly housed in a rotunda. EPFL and Phase One are collaborating on what could be the largest digital image ever made of it: 150 megapixels multiplied by 127,000, for an image of some 19 petapixels. I may be way off.
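For what it's worth, here is the arithmetic on the figures as quoted above, purely as a sanity check on the numbers as relayed here, not a claim about the project's actual specifications.

```python
MEGAPIXEL = 10**6

# The figures quoted above: 150 megapixels times 127,000.
pixels = 150 * MEGAPIXEL * 127_000
print(f"{pixels:.3e} pixels")           # 1.905e+13

print(pixels / 10**12, "terapixels")    # ~19 terapixels
print(pixels / 10**15, "petapixels")    # ~0.019 petapixels
```

Taken at face value, that product lands closer to 19 terapixels than 19 petapixels, so the units may indeed be off; either way, it is an enormous image.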

That's great news for panorama lovers, but also for anyone interested in studying individual objects and details of the painting. Machine learning could help restore works like this expertly and make them far easier to browse and learn from.

Give living creatures some credit: for all their apparent sophistication, AI systems are comparatively slow learners. They may ace certain academic benchmarks, but in the real world a robot can need hours to build a basic grasp of its surroundings, where a mouse needs only minutes. Why is AI's exploration so inefficient? Researchers at University College London are trying to answer that question, and they suggest that living creatures use a fast feedback loop to work out what matters in a given environment. If we can teach AI to use a similar approach, it will have a much easier time finding its way around our homes, if that's indeed what we want it to do.

Finally, while it seems like generative and conversational AI could be put to great use in video games, we're not quite there yet. If anything, Square Enix set things back a few decades with the AI Tech Preview edition of the vintage point-and-click adventure "Portopia Serial Murder Case." Its attempt at adding natural language input was a flat-out failure, making the free game one of the worst-rated titles on Steam. I'd love to be able to talk my way through Shadowgate or The Dig, but this is certainly not a promising start.

Image credits: Square Enix.