Be My Eyes: GPT-4 has enabled the development of a digital assistant for people with visual impairments

GPT-4 has enabled the development of a digital assistant for people with visual impairments. This “virtual volunteer” provides AI-generated aid to those who may have difficulty seeing.

OpenAI has unveiled its most recent cutting-edge AI model, GPT-4, and one of its first applications is assisting people with visual impairments. The Be My Eyes app lets blind and low-vision users ask sighted people to describe what their phone’s camera is showing. Now those users also have the option of a “virtual volunteer” that provides AI-generated aid whenever needed.

Since the app’s inception in 2015, Be My Eyes has been a frequent topic of discussion as computer vision and other assistive technologies for visually impaired people have advanced. The app has its limits, however, and a key component has always been a human volunteer who can look through the phone’s camera and provide detailed descriptions or instructions.

This latest version of the application draws on GPT-4’s multimodal capability, meaning it can not only converse coherently but also analyze and comprehend the images it is presented with.

Individuals can transmit photographs through the app to an AI-driven Virtual Volunteer, which will answer inquiries about the image and promptly supply visual help for a variety of activities. For instance, if a user sends a photograph of the inside of their refrigerator, the Virtual Volunteer can not only precisely recognize what is in it but also extrapolate what can be prepared from those ingredients. The tool can suggest multiple meal ideas based on the ingredients on hand and send instructions on how to prepare them.

The video that goes with the description really shows the power of Be My Eyes. In it, Lucy demonstrates how the app helps her with a variety of tasks in real time. If you don’t understand the fast-paced dialogue of a screen reader, you might miss some of it, but she has it explain the design of a dress, name a plant, read a map, translate a sign, direct her to a specific exercise machine at the gym, and instruct her which buttons to press at a vending machine. Check out the video below.

“Become a Virtual Volunteer for My Eyes”

This presentation clearly illustrates how inaccessible much of our cities and businesses are for those with visual impairments. It also highlights how helpful GPT-4’s multimodal conversation can be in the right situations.

Undoubtedly, human volunteers will remain essential for people who use the Be My Eyes app. They cannot be replaced, only reserved for the cases where they are truly needed (and they can be summoned quickly if the AI’s response is unsatisfactory).

For instance, at the gym the AI offers advice such as “the machines that are free are the ones with no one on them.” It’s an impressive feature, as OpenAI’s Sam Altman noted, but one that perhaps shouldn’t be scrutinized too closely.

Be My Eyes and OpenAI have joined forces to set the necessary parameters and to provide support as the AI model continues to advance.

At the moment, this feature is available only to a select few Be My Eyes users, but it will be opened to more people in the near future. The team behind Be My Eyes hopes to make the Virtual Volunteer service available to all blind and low-vision members of its community in the coming months. As with the existing volunteer service, it will be free to use.

It is encouraging that this new GPT-4-powered capability was put to use right away to help people, rather than for more mundane purposes like corporate SaaS platforms. OpenAI has more details about GPT-4 on its website.