In the days before Silicon Valley Bank’s collapse, most conversations in the tech industry seemed to revolve around AI and chatbots. Just recently, OpenAI, which is backed by Microsoft, unveiled its GPT-4 language model, while Anthropic launched its Claude chatbot. Google announced plans to bring AI into Workspace apps such as Gmail and Docs, and Microsoft’s Bing search engine has been gaining recognition for its chatbot-powered search. One name was conspicuously absent from all this activity: Apple.
Last month, the Cupertino-based company held an internal event centered on AI and large language models. According to The New York Times, various teams, including the one working on Siri, are regularly experimenting with “language-generating concepts.”
Many people, myself included, have complained about Siri’s inability to understand their queries. Digital assistants like Alexa, Google Assistant, and Siri are also poor at understanding the accents and phonetics of people from different parts of the world, even when they are speaking the same language.
The recent popularity of ChatGPT and text-based search has made it easier for people to interact with a range of AI models. At present, however, the only way to chat with Apple’s assistant Siri by text is to turn on a setting buried in the accessibility options.
John Burke, a former Apple engineer who worked on Siri, told The New York Times that the assistant’s development has been sluggish because of “clunky code,” which makes even basic updates slow to ship. He also described the hefty database of words Siri relies on, which had to be rebuilt from scratch whenever engineers wanted to add features or phrases – a process that reportedly took up to six weeks.
The New York Times article didn’t reveal whether Apple is building its own language models or adopting an existing one. Like Google and Microsoft, Apple likely wouldn’t limit itself to a Siri-powered chatbot. The company has historically supported and celebrated artists and creators, so it would make sense for it to apply advances in language models to those domains as well.
The company has been shipping AI-driven features for some time, even if they aren’t immediately obvious. These include better keyboard predictions, image processing, Face ID unlocking, system-wide separation of subjects from backgrounds in photos, the Apple Watch’s handwashing and crash detection, and, most recently, a karaoke feature in Apple Music. None of these, however, is as overt as a chatbot.
Apple has not made much noise about its AI efforts. But in January it launched a program offering authors AI-assisted narration to turn their books into audiobooks, suggesting the iPhone maker is already weighing the potential of generative AI. I wouldn’t be surprised if we hear more about the company’s progress in these areas at the Worldwide Developers Conference (WWDC) in a few months.