According to Apple, users can create a Personal Voice by reading a set of text prompts aloud for a total of 15 minutes of audio on the iPhone or iPad. Since the feature integrates with Live Speech, users can then type what they want to say and have their Personal Voice read it to whomever they want to talk to. Apple says the feature uses “on-device machine learning to keep users’ information private and secure.”
There’s also a new detection mode in Magnifier for users who are blind or have low vision, designed to help them interact with physical objects that carry multiple text labels. As an example, Apple says a user can aim their device’s camera at something like a microwave keypad, and the iPhone or iPad will read each number or setting aloud as the user moves their finger across the appliance.
See also: “iPhones will be able to speak in your voice with 15 minutes of training.” Apple previews new cognition, vision, and hearing features.