With Global Accessibility Awareness Day just days away, Apple is previewing a raft of new iOS features for cognitive accessibility, along with Live Speech, Personal Voice and more. The company said it worked in "deep collaboration" with community groups representing users with disabilities, and drew on "advances in hardware and software, including on-device machine learning" to make them work.
The biggest update is "Assistive Access," designed to support users with cognitive disabilities. Essentially, it provides a custom, simplified experience for the Phone, FaceTime, Messages, Camera, Photos, and Music apps. That includes a "distinct interface with high contrast buttons and large text labels," along with tools that trusted supporters can customize for each individual.
"For example, for users who prefer communicating visually, Messages includes an emoji-only keyboard and the option to record a video message to share with loved ones. Users and trusted supporters can also choose between a more visual, grid-based layout for their Home Screen and apps, or a row-based layout for users who prefer text," Apple wrote.
The aim is to break down technological barriers for people with cognitive disabilities. "The intellectual and developmental disability community is bursting with creativity, but technology often poses physical, visual, or knowledge barriers for these individuals," said The Arc's Katy Schmid in a statement. "To have a feature that provides a cognitively accessible experience on iPhone or iPad — that means more open doors to education, employment, safety, and autonomy. It means broadening worlds and expanding potential."
Also new are Live Speech and Personal Voice for iPhone, iPad and Mac. Live Speech lets users type what they want to say and have it spoken out loud during phone and FaceTime calls or in-person conversations. For users who can still speak but are at risk of losing that ability due to a diagnosis of ALS or another condition, there's the Personal Voice feature.
It lets them create a voice that sounds like their own by reading along with a randomized set of text prompts to record 15 minutes of audio on iPhone or iPad. The voice is generated with on-device machine learning, which keeps user information private, and it works with Live Speech so users can effectively speak with others using a version of their own voice. "If you can tell [your friends and family] you love them, in a voice that sounds like you, it makes all the difference in the world," said Team Gleason board member and ALS advocate Philip Green, whose own voice has been impacted by ALS.
Finally, Apple has introduced a Point and Speak function in Magnifier to help users with vision disabilities interact with physical objects. "For example, while using a household appliance — such as a microwave — Point and Speak combines input from the Camera app, the LiDAR Scanner, and on-device machine learning to announce the text on each button as users move their finger across the keypad," it wrote. The feature is built into the Magnifier app on iPhone and iPad, and can be used alongside other Magnifier features such as People Detection and Door Detection.
Along with the new functions, Apple is introducing new features, curated collections and more for Global Accessibility Awareness Day. Those include the launch of SignTime in Germany, Italy, Spain and South Korea to connect Apple Store and Support customers with on-demand sign language interpreters, along with informative sessions on accessibility at select Apple Store locations around the world. It's also offering podcasts, movies and more highlighting the impact of accessible tech. The new Assistive Access and other features are set to roll out later this year, Apple said — for more, check out its press release.
This article originally appeared on Engadget at https://ift.tt/ALpqF74