Apple and the University of Illinois are teaming up with Google, Meta, and other tech companies on an initiative called the Speech Accessibility Project. The goal is to study how artificial intelligence algorithms can be tuned to improve voice recognition for users with diseases and conditions that affect speech, including ALS and Down syndrome.

Voice-driven features are only as good as the algorithms that power them, and tuning those algorithms is critical for reaching users with Lou Gehrig's disease (ALS), cerebral palsy, and other conditions that affect speech.

Read more in Steven Aquino's interview with the team behind the new project.