We've built our own text-to-speech system with an initial English model we trained ourselves on fully open source data. It will be added to our App Store soon and then included in GrapheneOS as a TTS backend enabled by default once some more improvements are made to it.
We're going to build our own speech-to-text implementation to go along with this. We're starting with an English model for both, but we can add other languages which have high quality training data available. English and Mandarin have by far the most training data available.
Existing text-to-speech and speech-to-text implementations didn't meet our functionality or usability requirements. At a minimum, we want very high quality, low latency, robust implementations of both for English included in the OS. This will help make GrapheneOS more accessible.
Our full-time developer working on this already built their own Transcribro app for on-device speech-to-text, available in the Accrescent app store. For GrapheneOS itself, we want genuinely open source implementations of these features rather than OpenAI-style phony open source.
Whisper is actually closed source: open weights is just another way of saying permissively licensed closed source. Our text-to-speech and speech-to-text implementations will both be genuinely open source, which means people can actually fork them and add, change or remove training data, etc.
@GrapheneOS You guys are the best 🙌
@GrapheneOS i could help with spanish and esperanto models if needed
@GrapheneOS the "largeness" of language models is precisely a measure of the difficulty of reproducing them. this methodology has some similarities to something i proposed to huggingface a few years back in a cover letter. no surprise to see they were not interested in reproducibility or the scientific method
@GrapheneOS i have also been trying to find similarly motivated people to collaborate with on a research project to reproduce the fawkes facial recognition poisoner on a mobile device (ideally as an asynchronous but fully local image postprocessing technique) cc @xyhhx @bunnyhero
@GrapheneOS @xyhhx @bunnyhero i have been putting it off repeatedly but the fawkes paper itself is very high quality and imo intended to be reproduced. if there are resources your team has developed or considered regarding modern hardware on mobile phones for statistical training and inference (fawkes especially requires a training step with local user input iirc) it would be tremendously helpful for our goals here.
@GrapheneOS @xyhhx @bunnyhero we obviously expect reduced efficacy vs the SANDlab implementation with GPU acceleration but the math and the code are both very approachable and since its publication we have seen phones add specific "NPU" chips for matmul/etc and this would be a fun way to subvert the utility of "AI" ubiquitization to embed panoptic surveillance

@GrapheneOS I replied to one of your posts a couple months ago when yall asked about TTS, suggesting Piper TTS models (https://github.com/OHF-Voice/piper1-gpl). There are definitely some quality, performant English models, though I haven't dug into whether they are truly open source (i.e. open dataset) or just open weights.

Either way, I am very excited to see more projects by gOS and more quality options in the TTS & STT spaces. People with disabilities deserve equal access to technology, and anything that brings us closer to a world where that is possible is a good thing.