A report by TechCrunch provides some details about the voice moderation system Riot plans to implement. Audio data will be stored regionally and pulled only when a report is submitted. Riot says the audio will be evaluated for code of conduct violations; if one has occurred, the player in question will have a chance to review the recording, after which it will be deleted. If no violation is found, the audio will also be deleted.
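The report-driven flow described above can be sketched roughly as follows. This is an illustrative assumption of how such a pipeline might be structured, not Riot's actual implementation; the class, method, and callback names are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class RegionalAudioStore:
    """Hypothetical sketch: audio is held regionally and only
    evaluated when a report is submitted; the clip is deleted
    whether or not a violation is found."""
    clips: Dict[str, bytes] = field(default_factory=dict)

    def handle_report(self, clip_id: str,
                      violates_code_of_conduct: Callable[[bytes], bool]) -> str:
        # Pull the clip and remove it from storage; per the report,
        # the recording is deleted in both outcomes.
        audio = self.clips.pop(clip_id)
        if violates_code_of_conduct(audio):
            return "violation: shown to reported player, then deleted"
        return "no violation: deleted"
```

The key property the article describes is that evaluation is report-triggered rather than continuous, and that deletion happens on every code path.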
Riot told TechCrunch that the system for monitoring voice communications is still in development, and may take the form of a voice-to-text transcription system or a machine learning model. Modulate’s ToxMod software can already ‘listen’ to human speech and recognise specific words, phrases, or abusive language in general, and Riot may use a similar AI-driven solution in its voice moderation.
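The transcription-plus-recognition approach mentioned here can be illustrated with a minimal sketch: transcribe speech to text, then flag transcripts containing phrases on a blocklist. The `transcribe` stub and the phrase list are assumptions for illustration; real systems like ToxMod use trained models rather than literal string matching.

```python
# Placeholder blocklist; a production system would use a learned
# classifier rather than exact phrases.
FLAGGED_PHRASES = {"example slur", "example threat"}

def transcribe(audio: bytes) -> str:
    # Stand-in for a real speech-to-text model (assumption:
    # here the "audio" is just UTF-8 text for demonstration).
    return audio.decode("utf-8", errors="ignore")

def is_abusive(audio: bytes) -> bool:
    # Flag the clip if any blocklisted phrase appears in the transcript.
    transcript = transcribe(audio).lower()
    return any(phrase in transcript for phrase in FLAGGED_PHRASES)
```

Even this toy version shows why transcription-based moderation is attractive: once speech is text, existing text-moderation tooling can be reused.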