My problem is this: I'm developing a Universal App (Windows 10) for phones that uses speech synthesis and speech recognition to interact with the user. When I test the app on my device (a Lumia 830) it fails: when the synthesizer is activated, it freezes on certain phrases such as "Say a Command" and ends up saying "say to Commmaaaaaaaaaaaand"... It's been a headache. After a lot of testing, I determined that this happens when I use synthesis and recognition in the same method (note: this is asynchronous C#). I also tried separating them into different classes, and the problem is still the same. Note that it works perfectly in the emulator. I'd appreciate any suggestions, thanks. This is my code:
// Pick a male voice for the synthesizer ("sin" is my SpeechSynthesizer instance)
sin.Voice = SpeechSynthesizer.AllVoices.First(x => x.Gender == VoiceGender.Male);

// BCP-47 language tag for the recognizer
var language = new Windows.Globalization.Language("en-US");
SpeechRecognizer recognizer = new SpeechRecognizer(language);

// The commands the recognizer should accept
string[] responses = { "One", "Two", "Three", "End", "Exit" };
var listConstraint = new Windows.Media.SpeechRecognition.SpeechRecognitionListConstraint(responses, "Comandos");
recognizer.Constraints.Add(listConstraint);
await recognizer.CompileConstraintsAsync();

// Start recognition; the Completed handler attached below fires when it finishes
var recognition = recognizer.RecognizeAsync();
//await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.High, new DispatchedHandler(
// () => { mediaElement.Stop(); mediaElement.SetSource(stream, stream.ContentType); mediaElement.Play(); }));
// Synthesize the prompt and play it through the MediaElement
string texto = "Say a Command";
SpeechSynthesisStream stream = await sin.SynthesizeTextToStreamAsync(texto);
// If a previous prompt is still playing, stop it before setting the new stream
if (mediaElement.CurrentState == MediaElementState.Playing)
{
    mediaElement.Stop();
}
mediaElement.SetSource(stream, stream.ContentType);
mediaElement.Play();

// Handle the result once recognition completes
recognition.Completed += this.Recognition_Completed;
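
To show what I mean by separating them into different classes, here is a simplified sketch of that attempt (the Speaker and Listener class names are illustrative, not my real code). Since SayAsync returns as soon as playback starts, recognition still overlaps the prompt audio, and the device stutters the same way:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;
using Windows.Media.SpeechSynthesis;
using Windows.UI.Xaml.Controls;

// Plays a synthesized prompt through the page's MediaElement.
public class Speaker
{
    private readonly SpeechSynthesizer sin = new SpeechSynthesizer();

    public async Task SayAsync(MediaElement mediaElement, string texto)
    {
        SpeechSynthesisStream stream = await sin.SynthesizeTextToStreamAsync(texto);
        mediaElement.SetSource(stream, stream.ContentType);
        mediaElement.Play(); // returns immediately; audio keeps playing in the background
    }
}

// Listens for one of the allowed commands and returns the recognized text.
public class Listener
{
    public async Task<string> ListenAsync(IEnumerable<string> responses)
    {
        var recognizer = new SpeechRecognizer(new Windows.Globalization.Language("en-US"));
        recognizer.Constraints.Add(new SpeechRecognitionListConstraint(responses, "Comandos"));
        await recognizer.CompileConstraintsAsync();
        SpeechRecognitionResult result = await recognizer.RecognizeAsync();
        return result.Text;
    }
}

Called from the page like this, the prompt is still playing when recognition starts, so synthesis and recognition are active at the same time, which is exactly the case where the audio breaks on the Lumia 830:

await new Speaker().SayAsync(mediaElement, "Say a Command");
string command = await new Listener().ListenAsync(new[] { "One", "Two", "Three", "End", "Exit" });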