How to set properties of SpeechConfig in createCognitiveServicesSpeechServicesPonyfillFactory?
I have a question
Hi,
Currently, we are trying to implement a microphone widget, like the one in the sample below:
https://github.com/microsoft/BotFramework-WebChat/tree/main/samples/06.recomposing-ui/b.speech-ui
The problem is that as soon as the user makes a very short pause, silence is detected and the recognition result is sent.
I found in the Azure documentation that it is possible to set some properties on the SpeechConfig (segmentation silence timeout, initial silence timeout):
https://learn.microsoft.com/en-us/azure/ai-services/speech-service/how-to-recognize-speech?pivots=programming-language-csharp#change-how-silence-is-handled
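For reference, here is a minimal sketch of what setting those timeouts looks like at the raw Speech SDK level (assuming the `microsoft-cognitiveservices-speech-sdk` npm package; the property names are taken from the linked Azure doc, and the exact `PropertyId` members may vary by SDK version). The open question is whether Web Chat's ponyfill factory exposes any way to apply such settings:

```javascript
// Sketch only: configuring silence timeouts directly on the Speech SDK's
// SpeechConfig, per the linked Azure documentation. The placeholder key
// and region are assumptions for illustration.
const sdk = require('microsoft-cognitiveservices-speech-sdk');

const speechConfig = sdk.SpeechConfig.fromSubscription(
  'YOUR_SUBSCRIPTION_KEY',
  'YOUR_REGION'
);

// Allow up to 5 s of leading silence before recognition gives up.
speechConfig.setProperty(
  sdk.PropertyId.SpeechServiceConnection_InitialSilenceTimeoutMs,
  '5000'
);

// Treat pauses shorter than 2 s as part of the same utterance, so a brief
// hesitation does not end the recognition segment.
speechConfig.setProperty('Speech_SegmentationSilenceTimeoutMs', '2000');
```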
Is it possible to set them using the current implementation of the botframework-webchat library?
Thank you, Nadia.
Hey, did you succeed with the above?
@esperance90 - Do you still need help with this issue?
Regardless, for any others looking to do the same, I've been digging into this to see if there is a way to adjust either the initial or the end silence timeouts, specifically for Cognitive Services Speech (i.e., not DirectLine Speech). I've been rooting around and testing some options, but it is not looking promising. I'm checking with some colleagues in the hopes that I've overlooked something. I'll post an update when I know more.
Any update on this?
@stephen-keefer-va - Thank you for your patience. Unfortunately, this issue slipped through the cracks. However, there does not appear to be a way at this time to pass these properties in as a feature/option of Web Chat.
I will mark this as a feature request for future consideration.