Streaming input to System.Speech.Recognition.SpeechRecognitionEngine


I am trying to do "streaming" speech recognition in C# from a TCP socket. The problem I am having is that SpeechRecognitionEngine.SetInputToAudioStream() seems to require a seekable Stream with a defined length. Right now the only way I can think of to make this work is to repeatedly run the recognizer on a MemoryStream as more input comes in.

Here's some code to illustrate:

SpeechRecognitionEngine appRecognizer = new SpeechRecognitionEngine();

System.Speech.AudioFormat.SpeechAudioFormatInfo formatInfo = new System.Speech.AudioFormat.SpeechAudioFormatInfo(8000, System.Speech.AudioFormat.AudioBitsPerSample.Sixteen, System.Speech.AudioFormat.AudioChannel.Mono);

NetworkStream stream = new NetworkStream(socket, true);
appRecognizer.SetInputToAudioStream(stream, formatInfo);
// The line above throws a NotSupportedException: "This stream does not support seek operations."

Does anyone know how to get around this? It must support streaming input of some sort, since it works fine with the microphone using SetInputToDefaultAudioDevice().
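
For comparison, here is a minimal sketch of the working microphone path (the DictationGrammar and console handler are just placeholders, not part of the actual application):

using (SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine())
{
    recognizer.LoadGrammar(new DictationGrammar());
    recognizer.SetInputToDefaultAudioDevice();   // microphone input: no seekable stream required
    recognizer.SpeechRecognized += (s, e) => Console.WriteLine(e.Result.Text);
    recognizer.RecognizeAsync(RecognizeMode.Multiple);
    Console.ReadLine();                          // keep recognizing until Enter is pressed
}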

Thanks, Sean


枕梦 2024-08-18 00:30:29

I got live speech recognition working by overriding the stream class:

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;

class SpeechStreamer : Stream
{
    private AutoResetEvent _writeEvent;
    private List<byte> _buffer;
    private int _buffersize;
    private int _readposition;
    private int _writeposition;
    private bool _reset;

    public SpeechStreamer(int bufferSize)
    {
        _writeEvent = new AutoResetEvent(false);
        _buffersize = bufferSize;
        _buffer = new List<byte>(_buffersize);
        for (int i = 0; i < _buffersize; i++)
            _buffer.Add(new byte());
        _readposition = 0;
        _writeposition = 0;
    }

    public override bool CanRead
    {
        get { return true; }
    }

    public override bool CanSeek
    {
        get { return false; }
    }

    public override bool CanWrite
    {
        get { return true; }
    }

    public override long Length
    {
        // Never report a real length, so the engine does not treat the stream as finite.
        get { return -1L; }
    }

    public override long Position
    {
        get { return 0L; }
        set { }
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        // Seeking is a no-op; the engine only ever reads forward.
        return 0L;
    }

    public override void SetLength(long value)
    {

    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int i = 0;
        while (i < count && _writeEvent != null)
        {
            // Block until the writer has produced more data than we have consumed.
            if (!_reset && _readposition >= _writeposition)
            {
                _writeEvent.WaitOne(100, true);
                continue;
            }
            // The offset applies to the destination buffer; the ring buffer is read sequentially.
            buffer[offset + i] = _buffer[_readposition];
            _readposition++;
            if (_readposition == _buffersize)
            {
                _readposition = 0;
                _reset = false;
            }
            i++;
        }

        // Always report a full read so the engine never sees end-of-stream.
        return count;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        // Copy into the ring buffer, wrapping at the end, then wake any blocked reader.
        for (int i = offset; i < offset + count; i++)
        {
            _buffer[_writeposition] = buffer[i];
            _writeposition++;
            if (_writeposition == _buffersize)
            {
                _writeposition = 0;
                _reset = true;
            }
        }
        _writeEvent.Set();
    }

    public override void Close()
    {
        _writeEvent.Close();
        _writeEvent = null;
        base.Close();
    }

    public override void Flush()
    {

    }
}

... and using an instance of that as the stream input to the SetInputToAudioStream method. As soon as the stream reports a length, or a Read call returns fewer bytes than were requested, the recognition engine thinks the input has finished. This class therefore sets up a circular buffer that never finishes.
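
For reference, a minimal wiring sketch of how a SpeechStreamer instance might be hooked up to the TCP socket from the question. It assumes using directives for System, System.Net.Sockets, System.Speech.AudioFormat, System.Speech.Recognition, and System.Threading, and that socket is the connected Socket from the question; the 100 KB buffer size, the DictationGrammar, and the background pump thread are assumptions, not part of the original answer:

SpeechAudioFormatInfo formatInfo = new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono);
SpeechStreamer speechStream = new SpeechStreamer(100000); // ring buffer size is an arbitrary choice

SpeechRecognitionEngine appRecognizer = new SpeechRecognitionEngine();
appRecognizer.LoadGrammar(new DictationGrammar());
appRecognizer.SpeechRecognized += (s, e) => Console.WriteLine(e.Result.Text);
appRecognizer.SetInputToAudioStream(speechStream, formatInfo);
appRecognizer.RecognizeAsync(RecognizeMode.Multiple);

// Pump raw PCM bytes from the socket into the ring buffer on a background thread.
NetworkStream netStream = new NetworkStream(socket, true);
Thread pump = new Thread(() =>
{
    byte[] chunk = new byte[4096];
    int read;
    while ((read = netStream.Read(chunk, 0, chunk.Length)) > 0)
        speechStream.Write(chunk, 0, read);
});
pump.IsBackground = true;
pump.Start();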

樱花细雨 2024-08-18 00:30:29

Have you tried wrapping the network stream in a System.IO.BufferedStream?

NetworkStream netStream = new NetworkStream(socket, true);
BufferedStream buffStream = new BufferedStream(netStream, 8000 * 16 / 8 * 1); // 16,000 bytes buffers 1 second of 8 kHz, 16-bit mono audio
appRecognizer.SetInputToAudioStream(buffStream, formatInfo);

安静被遗忘 2024-08-18 00:30:29

This is my solution.

using System;
using System.IO;
using System.Net.Sockets;

class FakeStreamer : Stream
{
    public bool bExit = false;
    Stream stream;
    TcpClient client;
    public FakeStreamer(TcpClient client)
    {
        this.client = client;
        this.stream = client.GetStream();
        this.stream.ReadTimeout = 100; // 100 ms, so the read loop can poll bExit and the connection state
    }
    public override bool CanRead
    {
        get { return stream.CanRead; }
    }

    public override bool CanSeek
    {
        get { return false; }
    }

    public override bool CanWrite
    {
        get { return stream.CanWrite; }
    }

    public override long Length
    {
        get { return -1L; }
    }

    public override long Position
    {
        get { return 0L; }
        set { }
    }
    public override long Seek(long offset, SeekOrigin origin)
    {
        return 0L;
    }

    public override void SetLength(long value)
    {
        stream.SetLength(value);
    }
    public override int Read(byte[] buffer, int offset, int count)
    {
        int len = 0, c = count;
        while (c > 0 && !bExit)
        {
            try
            {
                len = stream.Read(buffer, offset, c);
            }
            catch (Exception e)
            {
                if (e.HResult == -2146232800) // 0x80131620: the IOException thrown when ReadTimeout elapses, so keep polling
                {
                    continue;
                }
                else
                {
                    // Any other error: exit the read loop
                    break;
                }
            }
            if (!client.Connected || len == 0)
            {
                // Client disconnected: exit the read loop
                return 0;
            }
            offset += len;
            c -= len;
        }
        // Always report the full count so the recognizer keeps waiting for more audio.
        return count;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        stream.Write(buffer,offset,count);
    }

    public override void Close()
    {
        stream.Close();
        base.Close();
    }

    public override void Flush()
    {
        stream.Flush();
    }
}

How to Use:

// client connects
TcpClient clientSocket = ServerSocket.AcceptTcpClient();
FakeStreamer buffStream = new FakeStreamer(clientSocket);
...
// recognizer init
m_recognizer.SetInputToAudioStream(buffStream, audioFormat);
...
// recognizer end
if (buffStream != null)
    buffStream.bExit = true;

愚人国度 2024-08-18 00:30:29

I ended up buffering the input and then sending it to the speech recognition engine in successively larger chunks. For instance, I might send at first the first 0.25 seconds, then the first 0.5 seconds, then the first 0.75 seconds, and so on until I get a result. I am not sure if this is the most efficient way of going about this, but it yields satisfactory results for me.

Best of luck, Sean
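
A rough sketch of that growing-buffer approach (not code from the original post; it assumes netStream is the connected NetworkStream from the question, the usual System, System.IO, and System.Speech using directives, and a plain DictationGrammar):

MemoryStream received = new MemoryStream();
SpeechAudioFormatInfo formatInfo = new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono);
int step = 8000 * 2 / 4;   // ~0.25 s of 8 kHz, 16-bit mono audio (4,000 bytes)
long nextAttempt = step;
byte[] chunk = new byte[4096];
int read;

while ((read = netStream.Read(chunk, 0, chunk.Length)) > 0)
{
    received.Write(chunk, 0, read);
    if (received.Length < nextAttempt)
        continue;

    // Re-run recognition over everything buffered so far.
    using (SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine())
    using (MemoryStream snapshot = new MemoryStream(received.ToArray()))
    {
        recognizer.LoadGrammar(new DictationGrammar());
        recognizer.SetInputToAudioStream(snapshot, formatInfo);
        RecognitionResult result = recognizer.Recognize();
        if (result != null)
        {
            Console.WriteLine(result.Text);
            break;   // got a result, stop growing the buffer
        }
    }
    nextAttempt += step;
}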
