Writing a custom DirectShow RTSP/RTP source push filter - timestamping data received from a live source



I'm writing a custom DirectShow source push filter that is supposed to receive RTP data from a video server and push it to the renderer. I wrote a CVideoPushPin class, which inherits from CSourceStream, and a CVideoReceiverThread class, which is a wrapper for the thread that receives RTP packets from the video server. The receiver thread essentially does three things:

  • receives raw RTP packets and collects some data that is needed for Receiver Reports
  • assembles frames, copies them to the buffer and stores information about them into a 256-element queue, which is defined as follows:

    struct queue_elem {
       char *start;             // Pointer to a frame in a buffer
       int length;              // Length of data
       REFERENCE_TIME recvTime; // Timestamp when the frame was received (stream time)
    };

    struct data {
       struct queue_elem queue[QUEUE_LENGTH];
       int qWrIdx;
       int qRdIdx;
       HANDLE mutex;
    };
    
  • every received frame is timestamped with the current stream time (a fuller sketch of this step follows the list below):

    p->StreamTime(refTime);
    REFERENCE_TIME rt = refTime.GetUnits();
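
For context, here is a minimal sketch (not code from the question) of how the receiver thread might timestamp an assembled frame and push it into the shared queue; the names OnFrameAssembled, m_pFilter and m_myData are illustrative assumptions:

    void CVideoReceiverThread::OnFrameAssembled(char *frameBuf, int frameLen)
    {
        CRefTime refTime;
        m_pFilter->StreamTime(refTime);          // stream time at arrival

        WaitForSingleObject(m_myData.mutex, INFINITE);

        queue_elem &e = m_myData.queue[m_myData.qWrIdx];
        e.start    = frameBuf;
        e.length   = frameLen;
        e.recvTime = refTime.GetUnits();         // 100-ns units

        if (++m_myData.qWrIdx >= QUEUE_LENGTH)   // circular write index
            m_myData.qWrIdx = 0;

        ReleaseMutex(m_myData.mutex);
    }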
    

The problem is that I'm not sure how I should set the timestamps on each MediaSample in the FillBuffer method. I have tried several approaches, but the playback either stops or is too slow.
Currently the FillBuffer method looks like this:

    REFERENCE_TIME thisFrameStartTime, thisFrameEndTime;
    // Make sure there are at least 4 frames in the buffer
    if(noOfFrames >= 4)
    {
        currentQe = m_myData.queue[m_myData.qRdIdx++]; // Take current frame description
        if(m_myData.qRdIdx >= QUEUE_LENGTH)
        {
            m_myData.qRdIdx = 0;
        }
        nextQe = m_myData.queue[m_myData.qRdIdx]; // Take next frame description
        if(currentQe.length > 0)
        {
            memcpy(pData, currentQe.start, currentQe.length);

            pSample->SetActualDataLength(currentQe.length);

            CRefTime refTime;
            m_pFilter->StreamTime(refTime);
            REFERENCE_TIME rt;
            rt = refTime.GetUnits();

            pSample->GetTime(&thisFrameStartTime, &thisFrameEndTime);
            thisFrameEndTime = thisFrameStartTime + (nextQe.recvTime - currentQe.recvTime);
            pSample->SetTime(&thisFrameStartTime, &thisFrameEndTime);
        }
    }
    else
    {
        pSample->SetActualDataLength(0);
    }

In this case I noticed that the number of items in the queue increases very quickly (for some reason the FillBuffer method cannot pull data out of the queue fast enough), and the result is an increasing delay when playing the video. Does anybody have an idea of how I should do the timestamping when receiving data from live sources?


Comments (1)



The renderer will draw the frames when the graph's stream time reaches the timestamp on the sample object. If I read your code correctly, you are timestamping them with the stream time at arrival, so they will always be late at rendering. This is confused somewhat by the audio renderer: if the audio renderer is providing the graph's clock, then it will report the current stream time as the timestamp of whatever sample it is currently playing, and that is going to cause some undesirable timing behaviour.

  1. You want to set a time in the future, to allow for the latency through the graph and any buffering in your filter. Try setting a time perhaps 300ms into the future (stream time now + 300ms).

  2. You want to be consistent between frames, so don't timestamp them based on the arrival time of each frame. Use the RTP timestamp for each frame, and set the baseline for the first one to be 300ms into the future; subsequent frames are then (rtp - rtp_at_baseline) + dshow baseline (with appropriate unit conversions); see the sketch after this list.

  3. You need to timestamp the audio and the video streams in the same way, using the same baseline. However, if I remember correctly, RTP timestamps have a different baseline in each stream, so you need to use the RTCP packets to convert RTP timestamps to (absolute) NTP time, and then convert NTP to DirectShow time using your initial baseline (baseline NTP = dshow stream time now + 300ms).
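
As a rough illustration of point 2 (a hedged sketch, not code from the answer), the RTP-to-DirectShow mapping could look like the fragment below; the TimestampMapper name, the 90 kHz clock rate and the 300ms latency constant are assumptions, and for point 3 the RTP timestamp would first be converted to NTP time via the latest RTCP sender report before applying the same kind of baseline:

    // Fragment: assumes the DirectShow base-class headers (streams.h) are included.
    static const REFERENCE_TIME kLatency      = 300 * 10000LL; // 300ms in 100-ns units
    static const LONGLONG       kRtpClockRate = 90000;         // typical RTP video clock (Hz)

    struct TimestampMapper {
        bool           baselineSet   = false;
        DWORD          baselineRtp   = 0;   // RTP timestamp of the first frame
        REFERENCE_TIME baselineDshow = 0;   // stream time + 300ms, captured at the first frame

        // streamNow: the current stream time, e.g. obtained via StreamTime().
        REFERENCE_TIME RtpToDshow(DWORD rtpTimestamp, REFERENCE_TIME streamNow)
        {
            if (!baselineSet) {
                baselineRtp   = rtpTimestamp;
                baselineDshow = streamNow + kLatency;
                baselineSet   = true;
            }
            // RTP timestamps wrap modulo 2^32; unsigned subtraction handles the wrap.
            DWORD delta = rtpTimestamp - baselineRtp;
            return baselineDshow + ((LONGLONG)delta * 10000000LL) / kRtpClockRate;
        }
    };

In FillBuffer the sample's start time would then come from this mapping, and the stop time from the next frame's mapped start (or start plus a nominal frame duration), rather than from arrival times.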

G
