Publishing a stream with librtmp in C/C++

Published 2024-10-09 12:14:10

How do I publish a stream using the librtmp library?
I read the librtmp man page, and for publishing, RTMP_Write() is used.

This is what I am doing:

// Initialise librtmp for publishing
#include <librtmp/rtmp.h>

RTMP *r;
char uri[] = "rtmp://localhost:1935/live/desktop";

r = RTMP_Alloc();
RTMP_Init(r);
RTMP_SetupURL(r, uri);
RTMP_EnableWrite(r);      // must be called before RTMP_Connect() to publish
RTMP_Connect(r, NULL);
RTMP_ConnectStream(r, 0);

Then, to respond to pings and other messages from the server, I use a thread like the following:

// Thread that handles incoming control messages (ping, etc.)
RTMPPacket packet = { 0 };

while (ThreadIsRunning && RTMP_IsConnected(r) && RTMP_ReadPacket(r, &packet))
{
    if (RTMPPacket_IsReady(&packet))
    {
        if (!packet.m_nBodySize)
            continue;
        RTMP_ClientPacket(r, &packet);  // takes care of handling ping/other messages
        RTMPPacket_Free(&packet);
    }
}

After this, I am stuck: how do I use RTMP_Write() to publish a file to Wowza Media Server?

Comments (2)

吻安 2024-10-16 12:14:10

In my own experience, streaming video data to an RTMP server is actually pretty simple on the librtmp side. The tricky part is to correctly packetize video/audio data and read it at the correct rate.

Assuming you are using FLV video files, as long as you can correctly isolate each tag in the file and send each one using one RTMP_Write call, you don't even need to handle incoming packets.

The tricky part is to understand how FLV files are made.
The official specification is available here: http://www.adobe.com/devnet/f4v.html

First, there is a 9-byte header. This header must not be sent to the server; it is only read through in order to make sure the file really is FLV.
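
As an illustration, here is a minimal sketch of that step, assuming the file has already been opened as a FILE *flv (the stdio-based reading and the helper name read_flv_header are my assumptions, not something from the original answer); it checks the "FLV" signature and also skips the 4-byte "previous tag size" of zero that immediately follows the header in the FLV layout:

#include <stdio.h>
#include <string.h>

/* Sketch: read and validate the 9-byte FLV header; do not send it. */
static int read_flv_header(FILE *flv)
{
    unsigned char header[9];
    if (fread(header, 1, 9, flv) != 9)
        return -1;                        /* truncated file */
    if (memcmp(header, "FLV", 3) != 0)
        return -1;                        /* not an FLV file */
    /* header[4] = type flags (audio/video), header[5..8] = header size (9) */
    fseek(flv, 4, SEEK_CUR);              /* skip the initial 4-byte previous tag size (0) */
    return 0;
}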

Then there is a stream of tags. Each tag has an 11-byte header that contains the tag type (video/audio/metadata), the body length, and the tag's timestamp, among other things.

The tag header can be described using this structure:

typedef struct __flv_tag {
  uint8       type;
  uint24_be   body_length; /* in bytes, total tag size minus 11 */
  uint24_be   timestamp; /* milli-seconds */
  uint8       timestamp_extended; /* timestamp extension */
  uint24_be   stream_id; /* reserved, must be "\0\0\0" */
  /* body comes next */
} flv_tag;

The body length and timestamp are stored as 24-bit big-endian integers, with a supplementary byte that extends the timestamp to 32 bits when necessary (that becomes necessary at approximately the 4-hour mark).
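
A small sketch of how those fields can be decoded in C (the helper names read_uint24_be and tag_timestamp are mine, not from librtmp or the answer), assuming the 11-byte tag header has been read into hdr[]:

#include <stdint.h>

/* Decode a 24-bit big-endian integer. */
static uint32_t read_uint24_be(const unsigned char *p)
{
    return ((uint32_t)p[0] << 16) | ((uint32_t)p[1] << 8) | (uint32_t)p[2];
}

/* hdr[0] = type, hdr[1..3] = body_length, hdr[4..6] = timestamp,
   hdr[7] = timestamp_extended (upper 8 bits), hdr[8..10] = stream_id */
static uint32_t tag_timestamp(const unsigned char *hdr)
{
    return ((uint32_t)hdr[7] << 24) | read_uint24_be(&hdr[4]);
}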

Once you have read the tag header, you can read the body itself as you now know its length (body_length).

After that there is a 32-bit big-endian integer value that contains the complete length of the tag (11 bytes + body_length).

You must write the tag header + body + previous tag size in one RTMP_Write call (else it won't play).
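
Putting those pieces together, a hedged sketch of one read-and-send iteration might look like this (it reuses the hypothetical read_uint24_be helper above, assumes a connected RTMP *r and an open FILE *flv, and keeps error handling minimal):

#include <librtmp/rtmp.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read one FLV tag (11-byte header + body + 4-byte previous tag size)
   and push it to the server in a single RTMP_Write call.
   Returns 1 on success, 0 on end of file, -1 on error. */
static int send_one_tag(RTMP *r, FILE *flv)
{
    unsigned char hdr[11];
    if (fread(hdr, 1, 11, flv) != 11)
        return 0;                                   /* end of file */

    uint32_t body_length = read_uint24_be(&hdr[1]);
    uint32_t total = 11 + body_length + 4;          /* header + body + prev tag size */

    unsigned char *tag = malloc(total);
    if (!tag)
        return -1;
    memcpy(tag, hdr, 11);
    if (fread(tag + 11, 1, body_length + 4, flv) != body_length + 4) {
        free(tag);
        return 0;                                   /* truncated tag */
    }

    int written = RTMP_Write(r, (const char *)tag, (int)total);
    free(tag);
    return written > 0 ? 1 : -1;
}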

Also, be careful to send packets at the nominal frame rate of the video, else playback will suffer greatly.
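
One possible way to pace the writes (a POSIX-based sketch of my own, not part of the original answer) is to compare each tag's timestamp with the wall-clock time elapsed since the first tag and sleep for the difference; tag_ts_ms would come from the tag_timestamp() helper sketched earlier:

#include <stdint.h>
#include <time.h>
#include <unistd.h>

/* Sleep until the tag whose timestamp is tag_ts_ms is due, relative to
   the timestamp of the first tag and the moment streaming started. */
static void pace(uint32_t tag_ts_ms, uint32_t first_ts_ms, struct timespec start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    long elapsed_ms = (now.tv_sec - start.tv_sec) * 1000L +
                      (now.tv_nsec - start.tv_nsec) / 1000000L;
    long due_ms = (long)(tag_ts_ms - first_ts_ms);

    if (due_ms > elapsed_ms)
        usleep((useconds_t)(due_ms - elapsed_ms) * 1000);
}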

I have written a complete FLV file demuxer as part of my GPL project FLVmeta that you can use as a reference.

两仪 2024-10-16 12:14:10

In fact, RTMP_Write() seems to require that you already have the RTMP packet formed in buf.

RTMPPacket *pkt = &r->m_write;
...
pkt->m_packetType = *buf++;

So you cannot just push the FLV data there; you need to separate it into packets first.

There is a nice function, RTMP_ReadPacket(), but it reads from the network socket.

I have the same problem as you, hope to have a solution soon.

Edit:

There are certain bugs in RTMP_Write(). I've made a patch and now it works. I'm going to publish that.
