Is multicast suitable for one-to-many streaming within a local host?

Posted 2024-12-29 05:00:09


I have a central data feed that I want to redistribute to many clients. The data feed produces approx. 1.8 kB/s. Currently I'm writing the feed to a file and each client reads off the end of the file. Something about this just seems wrong. Here is pseudo code for what I have now...

The feed:

o = open('feed.txt', 'a', 0)  # unbuffered; maybe line buffering would be better
while True:
    data = feed.read(8192)    # 'feed' is the upstream source
    data = parse_data(data)
    o.write(data)
    time.sleep(0.01)

The server (each client connects in a new thread):

feed = open('feed.txt', 'r')
feed.seek(-1024, 2)
buffer = ''
while True:
    dat = feed.read(1024)
    if len(dat) == 0:
        # For some reason, once the end of the file is reached
        # I can't read any more data, even when there is more.
        # Somehow backing off and re-seeking seems to fix the problem.
        feed.seek(-1024, 2)
        feed.read(1024)
    buffer += dat
    idx = buffer.rfind('\n')
    if idx > 0:
        data = buffer[:idx]
        buffer = buffer[idx+1:]
        for msg in data.split('\n'):
            client.send(msg)
    time.sleep(0.01)

What I'd like to do is just replace the file with a socket and write the messages directly to multicast packets. Any time a new client connects to the server I just spin up a new thread and start listening for the multicast packets. Are there any standard design patterns to handle this case?
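The replacement described here can be sketched directly: write each parsed message to a multicast group instead of the file. A minimal sketch assuming Linux, with an arbitrary group and port (not values from the post) and traffic pinned to the loopback interface, since everything runs on one host:

```python
import socket

# Assumed values, not from the original post: any group in the
# organization-local 239.0.0.0/8 range works for an internal feed.
GROUP = "239.1.1.1"
PORT = 5007

def open_feed_socket(iface="127.0.0.1"):
    """Publisher socket that replaces the append-only file.

    IP_MULTICAST_IF pins egress to the loopback interface so datagrams
    stay on this host, and IP_MULTICAST_LOOP lets listeners on the same
    machine receive them.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                    socket.inet_aton(iface))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
    return sock

def publish(sock, msg):
    """Send one parsed message per datagram; UDP preserves message
    boundaries, so the newline re-framing in the file version goes away."""
    sock.sendto(msg, (GROUP, PORT))
```

The feed loop then becomes `publish(sock, parse_data(feed.read(8192)))` with no file in between, and each client joins the group instead of tailing `feed.txt`.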


Comments (1)

春夜浅 2025-01-05 05:00:09


Even simpler, just have all clients multicast on the same port. Then your server doesn't even need to track pseudo-connections.

We use a similar scheme for some of the software on our internal network, based on the fact that multicasting is "mostly reliable" on our networking infrastructure. We've stress tested the load and don't start dropping packets until there's over 30K messages/sec.

#!/usr/bin/python

import sys
import socket

ADDR = "239.239.239.9"
PORT = 7999

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # set before bind()
sock.bind((ADDR, PORT))
# Join the group, otherwise the kernel won't deliver the datagrams.
mreq = socket.inet_aton(ADDR) + socket.inet_aton("0.0.0.0")
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(2048)
    print(data)
    sys.stdout.flush()
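The "everyone on the same port" idea can be sketched as below. This assumes Linux delivery semantics, where every socket that has joined the group on that port gets its own copy of each datagram, and pins traffic to loopback for a single-host setup; the helper names and the loopback interface choice are illustrative, not from the answer above:

```python
import socket

# Same group/port style as the listener above; values are illustrative.
GROUP = "239.239.239.9"
PORT = 7999

def join_group(group=GROUP, port=PORT, iface="127.0.0.1"):
    """Subscriber socket; any number of these can coexist on one host
    because SO_REUSEADDR is set before bind()."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    socket.inet_aton(group) + socket.inet_aton(iface))
    return sock

def make_publisher(iface="127.0.0.1"):
    """Feed-side socket, pinned to the loopback interface so a
    single-host test needs no multicast routing."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                    socket.inet_aton(iface))
    return sock
```

With this, the server keeps no per-client state at all: each client calls `join_group()` and reads, and one `make_publisher()` socket fans a single `sendto` out to every subscriber.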