What is the most common approach to designing a large server program?
Ok, I know this is pretty broad, but let me narrow it down a bit. I've done a little bit of client-server programming, but nothing that would need to handle more than a couple of clients at a time. So I was wondering, design-wise, what the most mainstream approach to these servers is, and whether people could point me to tutorials, books, or ebooks.
Haha, ok, that didn't really narrow it down. I guess what I'm looking for is a simple but concrete example of how the server-side program is set up.
The way I see it: the client sends a command; the server receives the command and puts it into a queue; the server has either a single dedicated thread or a thread pool that constantly polls this queue, then sends the appropriate response back to the client. Is non-blocking I/O often used?
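That command-queue-plus-worker-pool flow can be sketched in a few lines. This is a minimal illustration, not a full server: `handle_command` and the worker count are invented names, and a plain list stands in for the client socket a real server would write back to.

```python
import queue
import threading

NUM_WORKERS = 4
requests = queue.Queue()  # commands received from clients

def handle_command(cmd):
    # Placeholder for real request processing.
    return f"ack:{cmd}"

def worker():
    while True:
        cmd, reply_to = requests.get()        # blocks; no busy-polling needed
        reply_to.append(handle_command(cmd))  # real code would write to the client socket
        requests.task_done()

for _ in range(NUM_WORKERS):
    threading.Thread(target=worker, daemon=True).start()

# Simulate one client command; `replies` stands in for the client connection.
replies = []
requests.put(("STATS", replies))
requests.join()  # wait until every queued command has been handled
print(replies)   # -> ['ack:STATS']
```

Note that a blocking `get()` removes the need to "constantly poll" the queue: worker threads simply sleep until work arrives.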
I suppose just tutorials, time and practice are really what I need.
*EDIT: Thanks for your responses! Here is a little more of what I'm trying to do I suppose.
This is mainly for the purpose of learning, so I'd rather steer away from frameworks or libraries as much as I can. Take, for example, this somewhat made-up idea:
There is a client program that performs some function and constantly streams its output to a server (there can be many of these clients). The server then creates statistics and stores most of the data. And let's say there is an admin client that can log into the server; if any clients are streaming data to the server, the server in turn streams that data to each of the connected admin clients.
This is how I envision the server program logic:
The server would have 3 threads for managing incoming connections (one for each port it listens on), each spawning a thread to manage every accepted connection:
1) ClientConnection, which would basically just receive output, which we'll just say is text
2) AdminConnection, which would be for sending commands between the server and the admin client
3) AdminDataConnection, which would basically be for streaming client output to the admin client
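The three-listener layout described above might look like the following sketch. The port numbers and handler names are made up for illustration; the two admin handlers are left as stubs.

```python
import socket
import threading

def serve(port, handler):
    """Accept loop for one listening port; spawn a thread per connection."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    while True:
        conn, _addr = srv.accept()
        threading.Thread(target=handler, args=(conn,), daemon=True).start()

def client_connection(conn):
    # Receives streamed text output from a client.
    with conn:
        while conn.recv(4096):
            pass  # real code would parse and enqueue data for the admin side

def admin_connection(conn):
    conn.close()  # command channel (stub)

def admin_data_connection(conn):
    conn.close()  # streaming channel (stub)

# One accept-loop thread per listening port, as described above.
for port, handler in [(15000, client_connection),
                      (15001, admin_connection),
                      (15002, admin_data_connection)]:
    threading.Thread(target=serve, args=(port, handler), daemon=True).start()
```

This is the thread-per-connection model: simple to reason about, but each connection costs a thread, which is one of the scaling limits discussed below.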
When data comes in from a client, the server parses out what is relevant and puts that data in a queue, say adminDataQueue. In turn, there is a thread that watches this queue; every 200 ms (or whatever) it checks the queue to see if there is data, and if there is, it cycles through the AdminDataConnections and sends the data to each.
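A sketch of that fan-out, with one tweak worth noting: a blocking `get()` wakes the broadcaster thread only when data actually arrives, instead of polling every 200 ms regardless. The names (`admin_data_queue`, `admin_data_conns`) are invented, and plain lists stand in for the admin sockets.

```python
import queue
import threading

admin_data_queue = queue.Queue()
admin_data_conns = []            # sinks; a real server would hold sockets here
conns_lock = threading.Lock()    # admin clients connect/disconnect concurrently

def broadcaster():
    while True:
        # Block until data arrives instead of waking every 200 ms regardless.
        item = admin_data_queue.get()
        with conns_lock:
            for sink in admin_data_conns:  # real code: conn.sendall(item)
                sink.append(item)
        admin_data_queue.task_done()

threading.Thread(target=broadcaster, daemon=True).start()

a, b = [], []
with conns_lock:
    admin_data_conns.extend([a, b])
admin_data_queue.put("client-7: cpu=42%")
admin_data_queue.join()  # returns once the broadcaster has fanned it out
print(a, b)  # -> ['client-7: cpu=42%'] ['client-7: cpu=42%']
```

The lock matters: admin clients connecting or dropping while the broadcaster is mid-iteration is exactly the kind of race a polling loop quietly invites.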
Now for the AdminConnection: this would be for any commands or direct requests for data. So you could request statistics; the server side would receive the statistics command, send a command saying "incoming statistics", and then immediately after that send a statistics object or the data.
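One simple way to frame that "announce, then send" exchange is newline-delimited JSON, one object per frame. The message names (`get_stats`, `incoming_stats`) are invented for illustration, not any standard protocol.

```python
import json

def encode(msg):
    # One frame = one JSON object terminated by a newline.
    return (json.dumps(msg) + "\n").encode()

def handle_admin_command(line, stats):
    """Return the frames the server would write back for one admin command."""
    msg = json.loads(line)
    if msg.get("cmd") == "get_stats":
        # Announce what follows, then immediately send the payload.
        return [encode({"cmd": "incoming_stats"}),
                encode({"cmd": "stats", "data": stats})]
    return [encode({"cmd": "error", "reason": "unknown command"})]

frames = handle_admin_command(b'{"cmd": "get_stats"}', {"clients": 3})
print(frames[0])  # -> b'{"cmd": "incoming_stats"}\n'
```

Length-prefixed frames would work just as well; the important part is that both sides agree on where one message ends and the next begins.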
As for the AdminDataConnection, it is just the output from the clients with maybe a few simple commands intertwined.
Aside from the bandwidth concerns and the logical problem of all the client data being funneled together to each of the admin clients, what sort of problems would arise from this design as it scales (again, neglecting bandwidth between the clients and the server, and between the admin clients and the server)?
There are a couple of basic approaches to doing this.
One is an event-driven design: a single thread services every connection, multiplexing them with a `select` call (although FreeBSD and Linux each provide more optimized alternatives, `kqueue` and `epoll` respectively). lighttpd uses this approach and is able to achieve very high scalability, but any in-server computation blocks all other requests. Concurrent dynamic request handling is passed on to separate processes (via CGI) or to waiting processes (via FastCGI or its equivalent). I don't have any particular references handy to point you to, but looking at the web sites of open source projects that use the different approaches for information on their design wouldn't be a bad start.

In my experience, building a worker thread/process setup is easier when working from the ground up. However, if you have a good asynchronous framework that integrates fully with your other communications tasks (such as database queries), it can be very powerful and frees you from some (but not all) thread-locking concerns. If you're working in Python, Twisted is one such framework. I've also been using Lwt for OCaml lately with good success.
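For a concrete feel of the event-driven approach, here is a minimal single-threaded echo server using Python's `selectors` module, which picks `epoll`/`kqueue` under the hood where available. The port number is arbitrary; this is a sketch of the pattern, not of lighttpd's actual implementation.

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll/kqueue under the hood where available

def accept(srv):
    conn, _addr = srv.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)  # any slow in-server work here stalls every client
    else:
        sel.unregister(conn)
        conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 15010))
srv.listen()
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ, accept)

def run_once(timeout=0.1):
    # One turn of the event loop: dispatch each ready socket to its callback.
    for key, _mask in sel.select(timeout):
        key.data(key.fileobj)
```

The comment in `echo` is the key trade-off from above: one thread handles thousands of idle connections cheaply, but any blocking computation inside a callback freezes them all.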
In my experience, building a worker thread/process setup is easier when working from the ground up. If you have a good asynchronous framework that integrates fully with your other communications tasks (such as database queries), however, it can be very powerful and frees you from some (but not all) thread locking concerns. If you're working in Python, Twisted is one such framework. I've also been using Lwt for OCaml lately with good success.