Java NIO ByteBuffer allocation to fit the largest dataset?
I'm working on an online game and I've hit a little snag while working on the server side of things.
When using nonblocking sockets in Java, what is the best course of action to handle complete packet data sets that cannot be processed until all the data is available? For example, sending a large 2D tiled map over a socket.
I can think of two ways to handle it:
Allocate a ByteBuffer large enough to hold the complete data set, the large 2D tiled map from my example. Keep adding read data to the buffer until it has all been received, then process it from there.
If the ByteBuffer is a smaller size (perhaps 1500 bytes), subsequent reads can be written out to a file until the message can be processed completely from the file. This avoids having to keep large ByteBuffers, but degrades performance because of the disk I/O.
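A minimal sketch of the first option, assuming the total size of the dataset is known up front (the `MapReader` name and `expectedSize` parameter are illustrative, not from the question). It reads from a `ReadableByteChannel`, which `SocketChannel` implements, so the same method works against a real nonblocking socket:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

class MapReader {
    private final ByteBuffer buffer;

    MapReader(int expectedSize) {
        // Allocate once, large enough for the whole dataset (option 1).
        this.buffer = ByteBuffer.allocate(expectedSize);
    }

    /** Accumulates whatever is available; returns true once complete. */
    boolean readFrom(ReadableByteChannel channel) throws IOException {
        if (channel.read(buffer) == -1) throw new IOException("peer closed");
        if (!buffer.hasRemaining()) {   // buffer full: dataset complete
            buffer.flip();              // switch to draining mode for processing
            return true;
        }
        return false;                   // partial data; keep buffer, read again later
    }

    ByteBuffer message() { return buffer; }

    public static void main(String[] args) throws IOException {
        // Simulate a channel delivering a 4-byte "map".
        ReadableByteChannel ch =
            Channels.newChannel(new ByteArrayInputStream(new byte[]{1, 2, 3, 4}));
        MapReader reader = new MapReader(4);
        while (!reader.readFrom(ch)) { /* in a real server: wait for OP_READ */ }
        System.out.println(reader.message().remaining()); // 4
    }
}
```

In a selector loop, `readFrom` would be called each time the channel fires `OP_READ`, returning false until the last fragment arrives.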
I'm using a dedicated ByteBuffer for every SocketChannel so that I can keep reading in data until it's complete for processing. The problem is that if my 2D tiled map amounts to 2MB in size, is it really wise to use 1000 two-megabyte ByteBuffers (assuming 1000 is the client connection limit and they are all in use)? There must be a better way that I'm not thinking of.
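One way to avoid reserving 1000 × 2MB up front is length-prefixed framing with lazy allocation: each connection permanently owns only a tiny header buffer, and the large body buffer exists only while a frame is actually in flight, then is released for GC. A sketch under those assumptions (the `FrameReader` name and 4-byte length prefix are illustrative, not part of the question):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

class FrameReader {
    private final ByteBuffer header = ByteBuffer.allocate(4); // length prefix
    private ByteBuffer body;                                  // null between frames

    /** Returns the complete frame, or null if more data is needed. */
    ByteBuffer readFrom(ReadableByteChannel ch) throws IOException {
        if (body == null) {
            if (ch.read(header) == -1) throw new IOException("peer closed");
            if (header.hasRemaining()) return null;      // length not complete yet
            header.flip();
            body = ByteBuffer.allocate(header.getInt()); // allocated only now
            header.clear();                              // ready for the next frame
        }
        if (ch.read(body) == -1) throw new IOException("peer closed");
        if (body.hasRemaining()) return null;            // body not complete yet
        ByteBuffer complete = body;
        body = null;                                     // big buffer freed between frames
        complete.flip();
        return complete;
    }

    public static void main(String[] args) throws IOException {
        // One frame on the wire: length 4, then payload {9, 8, 7, 6}.
        ByteBuffer wire = ByteBuffer.allocate(8);
        wire.putInt(4).put(new byte[]{9, 8, 7, 6});
        ReadableByteChannel ch =
            Channels.newChannel(new ByteArrayInputStream(wire.array()));
        FrameReader reader = new FrameReader();
        ByteBuffer frame = null;
        while (frame == null) frame = reader.readFrom(ch);
        System.out.println(frame.remaining()); // 4
    }
}
```

A real server would also validate the decoded length against a sane maximum before allocating, since a malicious client could otherwise request an enormous buffer.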
I'd prefer to keep things simple, but I'm open to any suggestions and appreciate the help. Thanks!
Probably, the best solution for now is to use the full 2MB ByteBuffer and let the OS take care of paging to disk (virtual memory) if that's necessary. You probably won't have 1000 concurrent users right away, and when you do, you can optimize. You may be surprised what your real performance issues are.
I decided the best course of action was to simply reduce the size of my massive dataset and send tile updates instead of an entire map update. That way I can simply send a list of tiles that have changed on a map instead of the entire map over again. This reduces the need for such a large buffer and I'm back on track. Thanks.