Static memory for incoming UDP packets
Objective:
To pass data from incoming UDP datagrams to 4 threads waiting on their respective queues.
The application is supposed to run non-stop, pumping traffic to a DUT and processing the incoming messages.
This is what I am doing:
1. public byte[] receiveData = new byte[512]
2. receivePacket = new DatagramPacket(receiveData, 0, receiveData.length)
[The above 2 steps are in the constructor of the listener class]
3. while (true)
a. ApplicationStart.serversocket.receive(receivePacket)
b. recvData = new String(receivePacket.getData())
.
. {Processing of data}
.
c. recvData = null
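The steps above can be sketched as a compilable listener class. This is a sketch under assumptions: the names `UdpListener`, `decode`, and the `ApplicationStart.serversocket` stand-in (`serverSocket` parameter here) are illustrative, not from any real codebase.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;

public class UdpListener {
    // Allocated once in the constructor; reused for every receive.
    private final byte[] receiveData = new byte[512];
    private final DatagramPacket receivePacket =
            new DatagramPacket(receiveData, 0, receiveData.length);

    /** Decode only the bytes of this datagram, not the whole 512-byte buffer. */
    static String decode(DatagramPacket packet) {
        return new String(packet.getData(), packet.getOffset(),
                packet.getLength(), StandardCharsets.US_ASCII);
    }

    public void listen(DatagramSocket serverSocket) throws Exception {
        while (true) {
            serverSocket.receive(receivePacket);   // blocks; reuses the same buffer
            String recvData = decode(receivePacket);
            // {Processing of data}

            // receive() shrinks the packet's length to the size of the last
            // datagram, so restore it before the next iteration.
            receivePacket.setLength(receiveData.length);
        }
    }
}
```

Note that `new String(...)` still allocates per datagram; the `decode` helper at least avoids copying the unused tail of the 512-byte buffer by honoring `getLength()`.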
Problem:
The memory is continuously increasing. I suspect this is because it is waiting for the GC to reclaim the unused memory. I wish I could allocate some static memory outside the infinite while loop. The problem I face if I do this is that `receivePacket.getData()` returns a byte array, and to work on the data I need to convert it into a string. All the data is in text format (to be specific, MGCP packets).
Please suggest any way to make sure that the memory is not exhausted.
I don’t want to manually call the garbage collector. I am not sure of the overhead for GC.
Thanks
2 Answers
First, you shouldn't need to call the GC by hand, and it is generally a bad idea to do so.
Having said that, it is unclear what you mean by "the memory is continuously increasing".
If you mean that your application's memory allocation as observed from the outside is increasing, then that is normal. Java will allocate new objects as long as it can, and only run the GC when there is no space immediately available. From the outside, it looks like the JVM is using more and more memory.
If you mean that the JVM is reporting that it is running out of heap space (i.e. by throwing OutOfMemoryError), then you have a problem. However, this problem won't be cured by running the GC. Instead, you need to run a Java memory profiler to find the source of the leak, and fix it.
(Background: Java memory leaks are a bit different to (for instance) C / C++ memory leaks. In C / C++ a leak happens when your application neglects to
free
/delete
an object when it is no longer needed. In Java, a memory leak happens when your application accidentally keeps a reference to an object that is no longer going to be used. If the GC thinks that the application might use the object again, it cannot reclaim it ... and hence the object stays around.)出色地。我当然希望它是作业而不是离岸项目...
Well. I sure hope it's homework and not an offshored project...
Answer to your actual question:
You can create a number of preallocated packets, and add them to a queue.
Grab an unused packet from the start of the queue, and receive into it.
When a handler thread has handled the packet, it puts it at the back of the queue.
Step 3.b should be avoided, as it creates a new byte array and copies the content of the packet into it; rewrite the threads to take packets (or byte arrays) as input.
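The pool described above can be sketched with a `BlockingQueue` of preallocated packets. This is a sketch, not a definitive implementation; the names `PacketPool`, `take`, and `recycle` are made up for illustration.

```java
import java.net.DatagramPacket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Fixed pool of reusable DatagramPackets; no allocation after construction. */
public class PacketPool {
    private final BlockingQueue<DatagramPacket> free;
    private final int bufSize;

    public PacketPool(int size, int bufSize) {
        this.bufSize = bufSize;
        this.free = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            byte[] buf = new byte[bufSize];   // allocated once, up front
            free.add(new DatagramPacket(buf, 0, buf.length));
        }
    }

    /** Receiver thread: blocks until a recycled packet is available. */
    public DatagramPacket take() throws InterruptedException {
        return free.take();
    }

    /** Handler thread: returns the packet for reuse once it is done. */
    public void recycle(DatagramPacket p) {
        // receive() shrinks the packet's length to the size of the last
        // datagram, so restore the full buffer length before reuse.
        p.setLength(bufSize);
        free.add(p);
    }
}
```

Because `take()` blocks when the pool is empty, the receiver naturally stalls instead of allocating without bound when the handlers fall behind.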
It IS possible that you are receiving packets faster than you can handle them; if so, your code will use up all available memory (as you allocate packets and strings and put them on the handler threads' queues).
If you are using a blocking queue, and waiting for "free" packets to read into, that won't happen; at least not in the same way. The UDP packets will be dropped or buffered (or probably both) somewhere in the OS's or Java's network stack, so you need to take care of that. This is why most "message-oriented" protocols end up using TCP/IP even though they actually transport "datagrams".