Reading data from a file vs. generating it (speed)

Posted on 2024-09-30 12:47:40


I'm writing a program in C++ that needs to generate a set of points in xyz coordinate space. The set of points varies depending on a number x. The first set has the point (0,0,0). The second set has (0,0,0), (1,0,0), (1,1,0), (1,1,1). The third set contains the second set and adds the points (2,0,0), (2,1,0), (2,1,1), (2,2,0), (2,2,1), (2,2,2).

So I can actually generate the sets with 3 for loops without much trouble. However, I need to use this set in a loop where I give it an input x, and it needs to store the set (into an ADT structure) according to that x; e.g. if x = 2, then the set it will store is the second set mentioned above.
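The three nested loops can be sketched as follows, assuming the pattern inferred from the example sets above: the x-th set is all integer triples (a, b, c) with 0 ≤ c ≤ b ≤ a ≤ x-1. Here std::list stands in for whatever ADT is chosen, and the function name is illustrative, not from the original code:

```cpp
#include <array>
#include <list>

// Sketch: generate the x-th point set, assuming the pattern inferred from
// the examples: all integer triples (a, b, c) with 0 <= c <= b <= a <= x-1.
// std::list stands in for the doubly linked list ADT; names are illustrative.
std::list<std::array<int, 3>> generate_set(int x) {
    std::list<std::array<int, 3>> pts;
    for (int a = 0; a < x; ++a)           // first coordinate
        for (int b = 0; b <= a; ++b)      // second, bounded by the first
            for (int c = 0; c <= b; ++c)  // third, bounded by the second
                pts.push_back({a, b, c});
    return pts;
}
```

With this shape, generate_set(1), generate_set(2), and generate_set(3) reproduce the three example sets, with sizes 1, 4, and 10.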

After each run of this, I process the data, and then the program starts over again and has to pull another set from the same construction, based on some other set of data that it processes before reaching the loop.

The size of the set is governed by the equation (2x^3 + 6x^2 + 4x)/12, which simplifies to x(x+1)(x+2)/6, so the set grows cubically.
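As a quick sanity check, the closed form (2x^3 + 6x^2 + 4x)/12 = x(x+1)(x+2)/6 — the x-th tetrahedral number — can be compared against a brute-force count of the triples 0 ≤ c ≤ b ≤ a ≤ x-1 (function names here are my own, for illustration):

```cpp
// Closed form of (2x^3 + 6x^2 + 4x)/12: the x-th tetrahedral number.
long long set_size(long long x) { return x * (x + 1) * (x + 2) / 6; }

// Brute-force count of triples 0 <= c <= b <= a <= x-1, for comparison.
long long count_points(int x) {
    long long n = 0;
    for (int a = 0; a < x; ++a)
        for (int b = 0; b <= a; ++b)
            for (int c = 0; c <= b; ++c)
                ++n;
    return n;
}
```

For x = 1, 2, 3 this gives 1, 4, 10, matching the example sets; at x = 1000 the set has 167,167,000 points.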

I was curious about whether or not it's faster to generate this set once, print it to a file, and then, instead of regenerating the set every time, read it in from the file I generated and store it into the ADT structure. I actually need to run data from x = 1 to x = 1000, so no matter what I do, this portion of my program has to run 1000 times. Or should I not worry about this kind of thing?

Note:

I realized that I haven't given quite enough information. This set of points will actually be stored in a doubly linked list¹, because what I'm doing is taking another set of points in 3-space, say {point1, point2, ...}. I take point1, and I need to find the minimum distance from point1 to every point in the generated set. After that, I have to remove the point in the generated set that had the minimum distance to point1. Then I go on to point2 and continue the process until I have exhausted all the points in my other set.
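That scan-and-remove step could look like the sketch below, with std::list standing in for the doubly linked list and function names of my own choosing. Comparing squared distances avoids the sqrt without changing which point is nearest:

```cpp
#include <array>
#include <iterator>
#include <list>

using Point = std::array<double, 3>;

// Squared Euclidean distance; the sqrt is skipped since it does not
// change which point is nearest.
double dist2(const Point& p, const Point& q) {
    double dx = p[0] - q[0], dy = p[1] - q[1], dz = p[2] - q[2];
    return dx * dx + dy * dy + dz * dz;
}

// Find the element of `gen` nearest to `query` and erase it.
// std::list stands in for the doubly linked list; each call is an O(n) scan.
void remove_nearest(std::list<Point>& gen, const Point& query) {
    if (gen.empty()) return;
    auto best = gen.begin();
    double bestD = dist2(*best, query);
    for (auto it = std::next(gen.begin()); it != gen.end(); ++it) {
        double d = dist2(*it, query);
        if (d < bestD) { bestD = d; best = it; }
    }
    gen.erase(best);  // erasing via iterator is O(1) in a linked list
}
```

Each query point costs one full pass over the remaining generated points, so processing k query points against a set of n points is O(k·n) overall.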

As mentioned earlier, I'm going to be running from x = 1 to x = 1000, which means I'm actually comparing 1001 different sets to the nth set generated above. I don't actually know beforehand which set I'll want to use from the generated sets; I'll only know which set I need at run time, because the other set I'm using can only be put together at run time. So I make an assessment of the size of that set, get the xth set that is closest in size, and do the minimum-distance calculations stated above.
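Since there are only 1000 candidate values of x, picking the one whose set size x(x+1)(x+2)/6 is closest to the runtime set's size can be a simple linear scan (a sketch; the names are mine, not from the original program):

```cpp
#include <cstdlib>

// Size of the x-th generated set: x(x+1)(x+2)/6.
long long set_size(long long x) { return x * (x + 1) * (x + 2) / 6; }

// Sketch: scan the 1000 candidate values of x and return the one whose
// set size is closest to n, the size of the set assembled at run time.
int closest_x(long long n) {
    int best = 1;
    long long bestDiff = std::llabs(set_size(1) - n);
    for (int x = 2; x <= 1000; ++x) {
        long long diff = std::llabs(set_size(x) - n);
        if (diff < bestDiff) { bestDiff = diff; best = x; }
    }
    return best;
}
```

A thousand iterations per lookup is negligible next to generating or scanning the set itself, so there is little reason to do anything cleverer here.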


1. A friend of mine mentioned that I could use an array as opposed to a doubly linked list, but the reason I chose the doubly linked list is that I need to actually remove a point from my generated set with every minimum-distance comparison (I may actually end up with an empty list when I finish doing the minimum-distance comparisons).
