Initialise two large 401x401 2D arrays and perform fast matrix multiplication in C++

Posted 2025-01-23 14:08:31


I want to initialise two 2D matrices of size 401x401 and multiply them in a speedy way.
Most probably due to stack overflow, the two double 2D matrices were not initialised, as described in this question: No output after new arrays being initialized in C++.

After following suggestions there, I used vectors of vectors to store my 2D matrices. But I want to do fast matrix multiplication, as time is a significant factor for me. There were suggestions not to use vectors of vectors for this purpose: How can we speedup matrix multiplication where matrices are initialized using vectors of vectors (2D vector) in C++.

Maybe I could convert back to arrays, but I suspect there would again be a stack overflow!
How can I both initialise two large matrices and multiply them quickly? If I stick to vectors of vectors, there is no way to use the abilities of built-in libraries such as MKL.


Comments (2)

冷︶言冷语的世界 2025-01-30 14:08:31


This can be sped up considerably.

First, to eliminate the stack usage issue, declare the matrix like so:

std::vector<std::array<double,401>> matrix(401);

This keeps all the matrix memory contiguous, which makes accesses cache-friendly.

Next, transpose one of the matrices. This makes the multiplication proceed in sequential memory accesses.

And finally, this lends itself to further speed improvement through multithreading. For instance, create 4 threads that process 100, 100, 100, and 101 rows respectively. No thread synchronisation is required, since each thread writes only to its own rows. Just join them all at the end and you're done.

I was curious, so I hacked up some code to time 4 different conditions:

  • Simple matrix multiply
  • Matrix multiply with transpose for memory efficiency
  • Matrix multiply with 4 threads
  • Matrix multiply with 4 threads and transpose

The results for the vector<vector> versus vector<array> structures are:

  vector<array> structure
Basic multiply          0.0955 secs.
Multiply with transpose 0.0738 secs.
4 Threads               0.0268 secs.
4 Threads w transpose   0.0164 secs.

  vector<vector> structure
Basic multiply          0.1263 secs.
Multiply with transpose 0.0739 secs.
4 Threads               0.0346 secs.
4 Threads w transpose   0.0167 secs.


// Assumed supporting definitions (not shown in the original post):
#include <array>
#include <chrono>
#include <future>
#include <iomanip>
#include <iostream>
#include <random>
#include <vector>

using std::vector;
using Float = double;
constexpr size_t N = 401;                       // matrix dimension
constexpr size_t THREADS = 4;
constexpr auto launchType = std::launch::async;
using Matrix = std::vector<std::array<double, N>>;
Matrix create_matrix() { return Matrix(N); }    // zero-initialised N x N matrix

struct Timer    // converts to elapsed seconds since construction
{
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    operator double() const
    {
        return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
    }
};

Matrix product(Matrix const& v0, Matrix const& v1, double& time, int mode = 0)
{
    Matrix ret = create_matrix();
    Matrix v2 = create_matrix();
    Timer timer;
    if (mode == 0)  // Basic matrix multiply
    {
        for (size_t row = 0; row < N; row++)
            for (size_t col = 0; col < N; col++)
                for (size_t col_t = 0; col_t < N; col_t++)
                    ret[row][col] += v0[row][col_t] * v1[col_t][col];
    }
    else if (mode == 1) // Matrix multiply after transposing v1 for better memory access
    {
        for (size_t r = 0; r < N; r++)      // transpose first
            for (size_t c = 0; c < N; c++)
                v2[c][r] = v1[r][c];
        for (size_t row = 0; row < N; row++)
            for (size_t col = 0; col < N; col++)
                for (size_t col_t = 0; col_t < N; col_t++)
                    ret[row][col] += v0[row][col_t] * v2[col][col_t];
    }
    else if (mode==2) // Matrix multiply using threads
    {
        // lambda to process sets of rows in separate threads
        auto do_row_set = [&v0, &v1, &ret](size_t start, size_t last) {
            for (size_t row = start; row < last; row++)
                for (size_t col = 0; col < N; col++)
                    for (size_t col_t = 0; col_t < N; col_t++)
                        ret[row][col] += v0[row][col_t] * v1[col_t][col];
        };

        vector<size_t> seq;
        const size_t NN = N / THREADS;
        // chunk boundaries: THREADS chunks of NN rows each, with the last
        // chunk absorbing the remainder (100, 100, 100, 101 for N = 401)
        for (size_t t = 0; t < THREADS; t++)
            seq.push_back(t * NN);
        seq.push_back(N);
        vector<std::future<void>> results; results.reserve(THREADS);
        for (size_t i = 0; i < seq.size() - 1; i++)
            results.emplace_back(std::async(launchType, do_row_set, seq[i], seq[i + 1]));
        for (auto& x : results)
            x.get();
    }
    else
    {
        for (size_t r = 0; r < N; r++)      // transpose first
            for (size_t c = 0; c < N; c++)
                v2[c][r] = v1[r][c];
        // lambda to process sets of rows in separate threads
        auto do_row_set = [&v0, &v2, &ret](size_t start, size_t last) {
            for (size_t row = start; row < last; row++)
                for (size_t col = 0; col < N; col++)
                    for (size_t col_t = 0; col_t < N; col_t++)
                        ret[row][col] += v0[row][col_t] * v2[col][col_t];
        };

        vector<size_t> seq;
        const size_t NN = N / THREADS;
        // chunk boundaries: THREADS chunks of NN rows each, with the last
        // chunk absorbing the remainder (100, 100, 100, 101 for N = 401)
        for (size_t t = 0; t < THREADS; t++)
            seq.push_back(t * NN);
        seq.push_back(N);
        vector<std::future<void>> results; results.reserve(THREADS);
        for (size_t i = 0; i < seq.size() - 1; i++)
            results.emplace_back(std::async(launchType, do_row_set, seq[i], seq[i + 1]));
        for (auto& x : results)
            x.get();
    }
    time = timer;
    return ret;
}

bool operator==(Matrix const& v0, Matrix const& v1)
{
    for (size_t r = 0; r < N; r++)
        for (size_t c = 0; c < N; c++)
            if (v0[r][c] != v1[r][c])
                return false;
    return true;
}

int main()
{
    auto fill = [](Matrix& v) {
        std::mt19937_64 r(1);
        std::normal_distribution dist(1.);
        for (size_t row = 0; row < N; row++)
            for (size_t col = 0; col < N; col++)
                v[row][col] = Float(dist(r));
    };
    Matrix m1 = create_matrix(), m2 = create_matrix(), m3 = create_matrix(),
           m4 = create_matrix(), m5 = create_matrix(), m6 = create_matrix();
    fill(m1); fill(m2);

    auto process_test = [&m1, &m2](int mode, Matrix& out) {
        const int rpt_count = 50;
        double sum = 0;
        for (int i = 0; i < rpt_count; i++)
        {
            double time;
            out = product(m1, m2, time, mode);
            sum += time / rpt_count;
        }
        return sum;
    };

    std::cout << std::fixed << std::setprecision(4);
    double time{};
    time = process_test(0, m3);
    std::cout << "Basic multiply          " << time << " secs.\n";
    time = process_test(1, m4);
    std::cout << "Multiply with transpose " << time << " secs.\n";
    time = process_test(2, m5);
    std::cout << "4 Threads               " << time << " secs.\n";
    time = process_test(3, m6);
    std::cout << "4 Threads w transpose   " << time << " secs.\n";
    if (!(m3==m4 && m3==m5 && m3==m6))
        std::cout << "Error\n";
}
呆萌少年 2025-01-30 14:08:31


You can get the memory on the heap via new. Allocating repeatedly will not really help here, as new is not a fast operation; instead you can skirt the problem by getting one large block of memory once and pretending it's 2D. You can do it like this:

double* raw_data = new double[401*401];
double* matrix[401];
for(size_t i = 0; i < 401; ++i)
    matrix[i] = raw_data + 401*i;
// do stuff
delete[] raw_data;

You could do something similar with memory managed by a std::vector, I think, or with smart pointers. This will of course not be as fast as allocating on the stack, but it should be passable.

With this approach you have to be careful about memory leaks, and remember that as soon as raw_data is freed, matrix is invalidated as well. Some of these problems can be addressed by using a smart container for raw_data; the others you have to keep in mind.
