Optimizing transportation cost for a large dataset using OMPR

Published 2025-01-12 18:12:10


I am solving a transportation optimization problem given a set of constraints.
The following are the three key data sets that I have.

#demand file
demand - has demand (DEMAND) across 4821 demand points (DPP) at sale points (D)

head(demand)
                       D        PP DEMAND                             DPP
1 ADILABAD (V) - T:11001  OPC:PACK 131.00 ADILABAD (V) - T:11001:OPC:PACK
2 ADILABAD (V) - T:13003  OPC:PACK 235.00 ADILABAD (V) - T:13003:OPC:PACK
3  ADILABAD (V) - T:2006  PPC:PACK  30.00  ADILABAD (V) - T:2006:PPC:PACK
4  ADILABAD (V) - T:4001  OPC:PACK  30.00  ADILABAD (V) - T:4001:OPC:PACK
5  ADILABAD (V) - T:7006 OPC:NPACK  34.84 ADILABAD (V) - T:7006:OPC:NPACK
6         AHMEDABAD:1001  OPC:PACK 442.10         AHMEDABAD:1001:OPC:PACK

#Capacity file
cc - has capacity constraints (MinP, MaxP) across 1823 sources (SOURCE)

head(cc,4)
                     SOURCE MinP  MaxP
1 CHILAMKUR:P:OPC:NPACK:0:R  900 10806
2 CHILAMKUR:P:OPC:NPACK:0:W  900 10806
3  CHILAMKUR:P:OPC:PACK:0:R 5628 67536
4  CHILAMKUR:P:OPC:PACK:0:W 5628 67536

#LandingCost file
LCMat - This is a matrix with the landing cost to deliver the product to each demand location (DPP) from a given source (SOURCE). It is a 1823 x 4821 matrix. Since landing costs from a given source do not exist for all locations, I have replaced the missing entries with a huge cost (10^6) for such DPPs.

I am using the OMPR package in R to optimize shipping material to meet the demand.
This is potentially a very simple transportation problem, but it is taking a lot of time. I am using a machine with 16 GB of RAM.

The following is the code. Could anyone guide me on what I should do better?

a = Sys.time()
grid = expand.grid(i = 1:nrow(LCMat),j = 1:ncol(LCMat))
grid_solve = grid[which(LCMat < 10^6),]
grid_notsolve = grid[which(LCMat >= 10^6),]

model <- MILPModel() %>% 
  add_variable(x[grid$i, grid$j],lb = 0, type = "continuous") %>%
  add_constraint(x[grid_notsolve$i, grid_notsolve$j] == 0) %>%
  add_constraint(sum_over(x[i,j], i = 1:nrow(LCMat)) <= demand$DEMAND[j], j = 1:ncol(LCMat)) %>%
  add_constraint(sum_over(x[i,j], j = 1:ncol(LCMat)) <= cc$MaxP[i], i = 1:nrow(LCMat)) %>%
  add_constraint(sum_over(x[i,j], j = 1:ncol(LCMat)) >= cc$MinP[i], i = 1:nrow(LCMat)) %>%
  set_objective(sum_expr(LCMat[grid_solve$i,grid_solve$j]*x[grid_solve$i,grid_solve$j]),"min")

solution = model %>% solve_model(with_ROI(solver = "glpk", verbose = TRUE))
Sys.time() - a
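
One indexing assumption in the code above is worth making explicit: `grid[which(LCMat < 10^6), ]` only selects the right (i, j) pairs because expand.grid() varies its first argument fastest, which matches R's column-major linear indexing of matrices. A minimal base-R check, using a hypothetical 2 x 3 toy matrix LC standing in for LCMat:

```r
# Toy cost matrix: columns are (1, 1e6), (3, 1e6), (5, 6);
# 1e6 plays the role of the "no route" sentinel cost.
LC <- matrix(c(1, 1e6, 3, 1e6, 5, 6), nrow = 2)
grid <- expand.grid(i = 1:nrow(LC), j = 1:ncol(LC))
grid_keep <- grid[which(LC < 1e6), ]   # which() returns column-major linear indices
# every kept (i, j) pair points at a real (non-sentinel) cost
stopifnot(all(LC[as.matrix(grid_keep)] < 1e6))
nrow(grid_keep)
#> [1] 4
```

If the grid were built with j varying fastest instead, the same which() indices would pick the wrong pairs, so this ordering is load-bearing.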


Answer by 删除→记忆, 2025-01-19 18:12:10


Two options to potentially speed things up:

  1. Make sure you use the latest CRAN versions of ompr and listcomp.
  2. Try to use filter conditions to only create the variables that are relevant to the model, instead of adding all nrow(LCMat) * ncol(LCMat) variables and then fixing (potentially) a lot of them to 0. See the code below for an example. Depending on how sparse your problem is, that could help as well.

The following code takes a sparse matrix (i.e., a matrix with many 0 elements, or 10^6 elements in your case) and only generates the x[i,j] variables that have an entry in sparse_matrix greater than 0. It hopefully illustrates how to use that feature and apply it to your case.

library(ompr)
sparse_matrix <- matrix(
  c(
    1, 0, 0, 1,
    0, 1, 0, 1,
    0, 0, 0, 1,
    1, 0, 0, 0
  ), byrow = TRUE, ncol = 4
)
is_connected <- function(i, j) {
  sparse_matrix[i, j] > 0
}
n <- nrow(sparse_matrix)
m <- ncol(sparse_matrix)
model <- MIPModel() |> 
  add_variable(x[i, j], i = 1:n, j = 1:m, is_connected(i, j)) |> 
  set_objective(sum_over(x[i, j], i = 1:n, j = 1:m, is_connected(i, j))) |> 
  add_constraint(sum_over(x[i, j], i = 1:n, is_connected(i, j)) <= 1, j = 1:m)

variable_keys(model)
#> [1] "x[1,1]" "x[1,4]" "x[2,2]" "x[2,4]" "x[3,4]" "x[4,1]"

extract_constraints(model)
#> $matrix
#> 3 x 6 sparse Matrix of class "dgCMatrix"
#>                 
#> [1,] 1 . . . . 1
#> [2,] . . 1 . . .
#> [3,] . 1 . 1 1 .
#> 
#> $sense
#> [1] "<=" "<=" "<="
#> 
#> $rhs
#> [1] 1 1 1

Created on 2022-03-12 by the reprex package (v2.0.1)

Answer by 听你说爱我, 2025-01-19 18:12:10
  • Both OMPR and GLPK are slow for large models.
  • You are duplicating sum_over(x[i,j], j = 1:ncol(LCMat)). That leads to more nonzero elements than needed. I usually try to prevent that (even at the expense of more variables).
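
The duplicated row sum can be avoided with one extra variable per source: introduce a total-shipment variable (hypothetically named s[i] here) so that each x[i, j] enters a single balance row, and impose the MinP/MaxP limits on s[i] instead. A minimal ompr sketch under toy sizes, not the asker's actual model:

```r
library(ompr)

n <- 4                 # toy number of sources (1823 in the real problem)
m <- 5                 # toy number of destinations (4821 in the real problem)
MinP <- rep(1, n)
MaxP <- rep(3, n)

model <- MIPModel() |>
  add_variable(x[i, j], i = 1:n, j = 1:m, lb = 0) |>
  # s[i] carries the total shipped from source i
  add_variable(s[i], i = 1:n, lb = 0) |>
  # each x[i, j] now appears in exactly one row-sum constraint ...
  add_constraint(sum_over(x[i, j], j = 1:m) == s[i], i = 1:n) |>
  # ... and the capacity limits act on s[i] alone
  add_constraint(s[i] >= MinP[i], i = 1:n) |>
  add_constraint(s[i] <= MaxP[i], i = 1:n)
```

Compared with writing the row sum twice, the constraint matrix has roughly n * m fewer nonzeros at the cost of n extra variables, which is the trade-off mentioned above.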