Julia JuMP: making sure a nonlinear objective function has the right signature so that autodifferentiation works?

Posted 2025-01-16 17:15:26

So I wrote a minimal example to show what I'm trying to do. Basically, I want to solve an optimization problem with multiple variables. When I try to do this in JuMP, I run into the issue that my function obj cannot accept a ForwardDiff object.

I looked at this related question, which seemed to point to the function signature: Restricting function signatures while using ForwardDiff in Julia. I restricted the signature in my obj function, and for good measure in my sub-function as well, but I still get the error

 LoadError: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(my_fun)},Float64},Float64,2})
Closest candidates are:
  Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
  Float64(::T) where T<:Number at boot.jl:715
  Float64(::Int8) at float.jl:60

This still does not work. I feel like the bulk of the code is correct; there is just some odd type issue I have to clear up so that autodifferentiation works.

Any suggestions?

using JuMP
using Ipopt
using LinearAlgebra

function obj(x::Array{<:Real,1})
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T<:Real}
    eye = Matrix{Float64}(I, 2, 2)
    eye[2,2] = var
    return eye
end

m = Model(Ipopt.Optimizer)

my_fun(x...) = obj(collect(x))

@variable(m, 0<=x[1:2]<=2.0*pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))

optimize!(m)

# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))

Comments (2)

笨笨の傻瓜 2025-01-23 17:15:26

Use this instead:

function obj(x::Vector{T}) where {T}
    println(x)
    x1 = x[1]
    x2 = x[2]
    # build the identity with the element type of the input, so ForwardDiff.Dual values fit
    eye = Matrix{T}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T}
    eye = Matrix{T}(I, 2, 2)
    eye[2,2] = var
    return eye
end

Essentially, anywhere you see Float64, replace it with the type of the incoming argument.
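
For reference, here is a minimal sketch (not part of the original answer; it assumes the corrected obj and mat_fun above are defined and that the ForwardDiff package is available) showing that ForwardDiff can now push its Dual numbers through obj directly, which is what JuMP's autodiff = true registration does internally:

using ForwardDiff
using LinearAlgebra

# With Matrix{T} in place of Matrix{Float64}, the gradient evaluates cleanly.
g = ForwardDiff.gradient(obj, [1.0, 2.0])
println(g)  # for tr(I - kron(A(x1), A(x2))) this should print roughly [-3.0, -2.0]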

老娘不死你永远是小三 2025-01-23 17:15:26

I found the problem:
in my mat_fun, the return type had to be "Real" for the values to propagate through. Before it was Float64, which was not consistent with the fact that, with autodifferentiation, everything has to stay a Real (in practice a ForwardDiff.Dual). Even though a Float64 is clearly a Real, the subtyping does not help here, because a Matrix{Float64} cannot store Dual values; i.e. you have to make sure that everything returned and passed in can hold any Real type.

using JuMP
using Ipopt
using LinearAlgebra

function obj(x::AbstractVector{T}) where {T<:Real}
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    #println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T<:Real}
    # an abstract Real element type can store the ForwardDiff.Dual values
    eye = zeros(Real, (2, 2))
    eye[2,2] = var
    return eye
end

m = Model(Ipopt.Optimizer)

my_fun(x...) = obj(collect(x))

@variable(m, 0<=x[1:2]<=2.0*pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))

optimize!(m)

# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))
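
As a side note (not from the original answer), the MethodError can be reproduced outside of JuMP, which makes the role of the containers' element types clearer. A minimal sketch, assuming the ForwardDiff package is available:

using ForwardDiff
using LinearAlgebra

d = ForwardDiff.Dual(1.0, 1.0)     # the kind of number JuMP passes in when autodiff = true

A = Matrix{Float64}(I, 2, 2)
# A[2,2] = d                       # MethodError: no method matching Float64(::ForwardDiff.Dual{...})

B = zeros(Real, (2, 2))            # this answer's fix: an abstract Real element type can hold a Dual
B[2,2] = d

C = Matrix{typeof(d)}(I, 2, 2)     # the other answer's approach: build the matrix with the input's own type
C[2,2] = d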
