Modeling XNOR in Pyomo
I am writing a nurse-patient matching algorithm, and I want to incorporate something into the objective function that measures how well the new patient-nurse assignments match the patient-nurse assignments from the previous day. I've introduced a new binary variable model.Matches_Previous, which should be equal to the XNOR of model.Prev_Assignments and model.Assignments (if both are 0 or both are 1, then model.Matches_Previous is 1; otherwise it is 0).
# Sets
model.PatientIDs = {0, 1, 2}
model.NurseIDs = {'a', 'b'}

# Indexed parameter (indexed by (patient, nurse))
model.Prev_Assignments = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1, (2, 'a'): 1, (2, 'b'): 0}

# Indexed variables (indexed by (patient, nurse))
model.Assignments = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1, (2, 'a'): 0, (2, 'b'): 1}
model.Matches_Previous = {(0, 'a'): 1, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 1, (2, 'a'): 0, (2, 'b'): 0}
Currently, I'm trying to implement this with the below constraint, which I thought was the proper way to translate XNOR into a linear expression:
def matches_previous(model, patient, nurse):
    return model.Matches_Previous[patient, nurse] == (
        model.Assignments[patient, nurse] * model.Prev_Assignments[patient, nurse]
        + (1 - model.Assignments[patient, nurse]) * (1 - model.Prev_Assignments[patient, nurse])
    )
In the objective function, I include (-model.Matches_Previous) along with the other components (since I am minimizing the objective, this would maximize model.Matches_Previous).
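For concreteness, a minimal sketch of the objective term described above (other_terms is a hypothetical placeholder for the other components, which the question does not show):

import pyomo.environ as pyo

model.Obj = pyo.Objective(
    expr=other_terms - pyo.summation(model.Matches_Previous),
    sense=pyo.minimize,
)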
However, this isn't giving the behavior I want. Note that there are other aspects of the model which would motivate it to produce assignments that are different from the previous assignments (e.g. changing patient workloads), but I want it to match the previous assignments as best as possible.
Any idea how to implement this better? I looked into Pyomo.GDP and LogicalConstraints, but I haven't been able to get this to work, and the documentation for these modeling extensions is lacking.
2 Answers
We always can write

    z = xnor(x, y)

as

    z >= 1 - x - y
    z >= x + y - 1
    z <= 1 - x + y
    z <= 1 + x - y

where x, y, z are binary variables. This is rigorous (it can be verified by enumerating the four cases of (x, y)) and tight (we can even relax z to be continuous between 0 and 1).

Sometimes we can drop the <= or >= inequalities because of how the objective (or constraint) works. In your case the objective rewards Matches_Previous, so the solver already pushes z upward and only the <= inequalities are strictly needed.
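A minimal Pyomo sketch of these four inequalities, wired to the names from the question (the rule and constraint names here are illustrative, not from the original answer):

import pyomo.environ as pyo

# z = Matches_Previous, x = Assignments, y = Prev_Assignments
def xnor_lb_both_zero(m, p, n):
    # if both are 0, force z >= 1
    return m.Matches_Previous[p, n] >= 1 - m.Assignments[p, n] - m.Prev_Assignments[p, n]

def xnor_lb_both_one(m, p, n):
    # if both are 1, force z >= 1
    return m.Matches_Previous[p, n] >= m.Assignments[p, n] + m.Prev_Assignments[p, n] - 1

def xnor_ub_mismatch_a(m, p, n):
    # if Assignments = 1 and Prev_Assignments = 0, force z <= 0
    return m.Matches_Previous[p, n] <= 1 - m.Assignments[p, n] + m.Prev_Assignments[p, n]

def xnor_ub_mismatch_b(m, p, n):
    # if Assignments = 0 and Prev_Assignments = 1, force z <= 0
    return m.Matches_Previous[p, n] <= 1 + m.Assignments[p, n] - m.Prev_Assignments[p, n]

model.XnorLB0 = pyo.Constraint(model.PatientIDs, model.NurseIDs, rule=xnor_lb_both_zero)
model.XnorLB1 = pyo.Constraint(model.PatientIDs, model.NurseIDs, rule=xnor_lb_both_one)
model.XnorUBa = pyo.Constraint(model.PatientIDs, model.NurseIDs, rule=xnor_ub_mismatch_a)
model.XnorUBb = pyo.Constraint(model.PatientIDs, model.NurseIDs, rule=xnor_ub_mismatch_b)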
So here is a strategy you might try. Note that your attempt above makes the problem non-linear by multiplying variables together, which is probably NOT desired.

You can code the XNOR variable in linear fashion, assuming there is some "pressure" on it... Let me explain.

The intent is to make P a penalty that can be indexed (optionally) and summed up, and to introduce 2 constraints that force P to be at least the mismatch between the new assignment and the previous one. Then, in your objective function, use P to induce a penalty, not a reward, and it should work... something like the sketch below.
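A minimal sketch of this penalty approach, assuming the sets, parameter, and assignment variable from the question (P matches the answer's description; the constraint and objective names are illustrative):

import pyomo.environ as pyo

# With minimization pressure from the objective, these two constraints make
# P[p, n] equal |Assignments[p, n] - Prev_Assignments[p, n]| at the optimum:
# 1 on a mismatch, 0 on a match.
model.P = pyo.Var(model.PatientIDs, model.NurseIDs, domain=pyo.NonNegativeReals)

def penalty_pos(m, p, n):
    return m.P[p, n] >= m.Assignments[p, n] - m.Prev_Assignments[p, n]

def penalty_neg(m, p, n):
    return m.P[p, n] >= m.Prev_Assignments[p, n] - m.Assignments[p, n]

model.PenaltyPos = pyo.Constraint(model.PatientIDs, model.NurseIDs, rule=penalty_pos)
model.PenaltyNeg = pyo.Constraint(model.PatientIDs, model.NurseIDs, rule=penalty_neg)

# Add the summed penalty to whatever else the objective minimizes.
model.Obj = pyo.Objective(expr=pyo.summation(model.P), sense=pyo.minimize)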