Curiously, BERT's inference time increases after converting to ONNX

Posted on 01-22 01:21

I followed the standard process for converting a model to ONNX, but inference time actually increased. I have checked that the ONNX output is equal to BERT's. Other models, such as an MLP, normally get faster inference through ONNX, so why does this happen with BERT?

import time

import numpy as np
import onnxruntime
import torch
import torch.nn as nn
from transformers import BertModel

class qwe(nn.Module):
    def __init__(self, path):
        super(qwe, self).__init__()
        self.bert = BertModel.from_pretrained(path)
        self.ll = nn.Linear(768, 10)

    def forward(self, input_ids, attention_mask, token_type_ids):
        # [1] is the pooled [CLS] output of BertModel
        out = self.bert(input_ids, attention_mask, token_type_ids)[1]
        out = self.ll(out)
        return out

# Dummy inputs: a single sequence of length 512
input_ids = torch.from_numpy(np.random.randint(1, 500, (1, 512))).to(torch.long)
attention_mask = torch.from_numpy(np.ones((1, 512))).to(torch.long)
token_type_ids = torch.from_numpy(np.zeros((1, 512))).to(torch.long)

inputs = (input_ids, attention_mask, token_type_ids)
# np.long is deprecated (removed in NumPy 1.24); np.int64 is the equivalent dtype
input_dict = {
    'input_ids': np.random.randint(1, 500, (1, 512), dtype=np.int64),
    'attention_mask': np.ones((1, 512), dtype=np.int64),
    'token_type_ids': np.zeros((1, 512), dtype=np.int64),
}

model = qwe(path)   # path: directory of the pretrained BERT checkpoint
model.eval()
torch.onnx.export(
    model,
    f=ONNX,          # ONNX: destination file path for the exported model
    args=inputs,
    opset_version=12,
    export_params=True,
    input_names=['input_ids', 'attention_mask', 'token_type_ids'],
    output_names=['output'],
    dynamic_axes={'input_ids': [0], 'attention_mask': [0], 'token_type_ids': [0]},
)
session = onnxruntime.InferenceSession(ONNX, providers=['CPUExecutionProvider'])
with torch.no_grad():
    start_time = time.time()
    output = session.run(None, input_dict)
    print('onnx  : {}'.format(time.time() - start_time))
    st = time.time()
    res = model(torch.tensor(input_dict['input_ids'], dtype=torch.long),
                torch.tensor(input_dict['attention_mask'], dtype=torch.long),
                torch.tensor(input_dict['token_type_ids'], dtype=torch.long))
    print('torch : {}'.format(time.time() - st))

bert : 0.45925045013427734
onnx  : 0.5993204116821289

bert_output
tensor([[ 1.1763,  0.2471,  1.0245,  0.1245, -0.5310, -0.6783, -0.1928, -0.1242,
     -0.0139,  0.9107]])
onnx_output
[array([[ 1.1763551 ,  0.24712242,  1.0244701 ,  0.12455264, -0.5310014 ,
    -0.67832315, -0.19275273, -0.12419312, -0.01386394,  0.91070646]],
  dtype=float32)]

Comments (1)

手心的温暖 · 2025-01-29 01:21:00

I have figured it out. My environment was Python 3.6, in which onnxruntime does not perform well. With the same code, ONNX gets roughly a 50% inference speed-up under Python 3.7.
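
To tie a slow result to a specific environment like this, it helps to record exactly which interpreter and wheels are being benchmarked. A small sketch using only public torch/onnxruntime attributes:

import sys

import onnxruntime
import torch

# Record the exact builds under test; an onnxruntime wheel built for an
# older interpreter (e.g. Python 3.6) may perform differently.
print('python      :', sys.version.split()[0])
print('torch       :', torch.__version__)
print('onnxruntime :', onnxruntime.__version__)
print('ort device  :', onnxruntime.get_device())            # e.g. 'CPU'
print('providers   :', onnxruntime.get_available_providers())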
