pytorch-forecasting :: univariate AssertionError: filters should not remove all entries

I tried to do univariate forecasting with PyTorch Forecasting.

But I got the following error from TimeSeriesDataSet:

AssertionError: filters should not remove entries all entries - check encoder/decoder lengths and lags

I tried googling the error, read the suggestions, and made sure my training_df has a sufficient number of rows (I have plenty: 196). Also, I only have one group_id, which is 1. There is no other group_id, so all 196 rows should be in the same group.

A sample of my dataframe:

Note: all rows have the same group value = 1

```
            PutCall_Ratio_Total  time_idx  group
Date
2006-02-24         11119.140000         0      1
2006-02-25          7436.316667         1      1
2006-02-26          3753.493333         2      1
```

My training_df has a length of 196:

```
len(training_df)
196
```

And here is the TimeSeriesDataSet portion of my code:

```python
from pytorch_forecasting import TimeSeriesDataSet

context_length = 28 * 7
prediction_length = 7

# set up the PyTorch Forecasting TimeSeriesDataSet for the training data
training_data = TimeSeriesDataSet(
    training_df,
    time_idx="time_idx",
    target="PutCall_Ratio_Total",
    group_ids=["group"],
    time_varying_unknown_reals=["PutCall_Ratio_Total"],
    max_encoder_length=context_length,
    max_prediction_length=prediction_length,
)
```

Comments (3)

难得心□动 2025-02-06 07:23:52

After some experimenting, it seems that the training_df length (196) should be greater than or equal to (context_length + prediction_length).

So for the example above, it works once I update context_length to 27 * 7 instead of 28 * 7.

Since 27 * 7 + 7 = 196.
While 28 * 7 + 7 > 196.
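
A minimal sketch of that arithmetic check, assuming the training_df from the question; the assert is only an illustration of this answer's rule, not the library's internal check:

```python
# Sanity-check the window arithmetic before building the TimeSeriesDataSet.
prediction_length = 7
context_length = 27 * 7  # was 28 * 7

# 27*7 + 7 = 196 <= len(training_df), so at least one encoder+decoder window fits
assert context_length + prediction_length <= len(training_df), (
    "encoder + decoder window is longer than the series; "
    "every candidate sample would be filtered out"
)
```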

蓝天 2025-02-06 07:23:52

As for me, the solution was to drop the higher lags, i.e. lags={'target': [7, 30]} instead of lags={'target': [7, 30, 365]}, because some of my time series were too short.
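
A hedged sketch of that fix applied to the dataset from the question (column names and the other arguments are copied from the question's code, and context_length/prediction_length are assumed to already fit the series, e.g. the adjusted values from the first answer; the specific lag values are only this answer's example):

```python
from pytorch_forecasting import TimeSeriesDataSet

# Keep only lags that fit inside the available history:
# a 365-step lag cannot be built from a ~196-row series.
training_data = TimeSeriesDataSet(
    training_df,
    time_idx="time_idx",
    target="PutCall_Ratio_Total",
    group_ids=["group"],
    time_varying_unknown_reals=["PutCall_Ratio_Total"],
    lags={"PutCall_Ratio_Total": [7, 30]},  # dropped the 365 lag
    max_encoder_length=context_length,
    max_prediction_length=prediction_length,
)
```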

王权女流氓 2025-02-06 07:23:52

I found the issue in the min_prediction_length parameter.
Initially I set it to the same value as max_prediction_length, but after changing it to a lower value the model worked fine.
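
A sketch of that change on the question's setup (the other arguments are copied from the question's code, with context_length/prediction_length assumed to already fit the series; the value 1 is just an illustrative choice lower than max_prediction_length):

```python
from pytorch_forecasting import TimeSeriesDataSet

# Allowing shorter decoder windows keeps some samples that a strict
# min_prediction_length == max_prediction_length setting would filter out.
training_data = TimeSeriesDataSet(
    training_df,
    time_idx="time_idx",
    target="PutCall_Ratio_Total",
    group_ids=["group"],
    time_varying_unknown_reals=["PutCall_Ratio_Total"],
    max_encoder_length=context_length,
    max_prediction_length=prediction_length,
    min_prediction_length=1,  # any value lower than max_prediction_length, per this answer
)
```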
