Weka normalize columns

Published on 2024-08-22

I have an ARFF file containing 14 numerical columns. I want to perform a normalization on each column separately, that is, modifying the values in each column to (actual_value - min(this_column)) / (max(this_column) - min(this_column)). Hence, all values in a column will be in the range [0, 1]. The min and max values of one column might differ from those of another column.

How can I do this with Weka filters?

Thanks

4 Comments

柒夜笙歌凉 2024-08-29 02:09:02

This can be done using

weka.filters.unsupervised.attribute.Normalize

After applying this filter, all values in each column will be in the range [0, 1].
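What the filter computes can be sketched in plain Java (this sketch does not use Weka itself; the class and method names are purely illustrative, and it mirrors Normalize's default scale of 1.0 and translation of 0.0):

```java
// Min-max scaling of a single numeric column:
// scaled = (actual_value - min) / (max - min), giving values in [0, 1].
public class MinMaxSketch {
    public static double[] minMaxColumn(double[] column) {
        double min = Double.POSITIVE_INFINITY;
        double max = Double.NEGATIVE_INFINITY;
        for (double v : column) {
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        double[] scaled = new double[column.length];
        for (int i = 0; i < column.length; i++) {
            // Guard for a constant column (max == min); mapping it to 0.0 is a
            // choice of this sketch, not necessarily what Weka does.
            scaled[i] = (max == min) ? 0.0 : (column[i] - min) / (max - min);
        }
        return scaled;
    }
}
```

Each of the 14 columns would be scaled independently with its own min and max, which matches how the filter treats each attribute separately.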

染火枫林 2024-08-29 02:09:02

That's right. Just a reminder about the difference between "normalization" and "standardization". What the question describes is "normalization" (min-max scaling), whereas "standardization" assumes a Gaussian distribution and rescales each attribute by its mean and standard deviation. If your data contains outliers, the Normalize filter can distort the distribution, because the min or max may lie much farther out than the rest of the instances.
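For contrast, z-score standardization (what Weka's weka.filters.unsupervised.attribute.Standardize filter performs) can be sketched in plain Java like this; the class and method names are illustrative, and this sketch uses the population standard deviation, which may differ from Weka's exact choice:

```java
// Z-score standardization of a single numeric column:
// standardized = (value - mean) / stdDev, giving the column mean 0 and unit variance.
public class ZScoreSketch {
    public static double[] standardize(double[] column) {
        double sum = 0.0;
        for (double v : column) sum += v;
        double mean = sum / column.length;

        double sqSum = 0.0;
        for (double v : column) sqSum += (v - mean) * (v - mean);
        double stdDev = Math.sqrt(sqSum / column.length); // population form

        double[] out = new double[column.length];
        for (int i = 0; i < column.length; i++) {
            // Guard for a constant column; mapping to 0.0 is this sketch's choice.
            out[i] = (stdDev == 0.0) ? 0.0 : (column[i] - mean) / stdDev;
        }
        return out;
    }
}
```

Unlike min-max scaling, the result is not bounded to [0, 1], but a single outlier no longer compresses all the other values into a narrow band.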

榕城若虚 2024-08-29 02:09:02

In this case, we can use the weka.filters.unsupervised.attribute.Normalize filter to normalize everything, but if we want to normalize only some columns, the following is the best approach.

To apply Normalize to selected columns

The unsupervised.attribute.PartitionedMultiFilter can be used for this task.
You then have to configure its filters and ranges sections as needed.
For example, to normalize only the humidity attribute:

Step 01 :

After adding the PartitionedMultiFilter -> click the filters text box -> choose Normalize from weka.filters.unsupervised.attribute.Normalize -> and edit the Normalize filter as needed (by giving the scale and translation values)


Step 02:

Click the ranges text box -> delete the default range (which is first-last) -> add the column number you want to filter -> click OK -> click Apply

Now the filter will be applied only to the selected (humidity) column.
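For reference, the same per-column setup can also be expressed from the command line, assuming the usual Weka filter conventions (each -F names a sub-filter and each -R the attribute range it applies to, with -i/-o for batch input and output); the jar path, file names, and column index below are placeholders:

```shell
java -cp weka.jar weka.filters.unsupervised.attribute.PartitionedMultiFilter \
    -F "weka.filters.unsupervised.attribute.Normalize" \
    -R 3 \
    -i weather.arff -o weather-normalized.arff
```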

感情洁癖 2024-08-29 02:09:02

Here is a working normalization example with K-Means in Java.

import java.io.BufferedReader;
import java.io.FileReader;

import weka.clusterers.ClusterEvaluation;
import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Normalize;
import weka.filters.unsupervised.attribute.Remove;

final SimpleKMeans kmeans = new SimpleKMeans();

final String[] options = weka.core.Utils
        .splitOptions("-init 0 -max-candidates 100 -periodic-pruning 10000 -min-density 2.0 -t1 -1.25 -t2 -1.0 -N 10 -A \"weka.core.EuclideanDistance -R first-last\" -I 500 -num-slots 1 -S 50");
kmeans.setOptions(options);

kmeans.setSeed(10);
kmeans.setPreserveInstancesOrder(true);
kmeans.setNumClusters(25);
kmeans.setMaxIterations(1000);

final BufferedReader datafile = new BufferedReader(new FileReader("/Users/data.arff"));
Instances data = new Instances(datafile);

//normalize
final Normalize normalizeFilter = new Normalize();
normalizeFilter.setInputFormat(data);
data = Filter.useFilter(data, normalizeFilter);

//remove class column[0] from cluster
data.setClassIndex(0);
final Remove removeFilter = new Remove();
removeFilter.setAttributeIndices("" + (data.classIndex() + 1));
removeFilter.setInputFormat(data);
data = Filter.useFilter(data, removeFilter);

kmeans.buildClusterer(data);

System.out.println(kmeans.toString());

// evaluate clusterer
final ClusterEvaluation eval = new ClusterEvaluation();
eval.setClusterer(kmeans);
eval.evaluateClusterer(data);
System.out.println(eval.clusterResultsToString());

If you have a CSV file, replace the BufferedReader lines above with the DataSource mentioned below:

import weka.core.converters.ConverterUtils.DataSource;

final DataSource source = new DataSource("/Users/data.csv");
final Instances data = source.getDataSet();