Example of a neural network classifying multi-dimensional features into two sets

Posted 2024-09-25 12:41:46


I'm looking for a good source code example of a supervised neural network which accepts more than two features (unlike most XY-examples) and classifies data into two sets. From what I've read, a Support Vector Machine (SVM) might be a solution?

All the classifying examples I have found are two-dimensional.

I'm trying to distinguish rare events from a number of inputs which are normally stable. Features are key-value pairs, where the value can usually be discretized as a small number. Available training data for the first category is huge, but there are only a few training examples for the second category, if that makes a difference.

Example Training Set

Category A

[2, 1, 0, 1, 4, 3] -> A  
[1, 1, 2, 3, 3, 0] -> A
[0, 0, 1, 3, 2, 0] -> A

Category B

[0, 4, 4, 4, 4, 3] -> B

Classifying Example

[1, 3, 4, 4, 4, 0] -> ??? (probably B)

A confidence rating, e.g. "85% certain of B", would be helpful in setting a threshold for a rare event.

Is a neural network the best solution and are there any .NET libraries with this built-in?
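Not a .NET answer, but the mechanics are easy to sketch in a few lines of plain Python: a logistic-regression classifier trained on the four sample rows above, whose output is exactly the kind of probability/confidence the question asks for. The learning rate and iteration count below are arbitrary choices for this toy data, not tuned values.

```python
import math

# Training data from the question: features -> class (A = 0, B = 1)
X = [
    [2, 1, 0, 1, 4, 3],  # A
    [1, 1, 2, 3, 3, 0],  # A
    [0, 0, 1, 3, 2, 0],  # A
    [0, 4, 4, 4, 4, 3],  # B
]
y = [0, 0, 0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain logistic regression, trained with batch gradient descent.
# Its output is a probability, which doubles as the confidence rating.
w, b, lr = [0.0] * 6, 0.0, 0.1
for _ in range(5000):
    grad_w, grad_b = [0.0] * 6, 0.0
    for xi, yi in zip(X, y):
        err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
        grad_w = [gj + err * xj for gj, xj in zip(grad_w, xi)]
        grad_b += err
    w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
    b -= lr * grad_b / len(X)

def predict_proba(x):
    """Probability that x belongs to class B."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

p = predict_proba([1, 3, 4, 4, 4, 0])
print(f"P(B) = {p:.2f}")
```

With only one example of class B, any probability estimate here is fragile; the point is the shape of the solution, not the numbers.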


Comments (2)

如梦 2024-10-02 12:41:46


In reality, all of these machine learning techniques have their pros and cons.
When using a NN (single-layer perceptron), you need to consider whether you have enough training data: technically speaking, you need to be able to cover all the cells inside the feature dimensions to get a good result.

An SVM, on the other hand, tries to find a border separating your data points, so gaps in regions that are not close to this border are fine.

There are 5-6 classifiers around, +/- boosting, and to be honest it seems that most of the time the type of classifier is chosen subjectively. On the other hand, some people use multiple classifiers and compare the results.

With OpenCV it is easy to plug in a different classifier, so you are on the right track there. I used OpenCV in C++ with NN classifiers for my project, and the results were very good:

Link
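The border-finding behaviour described above can be sketched without any library: a minimal linear SVM trained by sub-gradient descent on the regularized hinge loss, using the question's data with labels A = -1 and B = +1. The learning rate and regularization constant are illustrative assumptions, and this is a toy loop, not OpenCV's implementation.

```python
# Minimal linear SVM: sub-gradient descent on the regularized hinge loss.
# Labels: A -> -1, B -> +1 (data from the question).
X = [
    [2, 1, 0, 1, 4, 3],  # A
    [1, 1, 2, 3, 3, 0],  # A
    [0, 0, 1, 3, 2, 0],  # A
    [0, 4, 4, 4, 4, 3],  # B
]
y = [-1, -1, -1, 1]

w, b = [0.0] * 6, 0.0
lr, lam = 0.01, 0.01  # step size and regularization strength (assumed)
for _ in range(2000):
    for xi, yi in zip(X, y):
        margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
        if margin < 1:  # inside the margin: hinge loss is active
            w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
            b += lr * yi
        else:           # outside the margin: only the regularizer pulls
            w = [wj - lr * lam * wj for wj in w]

def decision(x):
    """Positive side of the border = B, negative side = A."""
    return sum(wj * xj for wj, xj in zip(w, x)) + b

print("B" if decision([1, 3, 4, 4, 4, 0]) > 0 else "A")
```

The gap-tolerance point is visible here: only samples with margin < 1 move the border, so training points far from it contribute nothing.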

无人问我粥可暖 2024-10-02 12:41:46


An SVM is n-dimensional - it's just that the EXAMPLES are usually 2D, since once you go beyond three features the solution no longer fits into a 2D illustration.

It has only two output classes (typically Good and Bad), but it can take as many features as you like. That's why the boundary splitting your two SVM classes is called a 'hyperplane': it exists in multi-dimensional space, with one dimension for each feature.
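Concretely, with one weight per feature the hyperplane is just the set of points where w·x + b = 0, and the sign of w·x + b tells you which side a sample falls on. The weights below are hypothetical, picked by hand purely to illustrate the idea against the question's six-feature samples.

```python
# One weight per feature: the hyperplane is the set of x with w·x + b = 0,
# and sign(w·x + b) tells which side of it a sample lies on.
# These weights are hypothetical, chosen by hand for illustration only.
w = [-1.0, 1.0, 0.5, 0.2, 0.1, -0.3]
b = -4.0

def side(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

for sample in [[2, 1, 0, 1, 4, 3],   # an A row from the question
               [0, 4, 4, 4, 4, 3]]:  # the B row from the question
    label = "B" if side(sample) > 0 else "A"
    print(sample, "->", label)  # prints A for the first row, B for the second
```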
