Saving a PyML.classifiers.multi.OneAgainstRest(SVM()) object?

I'm using PyML to construct a multiclass linear support vector machine (SVM). After training the SVM, I would like to be able to save the classifier, so that on subsequent runs I can use the classifier right away without retraining. Unfortunately, the .save() function is not implemented for that classifier, and attempting to pickle it (both with standard pickle and with cPickle) yields the following error message:

pickle.PicklingError: Can't pickle <type 'PySwigObject'>: it's not found as __builtin__.PySwigObject

Does anyone know of a way around this or of an alternative library without this problem? Thanks.
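
For concreteness, here is a minimal sketch that reproduces the failure; the toy four-class dataset and file names are purely illustrative.

#!/usr/bin/python
import pickle

from PyML.classifiers import multi
from PyML.classifiers.svm import SVM
from PyML.containers.vectorDatasets import SparseDataSet

# A tiny four-class dataset in sparse "label index:value" format.
train_data = """
0 1:1.0 2:0.0 3:0.0 4:0.0
0 1:0.9 2:0.0 3:0.0 4:0.0
1 1:0.0 2:1.0 3:0.0 4:0.0
1 1:0.0 2:0.8 3:0.0 4:0.0
2 1:0.0 2:0.0 3:1.0 4:0.0
2 1:0.0 2:0.0 3:0.9 4:0.0
3 1:0.0 2:0.0 3:0.0 4:1.0
3 1:0.0 2:0.0 3:0.0 4:0.9
"""
file("pickle_demo.data", "w").write(train_data.lstrip())

# Train the one-against-rest SVM, then try to pickle it.
mc = multi.OneAgainstRest(SVM())
mc.train(SparseDataSet("pickle_demo.data"))

# Raises PicklingError: the trained classifier holds SWIG-wrapped
# objects (PySwigObject) that the pickle machinery cannot serialize.
pickle.dump(mc, open("classifier.pkl", "wb"))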

Edit/Update
I am now training and attempting to save the classifier with the following code:

mc = multi.OneAgainstRest(SVM())
mc.train(dataset_pyml, saveSpace=False)
for i, classifier in enumerate(mc.classifiers):
    filename = os.path.join(prefix, labels[i] + ".svm")
    classifier.save(filename)

Notice that I am now saving with the PyML save mechanism rather than with pickling, and that I have passed "saveSpace=False" to the training function. However, I am still getting an error:

ValueError: in order to save a dataset you need to train as: s.train(data, saveSpace = False)

However, I am passing saveSpace=False... so, how do I save the classifier(s)?

P.S.
The project I am using this in is pyimgattr, in case you would like a complete testable example... the program is run with "./pyimgattr.py train"... that will get you this error. Also, a note on version information:

[michaelsafyan@codemage /Volumes/Storage/classes/cse559/pyimgattr]$ python
Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) 
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import PyML
>>> print PyML.__version__
0.7.0

2 Answers

戒ㄋ 2024-09-05 11:55:01

In multi.py, line 96 calls "self.classifiers[i].train(datai)" without passing "**args", so if you call "mc.train(data, saveSpace=False)", the saveSpace argument gets lost. This is why you get an error message when you try to save the classifiers inside your multiclass classifier individually. But if you change this line to pass all arguments through, you can save each classifier individually.
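
In isolation, the change is this single line inside the per-class loop (excerpted from the patched class below):

# multi.py, OneAgainstRest.train, around line 96
# before: extra keyword arguments are silently dropped
self.classifiers[i].train(datai)
# after: forward them, so that saveSpace=False reaches each binary SVM
self.classifiers[i].train(datai, **args)

The full script below defines a patched OneAgainstRestFixed class with that change and demonstrates saving each binary SVM and rebuilding the multiclass classifier from the saved models: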

#!/usr/bin/python

import numpy

from PyML.utils import misc
from PyML.evaluators import assess
from PyML.classifiers.svm import SVM, loadSVM
from PyML.containers.labels import oneAgainstRest
from PyML.classifiers.baseClassifiers import Classifier
from PyML.containers.vectorDatasets import SparseDataSet
from PyML.classifiers.composite import CompositeClassifier

class OneAgainstRestFixed(CompositeClassifier) :

    '''A one-against-the-rest multi-class classifier'''

    def train(self, data, **args) :
        '''train k classifiers'''

        Classifier.train(self, data, **args)

        numClasses = self.labels.numClasses
        if numClasses <= 2:
            raise ValueError, 'Not a multi class problem'

        self.classifiers = [self.classifier.__class__(self.classifier)
                            for i in range(numClasses)]

        for i in range(numClasses) :
            # make a copy of the data; this is done in case the classifier modifies the data
            datai = data.__class__(data, deepcopy = self.classifier.deepcopy)
            datai =  oneAgainstRest(datai, data.labels.classLabels[i])

            self.classifiers[i].train(datai, **args)

        self.log.trainingTime = self.getTrainingTime()

    def classify(self, data, i):

        r = numpy.zeros(self.labels.numClasses, numpy.float_)
        for j in range(self.labels.numClasses) :
            r[j] = self.classifiers[j].decisionFunc(data, i)

        return numpy.argmax(r), numpy.max(r)

    def preproject(self, data) :

        for i in range(self.labels.numClasses) :
            self.classifiers[i].preproject(data)

    test = assess.test

train_data = """
0 1:1.0 2:0.0 3:0.0 4:0.0
0 1:0.9 2:0.0 3:0.0 4:0.0
1 1:0.0 2:1.0 3:0.0 4:0.0
1 1:0.0 2:0.8 3:0.0 4:0.0
2 1:0.0 2:0.0 3:1.0 4:0.0
2 1:0.0 2:0.0 3:0.9 4:0.0
3 1:0.0 2:0.0 3:0.0 4:1.0
3 1:0.0 2:0.0 3:0.0 4:0.9
"""
file("foo_train.data", "w").write(train_data.lstrip())

test_data = """
0 1:1.1 2:0.0 3:0.0 4:0.0
1 1:0.0 2:1.2 3:0.0 4:0.0
2 1:0.0 2:0.0 3:0.6 4:0.0
3 1:0.0 2:0.0 3:0.0 4:1.4
"""
file("foo_test.data", "w").write(test_data.lstrip())

# Train the patched one-against-rest classifier; saveSpace=False is now
# forwarded to each underlying binary SVM.
train = SparseDataSet("foo_train.data")
mc = OneAgainstRestFixed(SVM())
mc.train(train, saveSpace=False)

# Classify the test set with the freshly trained classifier.
test = SparseDataSet("foo_test.data")
print [mc.classify(test, i) for i in range(4)]

# Save each binary SVM to its own model file.
for i, classifier in enumerate(mc.classifiers):
    classifier.save("foo.model.%d" % i)

# Reload the binary SVMs from disk.
classifiers = []
for i in range(4):
    classifiers.append(loadSVM("foo.model.%d" % i))

# Rebuild the multiclass wrapper around the reloaded SVMs: restore the
# label information, plug the classifiers back in, then re-classify.
mcnew = OneAgainstRestFixed(SVM())
mcnew.labels = misc.Container()
mcnew.labels.addAttributes(test.labels, ['numClasses', 'classLabels'])
mcnew.classifiers = classifiers
print [mcnew.classify(test, i) for i in range(4)]
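
If the round trip works, the second printed list should match the first. Note that loadSVM restores only the individual binary SVMs; the OneAgainstRest wrapper itself is not serialized, which is why the labels container (numClasses, classLabels) has to be rebuilt by hand before the reloaded classifiers can be used.
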
我们的影子 2024-09-05 11:55:01

Get a newer version of PyML. Since version 0.7.4, it is possible to save the OneAgainstRest classifier (with .save() and .load()); prior to that version, saving/loading the classifier is non-trivial and error-prone.
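
With the newer API, saving and reloading would look roughly like the sketch below. This is only a sketch: the .save()/.load() method names come from this answer, while the file name, dataset, and exact signatures are assumptions that should be checked against the PyML 0.7.4 documentation.

from PyML.classifiers import multi
from PyML.classifiers.svm import SVM
from PyML.containers.vectorDatasets import SparseDataSet

# Train as in the question (dataset file name is illustrative).
data = SparseDataSet("foo_train.data")
mc = multi.OneAgainstRest(SVM())
mc.train(data, saveSpace=False)

# PyML >= 0.7.4: save the whole one-against-rest classifier directly
# (method name per this answer; file name is illustrative).
mc.save("one_against_rest.model")

# In a later run, reload instead of retraining (assumed usage; check the
# 0.7.4 docs for the exact load call).
mc2 = multi.OneAgainstRest(SVM())
mc2.load("one_against_rest.model")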
