Generating py.test tests in Python


Question first, then an explanation if you're interested.

In the context of py.test, how do I generate a large set of test functions from a small set of test-function templates?

Something like:

models = [model1, model2, model3]
data_sets = [data1, data2, data3]

def generate_test_learn_parameter_function(model, data):
    def this_test(model, data):
        param = model.learn_parameters(data)
        assert (param - model.param) < 0.1
    return this_test

for model, data in zip(models, data_sets):
    # how can py.test see the results of this function?
    generate_test_learn_parameter_function(model, data)

Explanation:

The code I'm writing takes a model structure, some data, and learns the parameters of the model. So my unit testing consists of a bunch of model structures and pre-generated data sets, and then a set of about 5 machine learning tasks to complete on each structure+data.

So if I hand code this I need one test per model per task. Every time I come up with a new model I need to then copy and paste the 5 tasks, changing which pickled structure+data I'm pointing at. This feels like bad practice to me. Ideally what I'd like is 5 template functions that define each of my 5 tasks and then to just spit out test functions for a list of structures that I specify.

Googling about brings me to either a) factories or b) closures, both of which addle my brain and suggest to me that there must be an easier way, as this problem must be faced regularly by proper programmers. So is there?
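For what it's worth, the closure/factory route those searches point at does work with plain py.test collection: each generated function just needs to be bound to a module-level name starting with test_ so the collector picks it up. A minimal sketch of that approach, assuming the placeholder models and data_sets lists from the snippet above:

def make_learn_test(model, data):
    # The inner test takes no arguments, so py.test won't mistake
    # model/data for fixtures; it closes over this call's pair instead.
    def this_test():
        param = model.learn_parameters(data)
        assert (param - model.param) < 0.1
    return this_test

for i, (model, data) in enumerate(zip(models, data_sets)):
    # Binding to a module-level test_* name is what makes py.test collect it.
    globals()["test_learn_parameters_%d" % i] = make_learn_test(model, data)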


EDIT: So here's how to solve this problem!

def pytest_generate_tests(metafunc):
    if "model" in metafunc.funcargnames:
        models = [model1,model2,model3]
        for model in models:
            metafunc.addcall(funcargs=dict(model=model))

def test_awesome(model):
    assert model == "awesome"

This will apply the test_awesome test to each model in my list of models! Thanks @dfichter!

(NOTE: that assert always passes, btw)
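A caveat for later readers: metafunc.addcall() and metafunc.funcargnames were deprecated and later removed from pytest (addcall was dropped in pytest 4.0). A sketch of the same hook against the current API, with model1/model2/model3 still standing in as placeholders:

def pytest_generate_tests(metafunc):
    # fixturenames replaces funcargnames; parametrize replaces addcall.
    if "model" in metafunc.fixturenames:
        models = [model1, model2, model3]
        metafunc.parametrize("model", models)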


Comments (3)

铁轨上的流浪者 2024-10-23 01:02:40

Good instincts. py.test supports exactly what you're talking about with its pytest_generate_tests() hook. They explain it in the py.test documentation.

假装不在乎 2024-10-23 01:02:40

You could also do this with parametrized fixtures. While hooks are an API for building py.test plugins, parametrized fixtures are a generalized way to make a fixture that yields multiple values and generates additional test cases for each of them.

Plugins are meant for project-wide (or package-wide) features, not test-case-specific ones, whereas parametrized fixtures are exactly what's needed to parametrize a resource for particular test cases.

So your solution could be rewritten as:

import pytest

@pytest.fixture(params=[model1, model2, model3])
def model(request):
    return request.param

def test_awesome(model):
    assert model == "awesome"
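If, as in the original question, each model is tied to its own data set, the fixture's params can carry the pairs, and ids= gives the generated tests readable names. A sketch assuming the question's placeholder model and data objects:

import pytest

# Each param is one (model, data) pair from the question's zipped lists.
@pytest.fixture(
    params=list(zip([model1, model2, model3], [data1, data2, data3])),
    ids=["model1", "model2", "model3"],
)
def model_and_data(request):
    return request.param

def test_learn_parameters(model_and_data):
    model, data = model_and_data
    param = model.learn_parameters(data)
    assert (param - model.param) < 0.1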

为人所爱 2024-10-23 01:02:40

The simplest solution today is to use pytest.mark.parametrize:

import pytest

@pytest.mark.parametrize('model', ["awesome1", "awesome2", "awesome3"])
def test_awesome(model):
    assert model.startswith("awesome")
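parametrize also accepts several argument names with a list of tuples, so the original question's zipped model/data pairs map onto it directly. A sketch, again assuming the question's placeholder objects:

import pytest

# One generated test per (model, data) pair; each shows up separately
# in the py.test report.
@pytest.mark.parametrize('model,data', list(zip(models, data_sets)))
def test_learn_parameters(model, data):
    param = model.learn_parameters(data)
    assert (param - model.param) < 0.1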