OperationalError: database is locked

Posted on 2024-09-08 17:53:04 · 183 characters · 5 views · 0 comments

I have made some repetitive operations in my application (testing it), and suddenly I’m getting a weird error:

OperationalError: database is locked

I've restarted the server, but the error persists. What can it be all about?

酷炫老祖宗 2024-09-15 17:53:04

From Django's documentation:

SQLite is meant to be a lightweight database, and thus can't support a high
level of concurrency. OperationalError: database is locked errors indicate
that your application is experiencing more concurrency than sqlite can
handle in default configuration. This error means that one thread or process has
an exclusive lock on the database connection and another thread timed out
waiting for the lock to be released.

Python's SQLite wrapper has
a default timeout value that determines how long the second thread is allowed to
wait on the lock before it times out and raises the OperationalError: database is locked error.

If you're getting this error, you can solve it by:

  • Switching to another database backend. At a certain point SQLite becomes
    too "lite" for real-world applications, and these sorts of concurrency
    errors indicate you've reached that point.

  • Rewriting your code to reduce concurrency and ensure that database
    transactions are short-lived.

  • Increasing the default timeout value by setting the timeout database
    option:

    "OPTIONS": {
        # ...
        "timeout": 20,
        # ...
    }
    

    This will make SQLite wait a bit longer before throwing "database is locked"
    errors; it won't really do anything to solve them.
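
For orientation, here is a minimal sketch of where that OPTIONS block sits inside DATABASES in a Django settings.py, assuming the default sqlite3 backend and the usual db.sqlite3 path (BASE_DIR as defined at the top of settings.py):

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": BASE_DIR / "db.sqlite3",
        "OPTIONS": {
            # seconds a connection waits on a lock before raising
            # "OperationalError: database is locked"
            "timeout": 20,
        },
    },
}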

甜心小果奶 2024-09-15 17:53:04

In my case, it was because I had opened the database in DB Browser for SQLite. When I closed it there, the problem was gone.

木緿 2024-09-15 17:53:04

I slightly disagree with the accepted answer which, by quoting this doc, implicitly links OP's problem (Database is locked) to this:

Switching to another database backend. At a certain point SQLite becomes too "lite" for real-world applications, and these sorts of concurrency errors indicate you've reached that point.

This is a bit "too easy" to incriminate SQlite for this problem (which is very powerful when correctly used; it's not only a toy for small databases, fun fact: An SQLite database is limited in size to 140 terabytes).

Unless you have a very busy server with thousands of connections at the same second, the reason for this Database is locked error is probably more a bad use of the API, than a problem inherent to SQlite which would be "too light". Here are more informations about Implementation Limits for SQLite.


Now the solution:

I had the same problem when I was using two scripts using the same database at the same time:

  • one was accessing the DB with write operations
  • the other was accessing the DB in read-only

Solution: always do cursor.close() as soon as possible after executing a query (even a read-only one).

Here are more details.
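
To make that concrete, here is a minimal sketch using the standard library sqlite3 module; the database file and table are hypothetical. The point is to close the cursor (and connection) as soon as the query result has been consumed:

import sqlite3

conn = sqlite3.connect("app.db")                 # hypothetical database file
cur = conn.cursor()
try:
    cur.execute("SELECT COUNT(*) FROM users")    # hypothetical read-only query
    (count,) = cur.fetchone()
    print(count)
finally:
    cur.close()    # resets the statement so its shared read lock can be released
    conn.close()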

往昔成烟 2024-09-15 17:53:04

The practical reason for this is often that the python or django shells have opened a request to the DB and it wasn't closed properly; killing your terminal access often frees it up. I had this error on running command line tests today.

Edit: I get periodic upvotes on this. If you'd like to kill access without rebooting the terminal, then from commandline you can do:

from django import db
db.connections.close_all()

我的奇迹 2024-09-15 17:53:04

As others have said, there is another process that is using the SQLite file and has not closed the connection. If you are using Linux, you can see which processes are using the file (for example db.sqlite3) with the fuser command, as follows:

$ sudo fuser -v db.sqlite3
                     USER        PID ACCESS COMMAND
/path/to/db.sqlite3:
                     user        955 F....  apache2

If you want to stop the processes to release the lock, use fuser -k which sends the KILL signal to all processes accessing the file:

sudo fuser -k db.sqlite3

Note that this is dangerous as it might stop the web server process in a production server.

Thanks to @cz-game for pointing out fuser!

爱你是孤单的心事 2024-09-15 17:53:04

I encountered this error message in a situation that is not (clearly) addressed by the help info linked in patrick's answer.

When I used transaction.atomic() to wrap a call to FooModel.objects.get_or_create() and called that code simultaneously from two different threads, only one thread would succeed, while the other would get the "database is locked" error. Changing the timeout database option had no effect on the behavior.

I think this is due to the fact that sqlite cannot handle multiple simultaneous writers, so the application must serialize writes on its own.

I solved the problem by using a threading.RLock object instead of transaction.atomic() when my Django app is running with a sqlite backend. That's not entirely equivalent, so you may need to do something else in your application.

Here's my code that runs FooModel.objects.get_or_create simultaneously from two different threads, in case it is helpful:

from concurrent.futures import ThreadPoolExecutor

import configurations
configurations.setup()

from django.db import transaction
from submissions.models import ExerciseCollectionSubmission

def makeSubmission(user_id):
    try:
        with transaction.atomic():
            e, _ = ExerciseCollectionSubmission.objects.get_or_create(
                student_id=user_id, exercise_collection_id=172)
    except Exception as e:
        return f'failed: {e}'

    e.delete()

    return 'success'


futures = []

with ThreadPoolExecutor(max_workers=2) as executor:
    futures.append(executor.submit(makeSubmission, 296))
    futures.append(executor.submit(makeSubmission, 297))

for future in futures:
    print(future.result())
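
For reference, here is a sketch of the workaround described above: when running on the SQLite backend, serialize the writes in-process with a lock instead of wrapping them in transaction.atomic(). The helper name is mine, the model import matches the snippet above, and this only serializes threads within a single process:

import threading

from submissions.models import ExerciseCollectionSubmission

# One module-level lock shared by all threads in this process.
_write_lock = threading.RLock()

def make_submission_locked(user_id):
    # Hold the lock for the duration of the write instead of relying on
    # transaction.atomic(), which raised "database is locked" under
    # concurrent threads with the sqlite backend.
    with _write_lock:
        e, _ = ExerciseCollectionSubmission.objects.get_or_create(
            student_id=user_id, exercise_collection_id=172)
    return e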

软糖 2024-09-15 17:53:04

I got this error when using a database file saved under WSL (\\wsl$ ...) and running a Windows Python interpreter.

You can either avoid saving the database in your WSL tree or use a Linux-based interpreter in your distro.

昔梦 2024-09-15 17:53:04

I was facing this issue in my Flask app because I had opened the database in SQLite Browser and forgotten to write the changes.

If you have also made any changes in SQLite Browser, then click on "Write Changes" and everything will be fine.

今天小雨转甜 2024-09-15 17:53:04

For me, it got resolved once I closed the Django shell, which had been opened with python manage.py shell.

从此见与不见 2024-09-15 17:53:04

This can also happen if you are connected to your SQLite DB via a DB browser plugin in PyCharm. Disconnecting will solve the problem.

你是我的挚爱i 2024-09-15 17:53:04

I got the same error! One of the reasons was that the DB connection was not closed.
Therefore, check for unclosed DB connections. Also, check whether you have committed the DB before closing the connection.
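
To illustrate that check, here is a minimal sketch using the standard library sqlite3 module (the file and table names are made up); the connection is always closed, and the transaction is committed before that happens:

import sqlite3
from contextlib import closing

# closing() guarantees conn.close() even if the query raises;
# "with conn:" only commits (or rolls back) the transaction, it does not close.
with closing(sqlite3.connect("app.db")) as conn:                         # hypothetical file
    with conn:                                                           # commits on success
        conn.execute("UPDATE users SET active = 1 WHERE id = ?", (42,))  # hypothetical table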

鸵鸟症 2024-09-15 17:53:04

I had a similar error, right after the first instantiation of Django (v3.0.3). None of the recommendations here worked apart from:

  • deleting the db.sqlite3 file (and losing the data in it, if any),
  • python manage.py makemigrations
  • python manage.py migrate

Btw, if you want to just test PostgreSQL:

docker run --rm --name django-postgres \
  -e POSTGRES_PASSWORD=mypassword \
  -e PGPORT=5432 \
  -e POSTGRES_DB=myproject \
  -p 5432:5432 \
  postgres:9.6.17-alpine

Change the settings.py to add this DATABASES:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myproject',
        'USER': 'postgres',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}

...and add database adapter:

pip install psycopg2-binary

Then the usual:

python manage.py makemigrations
python manage.py migrate
停顿的约定 2024-09-15 17:53:04

Check whether your database is open in another DB browser.

If it is open in another application, close that application and run the program again.

怂人 2024-09-15 17:53:04

Just close (stop) and open (start) the database. This solved my problem.

來不及說愛妳 2024-09-15 17:53:04

I found this worked for my needs (thread locking). YMMV:

conn = sqlite3.connect(database, timeout=10)

https://docs.python.org/3/library/sqlite3.html

sqlite3.connect(database[, timeout, detect_types, isolation_level, check_same_thread, factory, cached_statements, uri])

When a database is accessed by multiple connections, and one of the processes modifies the database, the SQLite database is locked until that transaction is committed. The timeout parameter specifies how long the connection should wait for the lock to go away until raising an exception. The default for the timeout parameter is 5.0 (five seconds).

笑红尘 2024-09-15 17:53:04

In my case, I had added a new record and saved it manually, and then tried to add a new record again through the shell; this time it worked perfectly. Check it out:

In [7]: from main.models import Flight

In [8]: f = Flight(origin="Florida", destination="Alaska", duration=10)

In [9]: f.save()

In [10]: Flight.objects.all() 
Out[10]: <QuerySet [<Flight: Flight object (1)>, <Flight: Flight object (2)>, <Flight: Flight object (3)>, <Flight: Flight object (4)>]>

黒涩兲箜 2024-09-15 17:53:04

Actually, I faced the same problem: when I used transaction.atomic() with select_for_update(), I got the error message "OperationalError: database is locked".

After many tries, much searching, and reading the Django docs, I found that the problem comes from SQLite itself: it does not support the select_for_update method, as the Django docs say. Kindly have a look at the following URL and read it in depth:

https://docs.djangoproject.com/en/dev/ref/databases/#database-is-locked-errors

When I moved to MySQL, everything went fine.

As the Django docs also say, "database is locked" may happen when a database timeout occurs; they recommend changing the database timeout by setting the following option:

'OPTIONS': {
    # ...
    'timeout': 20,
    # ...
}

Finally, I recommend using MySQL/PostgreSQL even if you are working in a development environment.

I hope this is helpful for you.

浪推晚风 2024-09-15 17:53:04

In my case, I had not saved a database operation I performed within the SQLite Browser. Saving it solved the issue.

憧憬巴黎街头的黎明 2024-09-15 17:53:04

A very unusual scenario, which happened to me.

There was infinite recursion, which kept creating the objects.

More specifically, using DRF, I was overriding the create method in a view, and I did

def create(self, request, *args, **kwargs):
    ....
    ....

    return self.create(request, *args, **kwargs)

猫九 2024-09-15 17:53:04

A lot of answers are already available here; even so, I want to share my case, as it may help someone.

I had opened a connection in my Python API to update values, and I would close the connection only after receiving the server response. What I had done was open another connection to perform some other operation on the server as well, before closing the connection in the Python API.

伊面 2024-09-15 17:53:04

If you get this error while using manage.py shell, one possible reason is that you have a development server running (manage.py runserver) which is locking the database. Stoping the server while using the shell has always fixed the problem for me.

旧情勿念 2024-09-15 17:53:04

I got this error when attempting to create a new table in SQLite but the session object contained uncommitted (though flushed) changes.

Make sure to either:

  1. Commit the session(s) before creating a new table
  2. Close all sessions and perform the table creation in a new connection
  3. ...
滥情稳全场 2024-09-15 17:53:04

@Shilp Thapak's answer is correct: the reason for the error is that you did not write your manual changes to the data in your DB Browser for SQLite before running your application.

If you didn't write the changes in whatever SQL client you are using, you can still create the engine but

engine.connect()

will throw the operational error about the database being locked.

You can check whether your engine can connect by checking for the existence of a rollback journal. In its default mode, the rollback journal is created at the start of a transaction and deleted at its end.

It exists in the same directory as your database, has the same name as the database file, and has the suffix "-journal" appended.

That is, unless the journal mode has been changed, which you can check under Journal mode in the Edit Pragmas panel in DB Browser for SQLite.

You can check the existence of the temp file like so:

import os

if os.path.isfile('your-database.sqlite-journal'):
    print("The database is locked. Please write your changes in your SQL client before proceeding.\n")

Read more about temporary files here.

So no need to close the server or DB Browser for SQLite for that sake. In fact, as long as all the changes are written, you can have several clients connected to the database simultaneously and still run your application at the same time.

天荒地未老 2024-09-15 17:53:04

For me, I had tried to deploy the database to the host using Gunicorn, and I used only one worker, so the database was locked. Add this to gunicorn_config.py:

workers=3

浅沫记忆 2024-09-15 17:53:04

I met the very same problem on a production Django server.
We fixed the problem by shrinking the SQLite DB using a DB tool such as 'DB Browser'.

甜味超标? 2024-09-15 17:53:04

My solution to this: I had forgotten to use conn.commit() after executing the query.
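
For completeness, a sketch of that fix with the standard sqlite3 module (file and table names are hypothetical); without the commit, the open write transaction can keep holding the lock:

import sqlite3

conn = sqlite3.connect("app.db")                                    # hypothetical file
cur = conn.cursor()
cur.execute("INSERT INTO logs (message) VALUES (?)", ("hello",))    # hypothetical table
conn.commit()   # finish the implicit write transaction so the lock is released
cur.close()
conn.close()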

紫南 2024-09-15 17:53:04

The error showed up when I was inserting values into a table from an Excel file. My Excel file was opened when I was doing this. Everything worked fine after I closed the Excel file. So maybe check if another program has accessed your file. Figured it could help others in a similar situation.

征﹌骨岁月お 2024-09-15 17:53:04

I just solved the issue with a hard-to-find documented flag that enables WAL - Write-Ahead Logging.

This is pretty standard and the default in other databases, and it provides much better concurrency, as readers do not block writers and a writer does not block readers. Reading and writing can proceed concurrently. See https://www.sqlite.org/wal.html

So just make sure to change the journal_mode by executing the following:

PRAGMA journal_mode=WAL
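
If you prefer to set this from Python rather than the sqlite3 shell, here is a minimal sketch (hypothetical database path). The WAL setting is persistent, so it only needs to be applied once per database file:

import sqlite3

conn = sqlite3.connect("db.sqlite3")                    # hypothetical path
conn.execute("PRAGMA journal_mode=WAL")                 # switch the journal mode
print(conn.execute("PRAGMA journal_mode").fetchone())   # expected: ('wal',)
conn.close()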

混吃等死 2024-09-15 17:53:04

UPDATE: Django version 2.1.7

I got this error (sqlite3.OperationalError: database is locked) using pytest with Django.

Solution:

If we are using the @pytest.mark.django_db decorator, what it does is create an in-memory DB for testing.

It is named file:memorydb_default?mode=memory&cache=shared. We can get this name with:

from django.db import connection
db_path = connection.settings_dict['NAME']

To access this database and also edit it, connect to it:

with sqlite3.connect(db_path, uri=True) as conn:
    c = conn.cursor()

Use uri=True so that database is interpreted as a URI rather than a plain file path.

To avoid the error, activate transactions in the decorator:

@pytest.mark.django_db(transaction=True)

Final function:

from django.db import connection

@pytest.mark.django_db(transaction=True)
def test_mytest():
    db_path = connection.settings_dict['NAME']
    with sqlite3.connect(db_path, uri=True) as conn:
        c = conn.cursor()
        c.execute('my amazing query')
        conn.commit()
    assert ... == ....

最单纯的乌龟 2024-09-15 17:53:04

I just needed to add alias sqlite='sqlite3' to my ~/.zshrc

I then deleted the partially failed creation of the virtualenv in ~/.pyenv/versions/new-virtualenv, reran pyenv virtualenv <name>, and it worked swimmingly.
