How do I make the docker-compose ".env" file take precedence over shell env vars?

Posted 2025-02-12 05:50:26

I would like my docker-compose.yml file to use the ".env" file in the same directory as the "docker-compose.yml" file to set some environment variables, and for those to take precedence over any other env vars set in the shell. Right now I have

$ echo $DB_USER
tommyboy

and in my .env file I have

$ cat .env
DB_NAME=directory_data
DB_USER=myuser
DB_PASS=mypass
DB_SERVICE=postgres
DB_PORT=5432

I have this in my docker-compose.yml file ...

version: '3'

services:

  postgres:
    image: postgres:10.5
    ports:
      - 5105:5432
    environment:
      POSTGRES_DB: directory_data
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: password

  web:
    restart: always
    build: ./web
    ports:           # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
      SERVICE_CREDS_JSON_FILE: '/my-app/credentials.json'
      DB_SERVICE: host.docker.internal
      DB_NAME: directory_data
      DB_USER: ${DB_USER}
      DB_PASS: password
      DB_PORT: 5432
    command: /usr/local/bin/gunicorn directory.wsgi:application --reload -w 2 -b :8000
    volumes:
    - ./web/:/app
    depends_on:
      - postgres 

In my Python 3/Django 3 project, I have this in my application's settings.py file

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ['DB_NAME'],
        'USER': os.environ['DB_USER'],
        'PASSWORD': os.environ['DB_PASS'],
        'HOST': os.environ['DB_SERVICE'],
        'PORT': os.environ['DB_PORT']
    }
}

However, when I run my project using "docker-compose up", I see

maps-web-1       |   File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
maps-web-1       |     connection = Database.connect(**conn_params)
maps-web-1       |   File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
maps-web-1       |     conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
maps-web-1       | psycopg2.OperationalError: FATAL:  role "tommyboy" does not exist

It seems like the Django container is using the shell's env var instead of what is passed in, and I was wondering if there's a way to have the Python/Django container use the ".env" file at the root for its env vars.
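
One way to confirm which value actually reaches the container (a diagnostic sketch, assuming the web service is up) is to print its environment from the host:

$ docker-compose exec web env | grep DB_USER

If the shell value is leaking through, as the error suggests, this prints DB_USER=tommyboy rather than the value from the .env file.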


3 Answers

初相遇 2025-02-19 05:50:26

I thought at first I had misread your question, but I think my original comment was correct. As I mentioned earlier, it is common for your local shell environment to override things in a .env file; this allows you to override settings on the command line. In other words, if you have in your .env file:

DB_USER=tommyboy

And you want to override the value of DB_USER for a single docker-compose up invocation, you can run:

DB_USER=alice docker-compose up

That's why values in your local environment take precedence.
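
Conversely, if you want the .env value to win for a single invocation, one workaround (a sketch, assuming GNU or BSD env) is to strip the variable from the shell environment before calling docker-compose:

$ env -u DB_USER docker-compose up

or remove it for the rest of the session with unset DB_USER.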


When using docker-compose with things that store persistent data -- like Postgres! -- you will occasionally see what seems to be weird behavior when working with environment variables that are used to configure the container. Consider this sequence of events:

  1. We run docker-compose up for the first time, using the values in your .env file.

  2. We confirm that we can connect to the database using the myuser user:

    $ docker-compose exec postgres psql -U myuser directory_data
    psql (10.5 (Debian 10.5-2.pgdg90+1))
    Type "help" for help.
    
    directory_data=#
    
  3. We stop the container by typing CTRL-C.

  4. We start the container with a new value for DB_USER in our
    environment variable:

    DB_USER=tommyboy docker-compose up
    
  5. We try connecting using the tommyboy username...

    $ docker-compose exec postgres psql -U tommyboy directory_data
    psql: FATAL:  role "tommyboy" does not exist
    

    ...and it fails.

What's going on here?

The POSTGRES_* environment variables you use to configure Postgres are
only relevant if the database hasn't already been initialized. When you
stop and restart a service with docker-compose, it doesn't create a new
container; it just restarts the existing one.

That means that in the above sequence of events, the database was
originally created with the myuser username, and starting it the
second time when setting DB_USER in our environment didn't change
anything.

The solution here is to use the docker-compose down command, which
deletes the containers...

docker-compose down

And then create a new one with the updated environment variable:

DB_USER=tommyboy docker-compose up

Now we can access the database as expected:

$ docker-compose exec postgres psql -U tommyboy directory_data
psql (10.5 (Debian 10.5-2.pgdg90+1))
Type "help" for help.

directory_data=#
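
Note that this works here because the compose file does not declare a volume for the postgres service, so the data lives inside the container and is deleted along with it. If you were persisting the data in a named volume (an assumption; not the case in the file shown), you would also need to remove the volume before the POSTGRES_* variables are read again:

docker-compose down -v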
寂寞美少年 2025-02-19 05:50:26

Values in the shell take precedence over those specified in the .env file.

If you set TAG to a different value in your shell, the substitution in image uses that instead:

export TAG=v2.0

docker compose convert

version: '3'
services:
  web:
    image: 'webapp:v2.0'
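
For the substitution above to demonstrate anything, the Compose file has to reference the variable; a minimal setup following the linked docs (v1.5 is the docs' example default):

$ cat .env
TAG=v1.5

$ cat docker-compose.yml
version: '3'
services:
  web:
    image: "webapp:${TAG}"

With nothing set in the shell, docker compose convert resolves the image to webapp:v1.5; after export TAG=v2.0 it resolves to webapp:v2.0.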

Please refer to the link for more details: https://docs.docker.com/compose/environment-variables/

养猫人 2025-02-19 05:50:26

I cannot provide a better answer than the excellent one provided by @larsks, but please let me try giving you some ideas.

As @larsks also pointed out, any shell environment variable will take precedence over those defined in your docker-compose .env file.

This fact is stated as well in the docker-compose documentation when talking about environment variables, emphasis mine:

You can set default values for environment variables using a .env file,
which Compose automatically looks for in project directory (parent folder
of your Compose file). Values set in the shell environment override those
set in the .env file.

This means that, for example, providing a shell variable like this:

DB_USER=tommyboy docker-compose up

will definitely override any variable you may have defined in your .env file.
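
A quick way to see which values Compose will actually use, after the .env file and the shell environment have been merged, is to print the fully resolved file:

$ docker-compose config

The output shows the final environment: block for each service, so you can check which DB_USER won before starting any containers.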

One possible solution to the problem is to use the .env file directly, instead of the environment variables.

Searching for information about your problem, I came across this great article.

In addition to explaining your problem, it mentions, as a note at the end of the post, an alternative approach based on the use of the django-environ package.

I was unaware of the library, but it seems it provides an alternative way of configuring your application, reading your configuration directly from a configuration file:

import environ
import os

env = environ.Env(
    # set casting, default value
    DEBUG=(bool, False)
)

# Set the project base directory
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Take environment variables from .env file
environ.Env.read_env(os.path.join(BASE_DIR, '.env'))

# False if not in os.environ because of casting above
DEBUG = env('DEBUG')

# Raises Django's ImproperlyConfigured
# exception if SECRET_KEY not in os.environ
SECRET_KEY = env('SECRET_KEY')

# Parse database connection url strings
# like psql://user:pass@127.0.0.1:8458/db
DATABASES = {
    # read os.environ['DATABASE_URL'] and raises
    # ImproperlyConfigured exception if not found
    #
    # The db() method is an alias for db_url().
    'default': env.db(),

    # read os.environ['SQLITE_URL']
    'extra': env.db_url(
        'SQLITE_URL',
        default='sqlite:////tmp/my-tmp-sqlite.db'
    )
}

#...

If required, it seems you could also mix in variables defined in the environment.

Probably python-dotenv would allow you to follow a similar approach.
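
A minimal sketch with python-dotenv (the override flag is the part relevant here, since it makes the .env values win over variables already set in the shell):

import os
from dotenv import load_dotenv

# override=True: values read from .env replace existing environment variables
load_dotenv(override=True)

DB_USER = os.environ['DB_USER']  # now the .env value, not the shell's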

Of course, it is worth mentioning that if you decide to use this approach you need to make the .env file accessible to your docker-compose web service and its associated container, perhaps by mounting an additional volume or by copying the .env file into the web directory you already mounted as a volume.
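
For example, mounting it read-only next to the application code (a sketch; /app matches the volume target already used in your compose file):

web:
  volumes:
    - ./web/:/app
    - ./.env:/app/.env:ro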

You still need to cope with the PostgreSQL container configuration, but in a certain way this could help you achieve the objective you pointed out in your comment, because you could use the same .env file (certainly, a duplicated one).

According to your comment as well, another possible solution could be using Docker secrets.

It works in a similar way to how secrets work in Kubernetes, for example, as explained in the official documentation:

In terms of Docker Swarm services, a secret is a blob of data, such
as a password, SSH private key, SSL certificate, or another piece
of data that should not be transmitted over a network or stored
unencrypted in a Dockerfile or in your application’s source code.
You can use Docker secrets to centrally manage this data and
securely transmit it to only those containers that need access to
it. Secrets are encrypted during transit and at rest in a Docker
swarm. A given secret is only accessible to those services which
have been granted explicit access to it, and only while those
service tasks are running.

In a nutshell, it provides a convenient way for storing sensitive data across Docker Swarm services.

It is important to understand that Docker secrets is only available when using Docker Swarm mode.

Docker Swarm is an orchestrator service offered by Docker, similar again to Kubernetes, with their differences of course.

Assuming you are running Docker in Swarm mode, you could deploy your compose services in a way similar to the following, based on the official docker-compose docker secrets example:

version: '3'

services:

  postgres:
    image: postgres:10.5
    ports:
      - 5105:5432
    environment:
      POSTGRES_DB: directory_data
      POSTGRES_USER_FILE: /run/secrets/db_user
      POSTGRES_PASSWORD: password
    secrets:
       - db_user
  web:
    restart: always
    build: ./web
    ports:           # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
      SERVICE_CREDS_JSON_FILE: '/my-app/credentials.json'
      DB_SERVICE: host.docker.internal
      DB_NAME: directory_data
      DB_USER_FILE: /run/secrets/db_user
      DB_PASS: password
      DB_PORT: 5432
    command: /usr/local/bin/gunicorn directory.wsgi:application --reload -w 2 -b :8000
    volumes:
    - ./web/:/app
    depends_on:
      - postgres
    secrets:
       - db_user

secrets:
   db_user:
     external: true

Please, note the following.

We are defining a secret named db_user in a secrets section.

This secret could be based on a file or computed from standard input, for example:

echo "tommyboy" | docker secret create db_user -

The secret should be exposed to every container in which it is required.

In the case of Postgres, as explained in the section Docker secrets in the official Postgres docker image description, you can use Docker secrets to define the value of POSTGRES_INITDB_ARGS, POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB: the name of the variable for the secret is the same as the normal ones with the suffix _FILE.

In our use case we defined:

POSTGRES_USER_FILE: /run/secrets/db_user

In the case of the Django container, this functionality is not supported out of the box but, due to the fact that you can edit your settings.py as you need, as suggested for example in this simple but great article, you can use a helper function to read the required value in your settings.py file, something like:

import os

def get_secret(key, default):
    value = os.getenv(key, default)
    if os.path.isfile(value):
        with open(value) as f:
            return f.read()
    return value

DB_USER = get_secret("DB_USER_FILE", "")

# Use the value to configure your database connection parameters
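
A sketch of wiring the helper into the DATABASES setting from your settings.py (only USER changes; the other keys stay as you had them):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ['DB_NAME'],
        'USER': get_secret('DB_USER_FILE', ''),
        'PASSWORD': os.environ['DB_PASS'],
        'HOST': os.environ['DB_SERVICE'],
        'PORT': os.environ['DB_PORT']
    }
}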

Probably this makes more sense for storing the database password, but it could be a valid solution for the database user as well.

Please consider reviewing this excellent article too.

Based on the fact that the problem seems to be caused by the change in your environment variables in the Django container, one last thing you could try is the following.

The only requirement for your settings.py file is to declare different global variables with your configuration. But nothing says how you have to read them: in fact, I have presented different approaches in this answer and, after all, it is Python, so you can use the language to fit your needs.

In addition, it is important to understand that, unless you change any variables in your Dockerfile, when the Postgres and Django containers are created they will both receive exactly the same .env file with exactly the same configuration.

With these two things in mind, you could try creating, in your settings.py file, a local copy of the environment provided to the Django container, and using it across restarts or across whatever is causing the variables to change.

In your settings.py (please, forgive me for the simplicity of the code, I hope you get the idea):

import os

env_vars = ['DB_NAME', 'DB_USER', 'DB_PASS', 'DB_SERVICE', 'DB_PORT']

CACHE_FILE = '/tmp/.env'

# The first time the container starts, snapshot the relevant variables.
if not os.path.exists(CACHE_FILE):
    with open(CACHE_FILE, 'w') as f:
        for env_var in env_vars:
            f.write('%s=%s\n' % (env_var, os.environ[env_var]))

# On every start, read the snapshot back instead of the live environment.
cached_env_vars_dict = {}
with open(CACHE_FILE) as f:
    for line in f:
        key, _, value = line.rstrip('\n').partition('=')
        cached_env_vars_dict[key] = value

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': cached_env_vars_dict['DB_NAME'],
        'USER': cached_env_vars_dict['DB_USER'],
        'PASSWORD': cached_env_vars_dict['DB_PASS'],
        'HOST': cached_env_vars_dict['DB_SERVICE'],
        'PORT': cached_env_vars_dict['DB_PORT']
    }

    #...
}

I think any of the aforementioned approaches is better, but this one will certainly ensure environment variable consistency across changes in the environment and container restarts.
