养猫人


养猫人 2025-02-20 11:32:42


If you wish, you can do it with a formula: ISTEXT returns an array of TRUE/FALSE over A2:C2, the double unary (--) coerces those to 1/0, and SUM adds them up, giving the count of text cells. In older Excel versions, enter it as an array formula (Ctrl+Shift+Enter).

=SUM(--ISTEXT(A2:C2))


How to count rows containing only text values in a Power Query table

养猫人 2025-02-20 01:11:37


This way you will replace all the elements that have the max value:

import numpy as np

# get the max value of the array
max_value = np.max(r2)
# all elements equal to the max are replaced with the first value
r2[r2 == max_value] = r2[0][0]
# put the max value in the first position
r2[0][0] = max_value
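A quick self-contained run of the snippet above, with a made-up 2x3 array (the values are illustrative only). Note that if several elements share the maximum, all of them are overwritten with the old first value, as the comments say:

```python
import numpy as np

# Made-up array for illustration
r2 = np.array([[3, 9, 1],
               [9, 2, 5]])

# get the max value of the array
max_value = np.max(r2)
# every element holding the max value is replaced with the first value
r2[r2 == max_value] = r2[0][0]
# put the max value in the first position
r2[0][0] = max_value

print(r2)
# [[9 3 1]
#  [3 2 5]]
```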

Find the largest element and swap it with the first one in Python

养猫人 2025-02-19 20:12:44


The issue for me was that I hadn't provided a value for the IndexFormat property (in the ElasticsearchSinkOptions object). I had instead put it in the endpoint, as you should do when you insert data through REST. All in all, the code below solved the issue for me:

var jsonFormatter = new CompactJsonFormatter();

var loggerConfig = new LoggerConfiguration()
  .Enrich.FromLogContext()
  .WriteTo.Map("Name", "**error**", (name, writeTo) =>
  {
    var currentYear = DateTime.Today.Year;
    // 'calendar' was left implicit in the original; the invariant culture's calendar works here
    var calendar = CultureInfo.InvariantCulture.Calendar;
    var currentWeek = calendar.GetWeekOfYear(DateTime.Now, 
                                CalendarWeekRule.FirstDay, 
                                DayOfWeek.Monday);
    writeTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("<opensearch endpoint>"))
    {
      CustomFormatter = jsonFormatter,
      TypeName = "_doc",
      IndexFormat = $"my-index-{currentYear}-{currentWeek}",
      MinimumLogEventLevel = LogEventLevel.Information,
      EmitEventFailure = EmitEventFailureHandling.RaiseCallback | 
                          EmitEventFailureHandling.ThrowException,
      FailureCallback = e =>
        Console.WriteLine(
          "An error occured in Serilog ElasticSearch sink: " +
          
quot;{e.Exception.Message} | {e.Exception.InnerException?.Message}")
    });
  });
Log.Logger = loggerConfig.CreateLogger();

Of course, you also need to set up OpenSearch correctly so that it can auto-apply policies to your index, etc.

Setting up Serilog for Amazon OpenSearch

养猫人 2025-02-19 05:50:26


I cannot provide a better answer than the excellent one provided by @larsks, but please let me try to give you some ideas.

As @larsks also pointed out, any shell environment variable will take precedence over those defined in your docker-compose .env file.

This fact is stated as well in the docker-compose documentation when talking about environment variables, emphasis mine:

You can set default values for environment variables using a .env file,
which Compose automatically looks for in the project directory (parent folder
of your Compose file). Values set in the shell environment override those
set in the .env file.

This means that, for example, providing a shell variable like this:

DB_USER=tommyboy docker-compose up

will definitely override any variable you may have defined in your .env file.

One possible solution to the problem is to try using the .env file directly, instead of the environment variables.

Searching for information about your problem I came across this great article.

Among other things, in addition to explaining your problem too, it mentions, as a note at the end of the post, an alternative approach based on the use of the django-environ package.

I was unaware of the library, but it seems it provides an alternative way for configuring your application reading your configuration directly from a configuration file:

import environ
import os

env = environ.Env(
    # set casting, default value
    DEBUG=(bool, False)
)

# Set the project base directory
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Take environment variables from .env file
environ.Env.read_env(os.path.join(BASE_DIR, '.env'))

# False if not in os.environ because of casting above
DEBUG = env('DEBUG')

# Raises Django's ImproperlyConfigured
# exception if SECRET_KEY not in os.environ
SECRET_KEY = env('SECRET_KEY')

# Parse database connection url strings
# like psql://user:[email protected]:8458/db
DATABASES = {
    # read os.environ['DATABASE_URL'] and raises
    # ImproperlyConfigured exception if not found
    #
    # The db() method is an alias for db_url().
    'default': env.db(),

    # read os.environ['SQLITE_URL']
    'extra': env.db_url(
        'SQLITE_URL',
        default='sqlite:////tmp/my-tmp-sqlite.db'
    )
}

#...

If required, it seems you could mix the variables defined in the environment as well.

Probably python-dotenv would allow you to follow a similar approach.

Of course, it is worth mentioning that if you decide to use this approach you need to make the .env file accessible to your docker-compose web service and its associated container, perhaps by mounting an additional volume or by copying the .env file into the web directory you already mounted as a volume.

You still need to cope with the PostgreSQL container configuration, but in a certain way it could help you achieve the objective you pointed out in your comment because you could use the same .env file (certainly, a duplicated one).

According to your comment as well, another possible solution could be using Docker secrets.

In a similar way to how secrets work in Kubernetes, for example, as explained in the official documentation:

In terms of Docker Swarm services, a secret is a blob of data, such
as a password, SSH private key, SSL certificate, or another piece
of data that should not be transmitted over a network or stored
unencrypted in a Dockerfile or in your application’s source code.
You can use Docker secrets to centrally manage this data and
securely transmit it to only those containers that need access to
it. Secrets are encrypted during transit and at rest in a Docker
swarm. A given secret is only accessible to those services which
have been granted explicit access to it, and only while those
service tasks are running.

In a nutshell, it provides a convenient way for storing sensitive data across Docker Swarm services.

It is important to understand that Docker secrets is only available when using Docker Swarm mode.

Docker Swarm is an orchestration service offered by Docker, again similar to Kubernetes, with its differences of course.

Assuming you are running Docker in Swarm mode, you could deploy your compose services in a way similar to the following, based on the official docker-compose docker secrets example:

version: '3'

services:

  postgres:
    image: postgres:10.5
    ports:
      - 5105:5432
    environment:
      POSTGRES_DB: directory_data
      POSTGRES_USER_FILE: /run/secrets/db_user
      POSTGRES_PASSWORD: password
    secrets:
       - db_user
  web:
    restart: always
    build: ./web
    ports:           # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
      SERVICE_CREDS_JSON_FILE: '/my-app/credentials.json'
      DB_SERVICE: host.docker.internal
      DB_NAME: directory_data
      DB_USER_FILE: /run/secrets/db_user
      DB_PASS: password
      DB_PORT: 5432
    command: /usr/local/bin/gunicorn directory.wsgi:application --reload -w 2 -b :8000
    volumes:
    - ./web/:/app
    depends_on:
      - postgres
    secrets:
       - db_user

secrets:
   db_user:
     external: true

Please, note the following.

We are defining a secret named db_user in a secrets section.

This secret could be based on a file or computed from standard in, for example:

echo "tommyboy" | docker secret create db_user -

The secret should be exposed to every container in which it is required.

In the case of Postgres, as explained in the section Docker secrets in the official Postgres docker image description, you can use Docker secrets to define the value of POSTGRES_INITDB_ARGS, POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB: the name of the variable for the secret is the same as the normal ones with the suffix _FILE.

In our use case we defined:

POSTGRES_USER_FILE: /run/secrets/db_user

In the case of the Django container this functionality is not supported out of the box but, since you can edit your settings.py as you need to, as suggested for example in this simple but great article, you can use a helper function to read the required value in your settings.py file, something like:

import os

def get_secret(key, default):
    value = os.getenv(key, default)
    if os.path.isfile(value):
        with open(value) as f:
            # secret files usually end with a newline, so strip it
            return f.read().strip()
    return value

DB_USER = get_secret("DB_USER_FILE", "")

# Use the value to configure your database connection parameters
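A quick usage sketch, redefining the helper so the snippet is self-contained; the mounted secret (normally /run/secrets/db_user) is simulated here with a temp file:

```python
import os
import tempfile

def get_secret(key, default):
    # If the env var points at an existing file (a mounted Docker secret),
    # read the secret from the file; otherwise use the env var value itself.
    value = os.getenv(key, default)
    if os.path.isfile(value):
        with open(value) as f:
            return f.read().strip()
    return value

# Simulate a mounted secret such as /run/secrets/db_user
with tempfile.NamedTemporaryFile('w', delete=False) as tf:
    tf.write('tommyboy\n')
    secret_path = tf.name

os.environ['DB_USER_FILE'] = secret_path
print(get_secret('DB_USER_FILE', ''))  # tommyboy
```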

This would probably make more sense for storing the database password, but it could be a valid solution for the database user as well.

Please consider reviewing this excellent article too.

Based on the fact that the problem seems to be caused by a change in the environment variables seen by the Django container, one last thing you could try is the following.

The only requirement for your settings.py file is to declare different global variables with your configuration. But it says nothing about how to read them: in fact, I have presented different approaches in this answer and, after all, it is Python, so you can use the language to fit your needs.

In addition, it is important to understand that, unless you change any variables in your Dockerfile, when both the Postgres and Django containers are created they will receive exactly the same .env file with exactly the same configuration.

With these two things in mind, you could try creating a local copy of the provided environment in the Django container from your settings.py file and using it between restarts, or between whatever is causing the variables to change.

In your settings.py (please, forgive me for the simplicity of the code, I hope you get the idea):

import os

env_vars = ['DB_NAME', 'DB_USER', 'DB_PASS', 'DB_SERVICE', 'DB_PORT']

# Cache the environment the first time the container starts
if not os.path.exists('/tmp/.env'):
    with open('/tmp/.env', 'w') as f:
        for env_var in env_vars:
            f.write(f'{env_var}={os.environ[env_var]}\n')

# On every start, read the cached copy back into a dict
with open('/tmp/.env') as f:
    cached_env_vars_dict = dict(
        line.rstrip('\n').split('=', 1) for line in f if line.strip()
    )

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': cached_env_vars_dict['DB_NAME'],
        'USER': cached_env_vars_dict['DB_USER'],
        'PASSWORD': cached_env_vars_dict['DB_PASS'],
        'HOST': cached_env_vars_dict['DB_SERVICE'],
        'PORT': cached_env_vars_dict['DB_PORT']
    }

    #...
}

I think any of the aforementioned approaches is better, but this will certainly ensure environment-variable consistency across changes in the environment and container restarts.

How to make the docker-compose ".env" file take precedence over shell env vars?

养猫人 2025-02-19 04:40:37


I understand that you want to get the order details count by date:

orders.GroupBy(x => x.date.Date)
      .Select(x => new { date = x.Key, total = x.Sum(y => y.OrderDetails.Sum(od => CommonUtils.ComputeQuantity(od))) })

How can I improve the speed of this EF code?

养猫人 2025-02-18 23:00:08


To check whether the email already exists in the DB, follow the process below:

    public class InputModelValidator : AbstractValidator<InputModel>
    {
        private readonly EditContext _editContext;

        public InputModelValidator(EditContext editContext)
        {
            _editContext = editContext;
            RuleFor(e => e.FirstName).NotEmpty().WithMessage("First name is required.");
            RuleFor(e => e.LastName).NotEmpty().WithMessage("Last name is required.");
            RuleFor(e => e.Email).NotEmpty().WithMessage("Email is required.");
            RuleFor(e => e.Email).EmailAddress().WithMessage("Email is not valid.")
                .Must(IsEmailExist).WithMessage("{PropertyName} already exists.");
        }

        private bool IsEmailExist(string email)
        {
            // Any() checks for existence without materializing the entity
            return _editContext.userInfo.Any(em => em.EmailId == email);
        }
    }

FluentValidation: how to check whether an email already exists

养猫人 2025-02-18 07:31:07


It works for me after clearing the application cache

php artisan optimize:clear

Why doesn't the Laravel Vite directive work in my project?

养猫人 2025-02-17 20:04:41


If I were in this situation, I would modify my design so that X falls outside of A and B, because that would be more efficient. Every time you switch fragments, say from A to B, A enters the paused state and B enters the resumed state. A change of state requires the component to act on its widgets, which consumes processing power. If X is outside, no processing is required because it is not re-rendered. If you still want to implement things this way, then make the data persistent, not the entire fragment. When a fragment is rendered on screen, its children must also be rendered; there is no other option. You cannot render a block on screen and expect some of its children to already have been rendered before the parent renders. This is why you'll see onCreate passes a Bundle param: the param contains all the data that needs retention, and that's why things can be rendered the same. It's a good idea to separate data from architecture. Hope this helps.

Create a fragment and reuse it

养猫人 2025-02-16 22:13:00


That is the right syntax. Which version of SingleStoreDB are you using?

singlestore [test]> create table y (col2 datetime default now());
Query OK, 0 rows affected (1.691 sec)

Edit: I believe this requires at least version 7.0 of SingleStoreDB. Older versions only support default now() on timestamp columns, not datetime.

How to add a default value for a datetime data type in MemSQL

养猫人 2025-02-16 21:04:41


For anyone looking for a really simple ES6 solution to copy, paste and adopt:

const dateToString = d => `${d.getFullYear()}-${('00' + (d.getMonth() + 1)).slice(-2)}-${('00' + d.getDate()).slice(-2)}` 

// how to use:
const myDate = new Date(Date.parse('04 Dec 1995 00:12:00 GMT'))
console.log(dateToString(myDate)) // 1995-12-04

How do I format a date in JavaScript?

养猫人 2025-02-16 17:56:11


Install the module:

pip install flask-wtf

Python module installation issue with Flask-WTF

养猫人 2025-02-16 11:19:49


Thanks @granier for posting the solution. It helped me to fix the same issue. Here is a slightly different solution for the next one to stumble upon this.

  1. Run granier's solution until you have run sudo gitlab-ctl restart on
    your Gitlab server.
  2. SSH into your gitlab-runner's system and run the following two commands based on
    Gitlab's docs.

openssl s_client -showcerts -connect gitlab.example.com:443
-servername gitlab.example.com < /dev/null 2>/dev/null | openssl x509 -outform PEM > /etc/gitlab-runner/certs/gitlab.example.com.crt

sudo gitlab-runner register --tls-ca-file=/etc/gitlab-runner/certs/gitlab.example.com.crt

In step 1, a new certificate is created on the Gitlab server. In step 2, you download that cert into the specified dir using the first command and then use it in the second command to register your runner.

Note that mygitlab-site.com and gitlab.example.com need to be changed to your own URL.

How to fix the error "certificate relies on legacy Common Name field, use SANs instead" during GitLab Runner registration?

养猫人 2025-02-16 04:10:06


NX has its own style-loader, which is part of the config that you are merging.

To resolve this, you will have to remove that loader from the NX config. I did it manually by looping through all of the module.rules and ignoring whichever I don't want. So to fix your issue, you would do it this way:

// webpack-merge and sass are assumed to be installed
const { merge } = require('webpack-merge');
const sass = require('sass');

module.exports = (config, context) => {
  const conf = merge(config, {
    module: {
      rules: [
        {
          test: /\.sass$/i,
          use: [
            'style-loader',
            {
              loader: 'css-loader',
              options: {
                modules: {
                  localIdentName: '[local]__[hash:base64:5]'
                }
              }
            },
            {
              loader: 'sass-loader',
              options: {
                implementation: sass
              }
            }
          ]
        }
      ]
    }
  });

  // Remove unwanted NX rules
  const mods = [];
  conf.module.rules.forEach((rule) => {
    if (rule.test != '/\\.css$|\\.scss$|\\.sass$|\\.less$|\\.styl$/') {
      mods.push(rule);
    }
  });
  conf.module.rules = mods;

  return conf;
};

This took me a long time to figure out. Your CSS rule might be different depending on your NX version. Just console.log(rule.test) to check which rule you want to ignore.

NX webpack css-loader issue

养猫人 2025-02-15 23:45:50


In the detail View use @ObservedObject var item: Item

And you need @FetchRequest in the master view. I recommend creating a new project in Xcode and checking the Core Data box to learn the structure.

How to pass an array through an optional class

养猫人 2025-02-15 17:47:49


Separation of concerns. You can configure every entity fluently in OnModelCreating without issue. That is, until your application grows and you find that OnModelCreating all of a sudden contains a thousand lines of code and things start appearing out of order. Then it's time to start separating things out in order to keep your sanity.

The IEntityTypeConfiguration interface provides a means of doing that without having to implement the logic yourself. The configuration for each entity can be separated out into its own self-contained unit, and they can all be applied automatically if you call ApplyConfigurationsFromAssembly in OnModelCreating.

It's a similar concept to the Startup class from before .NET 6, which handled configuration of services for dependency injection and the request pipeline/middleware. It's all done at the time of application startup, but there's no reason everything has to happen in the same unit of code.

The starter templates for .NET 6 now puts all of that inside Program.cs using top level statements, which is fine for small applications, but eventually you're going to need something akin to the Startup class to prevent Program.cs from becoming bloated.

Are there good concrete reasons to use IEntityTypeConfiguration over fluent configuration?
