萌︼了一个春 2025-02-20 06:33:51

Because only a scalar can be passed to Series.shift, get the unique values of shift without 0 (no shift) and assign only the rows matching each condition:

for x in df.loc[df['shift'].ne(0), 'shift'].unique():
    m = df['shift'].eq(x)
    df.loc[m, 'gg_shift'] = df['gg'].shift(x)
    
print(df)
     gg   bool  shift  gg_shift
0  0.88  False      0       NaN
1  0.87   True      0       NaN
2  0.94  False      1      0.87
3  0.17  False      2      0.87
4  0.92   True      0       NaN
5  0.51  False      1      0.92
6  0.10   True      0       NaN
7  0.88  False      1      0.10
8  0.36  False      2      0.10
9  0.14   True      0       NaN
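
For reference, df isn't defined in the snippet above. A minimal sketch that reconstructs the sample frame from the printed output (values copied from that output):

import pandas as pd

# Reconstructed from the printed output above; running the loop on this
# frame reproduces the gg_shift column shown.
df = pd.DataFrame({
    'gg':    [0.88, 0.87, 0.94, 0.17, 0.92, 0.51, 0.10, 0.88, 0.36, 0.14],
    'bool':  [False, True, False, False, True, False, True, False, False, True],
    'shift': [0, 0, 1, 2, 0, 1, 0, 1, 2, 0],
})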

How to use the values of another column as a parameter of the DataFrame

萌︼了一个春 2025-02-19 15:11:40

Summary

In short, the issue is "*"! The * found in the members of the set array is why you're getting the same array each time.

Detailed Info

Regexp is one concept most developers find hard to understand (I am one of them, by the way).

I'll start off with an excerpt from the intro to Regexp on MDN:

Regexp are patterns used to match character combinations in strings - MDN

With that in mind, you want to understand what goes on in your code.
When you create a regex like /A*/ to test "AA3", what gets matched is A, AA, etc. That match result is truthy in JavaScript. You would want stricter matching with ^ or $, or strictly match a digit with \d.
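
A quick hedged illustration of that point (the test strings here are made up):

// /A*/ matches the empty string, so .test() is true for almost any input:
console.log(/A*/.test("XYZ"));     // true ("" at position 0 matches)

// Anchored and digit-strict, the intent becomes explicit:
console.log(/^AA\d$/.test("AA3")); // true
console.log(/^AA\d$/.test("AAJ")); // false (J is not a digit)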

I rewrote your code as a function below:

arr = [
    ["AA*", "ABC", "XYZ"],
    ["A*", "AXY", "AAJ"],
];

findInArray(arr, "AA3") // prints both array
findInArray(arr, "AAJ") // prints second array
findInArray(arr, "ABC") // prints first array

function findInArray(array, value) {
    return array.filter((subArray) =>
        subArray.some((item) => {
            const check = new RegExp(value);
            return check.test(item);
        })
    );
}

Using a regex in an array to search for text in JavaScript

萌︼了一个春 2025-02-18 20:27:17

I think you forgot the value attribute, and if you use React Hooks maybe you can do something like this:

import { useState } from "react";

const [inputValue, setInputValue] = useState("");

...


<input className="input" type="number" placeholder="$" onChange={(evt) => setInputValue(evt.target.value / coinId.price)} value={inputValue} />
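
And to actually show the value in an h1, as the question title asks, a minimal sketch (the surrounding component and coinId are assumed):

<h1>{inputValue}</h1>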

How to display the input value in an h1 in React JS

萌︼了一个春 2025-02-18 18:53:10

What you can do is first set a value for your new column "want", for example 2. Then use ifelse to apply your criterion and fall back to "want" when the condition doesn't match, like this:

mtcars$want <- 2

library(dplyr)
mtcars %>%
  mutate(want = ifelse(carb == 1, qsec, want)) %>%
  head(5)
#>                    mpg cyl disp  hp drat    wt  qsec vs am gear carb  want
#> Mazda RX4         21.0   6  160 110 3.90 2.620 16.46  0  1    4    4  2.00
#> Mazda RX4 Wag     21.0   6  160 110 3.90 2.875 17.02  0  1    4    4  2.00
#> Datsun 710        22.8   4  108  93 3.85 2.320 18.61  1  1    4    1 18.61
#> Hornet 4 Drive    21.4   6  258 110 3.08 3.215 19.44  1  0    3    1 19.44
#> Hornet Sportabout 18.7   8  360 175 3.15 3.440 17.02  0  0    3    2  2.00

Created on 2022-06-30 by the reprex package (v2.0.1)
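
Equivalently (a sketch on the same data), the default can be supplied inline, so the separate column assignment isn't needed:

library(dplyr)
mtcars %>%
  mutate(want = ifelse(carb == 1, qsec, 2)) %>%
  head(5)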

Return the value of column B in column C if column A equals a criterion

萌︼了一个春 2025-02-18 15:49:41

In Swift 5.0+

let ids = [1, 2]

let idsString = ids.map { id -> String in
    return "{\"packageId\"" + ":" + "\"\(id)\"}"
}.joined(separator: ",")

let result = "[" + idsString + "]"
print(result)

You will get the result:

[{"packageId":"1"},{"packageId":"2"}]

Use an array as a string and remove the double-quote symbols to change the array content type from String to Any, since the whole array will be a string whose type can be Any

萌︼了一个春 2025-02-18 11:58:54

I've followed Larnu's recommendation in the comments on the OP and adjusted the SQL script to the below; this provided me with what I need. (EndsMarked flags the first and last row of each consecutive status run per asset, GroupsNumbered numbers those runs, and the outer query collapses each run into a start date and an end date.)

WITH EndsMarked
AS
    (SELECT F.Asset_Id
          , F.Status_Id
          , F.Creation_Date
          , CASE
                 WHEN LAG(F.Status_Id, 1) OVER (PARTITION BY F.Asset_Id
                                                ORDER BY F.Asset_Id
                                                       , F.Creation_Date
                                               ) IS NULL
                      AND ROW_NUMBER() OVER (PARTITION BY F.Asset_Id
                                             ORDER BY F.Creation_Date
                                            ) = 1 THEN 1
                 WHEN LAG(F.Status_Id, 1) OVER (PARTITION BY F.Asset_Id
                                                ORDER BY F.Asset_Id
                                                       , F.Creation_Date
                                               ) <> LAG(F.Status_Id, 0) OVER (PARTITION BY F.Asset_Id
                                                                              ORDER BY F.Asset_Id
                                                                                     , F.Creation_Date
                                                                             ) THEN 1
                 ELSE 0
            END AS IS_START
          , CASE
                 WHEN LEAD(F.Status_Id, 1) OVER (PARTITION BY F.Asset_Id
                                                 ORDER BY F.Asset_Id
                                                        , F.Creation_Date
                                                ) IS NULL
                      AND ROW_NUMBER() OVER (PARTITION BY F.Asset_Id
                                             ORDER BY F.Creation_Date DESC
                                            ) = 1 THEN 1
                 WHEN LEAD(F.Status_Id, 0) OVER (PARTITION BY F.Asset_Id
                                                 ORDER BY F.Asset_Id
                                                        , F.Creation_Date
                                                ) <> LEAD(F.Status_Id, 1) OVER (PARTITION BY F.Asset_Id
                                                                                ORDER BY F.Asset_Id
                                                                                       , F.Creation_Date
                                                                               ) THEN 1
                 ELSE 0
            END AS IS_END
     FROM
            (
            SELECT mrsabda.Assets_AssetId        AS Asset_Id
                 , mrsabda.CreationDate          AS Creation_Date
                 , mrsabda.Assets_Asset_StatusId AS Status_Id
            --,[Aantal Facturen]
            FROM   MRR.MRR_Round_Status_Audit_Buildup_Dim_Asset AS mrsabda
            ) AS F )
   , GroupsNumbered
AS
    (SELECT EndsMarked.Asset_Id
          , EndsMarked.Status_Id
          , EndsMarked.Creation_Date
          , EndsMarked.IS_START
          , EndsMarked.IS_END
          , COUNT(   CASE
                          WHEN EndsMarked.IS_START = 1 THEN 1
                     END
                 ) OVER (ORDER BY EndsMarked.Asset_Id
                                , EndsMarked.Creation_Date
                        ) AS GroupNum
     FROM   EndsMarked
     WHERE  EndsMarked.IS_START = 1
            OR EndsMarked.IS_END = 1)
SELECT   a.Asset_Id
       , a.Status_Id
       , a.GROUP_START                                                                                   AS Start_Date
       , DATEADD(SECOND, -1, LEAD(a.GROUP_START, 1, '2099-12-31 00:00:01') OVER (ORDER BY a.GROUP_START)) AS End_Date
FROM
         (
         SELECT   GroupsNumbered.Asset_Id
                , GroupsNumbered.Status_Id
                , MIN(GroupsNumbered.Creation_Date) AS GROUP_START
                , MAX(GroupsNumbered.Creation_Date) AS GROUP_END
         FROM     GroupsNumbered
         GROUP BY GroupsNumbered.Asset_Id
                , GroupsNumbered.Status_Id
                , GroupsNumbered.GroupNum
         ) AS a
GROUP BY a.Asset_Id
       , a.Status_Id
       , a.GROUP_START

SQL - find the end date and start date based on timestamped records

萌︼了一个春 2025-02-18 07:58:41

Removing SingleChildScrollView might help.
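
For context (an assumption about the usual cause, not stated in the original answer): a ListView inside a SingleChildScrollView competes with it for scroll gestures. If the outer scroll view has to stay, a common alternative is to let the outer view do the scrolling:

ListView(
  shrinkWrap: true,                              // size the list to its children
  physics: const NeverScrollableScrollPhysics(), // hand scrolling to the outer view
  children: const [ /* ... */ ],
)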

ListView won't scroll inside a scroll view

萌︼了一个春 2025-02-18 07:24:59

First get the list of directories by using

<?php

$dirs = array_filter(glob('*'), 'is_dir');
//print_r($dirs);
?>

and check for the file name by using a loop over each folder

<?php foreach ($dirs as $key) { ?>

    <!-- your code, as required -->

<?php } ?>

To check the list of files in a directory, the same glob approach works (for example, filtering with is_file).
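
For the subdirectory part of the question, a hedged recursive sketch (the file name is a placeholder):

<?php
$target = 'example.txt'; // hypothetical file to search for
$it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator('.'));
foreach ($it as $file) {
    if ($file->getFilename() === $target) {
        echo $file->getPathname(), PHP_EOL;
    }
}
?>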

Function to search for a specific file in a directory and its subdirectories

萌︼了一个春 2025-02-18 06:59:45

Update your CharacterResponse model class to parse data accordingly.

data class CharacterResponse(
    val info: Info,
    val results: List<Characters>
)
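
This model assumes the API response is shaped roughly like the following (an assumption inferred from the data class, not given in the original):

{
  "info": { ... },
  "results": [ { ... }, { ... } ]
}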

RecyclerView not showing items in Kotlin

萌︼了一个春 2025-02-18 05:26:13

Reason for the Error

As correctly stated in the comments by @Newbie, the issue isn't the model itself but the CUDA context. When new child processes are forked, the parent's memory is shared read-only with the child, but the CUDA context doesn't support this sharing; it must be copied to the child. Hence, it reports the above-mentioned error.

Spawn instead of Fork

To resolve this issue, we have to change the start method for the child processes from fork to spawn with multiprocessing.set_start_method. The following simple example works fine:

import torch
import torch.multiprocessing as mp


def f(y):
    y[0] = 1000


if __name__ == '__main__':
    x = torch.zeros(1).cuda()
    x.share_memory_()

    mp.set_start_method('spawn')
    p = mp.Process(target=f, args=(x,), daemon=True)
    p.start()
    p.join()
    print("x =", x.item())

When running this code, a second CUDA context is initialized (this can be observed via watch -n 1 nvidia-smi in a second window), and f is executed after the context was initialized completely. After this, x = 1000.0 is printed on the console, thus, we confirmed that the tensor x was successfully shared between the processes.

However, Gunicorn internally uses os.fork to start the worker processes, so multiprocessing.set_start_method has no influence on Gunicorn's behavior. Consequently, initializing the CUDA context in the root process must be avoided.

Solution for Gunicorn

In order to share the model among the worker processes, we thus must load the model in one single process and share it with the workers. Luckily, sending a CUDA tensor via a torch.multiprocessing.Queue to another process doesn't copy the parameters on the GPU, so we can use those queues for this problem.

import time

import torch
import torch.multiprocessing as mp


def f(q):
    y = q.get()
    y[0] = 1000


def g(q):
    x = torch.zeros(1).cuda()
    x.share_memory_()
    q.put(x)
    q.put(x)
    while True:
        time.sleep(1)  # this process must live as long as x is in use


if __name__ == '__main__':
    queue = mp.Queue()
    pf = mp.Process(target=f, args=(queue,), daemon=True)
    pf.start()
    pg = mp.Process(target=g, args=(queue,), daemon=True)
    pg.start()
    pf.join()
    x = queue.get()
    print("x =", x.item())  # Prints x = 1000.0

For the Gunicorn server, we can use the same strategy: A model server process loads the model and serves it to each new worker process after its fork. In the post_fork hook the worker requests and receives the model from the model server. A Gunicorn configuration could look like this:

import logging

from client import request_model
from app import app

logging.basicConfig(level=logging.INFO)

bind = "localhost:8080"
workers = 1
zmq_url = "tcp://127.0.0.1:5555"


def post_fork(server, worker):
    app.config['MODEL'], app.config['COUNTER'] = request_model(zmq_url)

In the post_fork hook, we call request_model to get a model from the model server and store the model in the configuration of the Flask application. The method request_model is defined in my example in the file client.py and defined as follows:

import logging
import os

from torch.multiprocessing.reductions import ForkingPickler
import zmq


def request_model(zmq_url: str):
    logging.info("Connecting")
    context = zmq.Context()
    with context.socket(zmq.REQ) as socket:
        socket.connect(zmq_url)
        logging.info("Sending request")
        socket.send(ForkingPickler.dumps(os.getpid()))
        logging.info("Waiting for a response")
        model = ForkingPickler.loads(socket.recv())
    logging.info("Got response from object server")
    return model

We make use of ZeroMQ for inter-process communication here because it allows us to reference servers by name/address and to outsource the server code into its own application. multiprocessing.Queue and multiprocessing.Process apparently don't work well with Gunicorn. multiprocessing.Queue uses the ForkingPickler internally to serialize the objects, and the module torch.multiprocessing alters it in a way that Torch data structures can be serialized appropriately and reliably. So, we use this class to serialize our model to send it to the worker processes.

The model is loaded and served in an application that is completely separate from Gunicorn and defined in server.py:

from argparse import ArgumentParser
import logging

import torch
from torch.multiprocessing.reductions import ForkingPickler
import zmq


def load_model():
    model = torch.nn.Linear(10000, 50000)
    model.cuda()
    model.share_memory()

    counter = torch.zeros(1).cuda()
    counter.share_memory_()
    return model, counter


def share_object(obj, url):
    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind(url)
    while True:
        logging.info("Waiting for requests on %s", url)
        message = socket.recv()
        logging.info("Got a message from %d", ForkingPickler.loads(message))
        socket.send(ForkingPickler.dumps(obj))


if __name__ == '__main__':
    parser = ArgumentParser(description="Serve model")
    parser.add_argument("--listen-address", default="tcp://127.0.0.1:5555")
    args = parser.parse_args()

    logging.basicConfig(level=logging.INFO)
    logging.info("Loading model")
    model = load_model()
    share_object(model, args.listen_address)

For this test, we use a model of about 2GB in size to see an effect on the GPU memory allocation in nvidia-smi and a small tensor to verify that the data is actually shared among the processes.

Our sample flask application runs the model with a random input, counts the number of requests and returns both results:

from flask import Flask
import torch

app = Flask(__name__)


@app.route("/", methods=["POST"])
def infer():
    model: torch.nn.Linear = app.config['MODEL']
    counter: torch.Tensor = app.config['COUNTER']
    counter[0] += 1  # not thread-safe
    input_features = torch.rand(model.in_features).cuda()
    return {
        "result": model(input_features).sum().item(),
        "counter": counter.item()
    }

Test

The example can be run as follows:

$ python server.py &
INFO:root:Waiting for requests on tcp://127.0.0.1:5555 
$ gunicorn -c config.py app:app
[2023-02-01 16:45:34 +0800] [24113] [INFO] Starting gunicorn 20.1.0
[2023-02-01 16:45:34 +0800] [24113] [INFO] Listening at: http://127.0.0.1:8080 (24113)
[2023-02-01 16:45:34 +0800] [24113] [INFO] Using worker: sync
[2023-02-01 16:45:34 +0800] [24186] [INFO] Booting worker with pid: 24186
INFO:root:Connecting
INFO:root:Sending request
INFO:root:Waiting for a response
INFO:root:Got response from object server

Using nvidia-smi, we can observe that now, two processes are using the GPU, and one of them allocates 2GB more VRAM than the other. Querying the flask application also works as expected:

$ curl -X POST localhost:8080
{"counter":1.0,"result":-23.956459045410156} 
$ curl -X POST localhost:8080
{"counter":2.0,"result":-8.161510467529297}
$ curl -X POST localhost:8080
{"counter":3.0,"result":-37.823692321777344}

Let's introduce some chaos and terminate our only Gunicorn worker:

$ kill 24186
[2023-02-01 18:02:09 +0800] [24186] [INFO] Worker exiting (pid: 24186)
[2023-02-01 18:02:09 +0800] [4196] [INFO] Booting worker with pid: 4196
INFO:root:Connecting
INFO:root:Sending request
INFO:root:Waiting for a response
INFO:root:Got response from object server

It's restarting properly and ready to answer our requests.

Benefit

Initially, the amount of required VRAM for our service was (SizeOf(Model) + SizeOf(CUDA context)) * Num(Workers). By sharing the weights of the model, we can reduce this by SizeOf(Model) * (Num(Workers) - 1) to SizeOf(Model) + SizeOf(CUDA context) * Num(Workers).
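
For example, with hypothetical numbers: a 2 GB model, a 0.5 GB CUDA context and 4 workers would need (2 + 0.5) * 4 = 10 GB before sharing, versus 2 + 0.5 * 4 = 4 GB after.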

Caveats

The reliability of this approach relies on the single model server process. If that process terminates, not only will newly started workers get stuck, but the models in the existing workers will become unavailable and all workers crash at once. The shared tensors/models are only available as long as the server process is running. Even if the model server and Gunicorn workers are restarted, a short outage is certainly unavoidable. In a production environment, you thus should make sure this server process is kept alive.

Additionally, sharing data among different processes can have side effects. When sharing changeable data, proper locks must be used to avoid race conditions.
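
A minimal sketch of such a lock (an assumption: it would be distributed to the workers alongside the model, for example through the same queue or model server):

import torch.multiprocessing as mp

lock = mp.Lock()

def safe_increment(counter, lock):
    with lock:  # serializes access to the shared tensor across processes
        counter[0] += 1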

Gunicorn + CUDA: cannot re-initialize CUDA in a forked subprocess

萌︼了一个春 2025-02-18 04:28:12

You can either create a custom class or create another array to capture the information, like

public class BoardTile {
  String info;
  boolean visited;
}

And then use a BoardTile[][] instead of a String[][]

Alternatively, you could create a separate 2D array of booleans

private boolean[][] visited = new boolean[widthOfBoard][heightOfBoard];
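
A hedged usage sketch (the dimensions and indices are made up):

int widthOfBoard = 8, heightOfBoard = 8; // hypothetical dimensions
boolean[][] visited = new boolean[widthOfBoard][heightOfBoard];

visited[0][0] = true; // mark a tile when the player steps on it

// The board is fully visited once every entry is true:
boolean allVisited = true;
for (boolean[] row : visited)
    for (boolean v : row)
        allVisited &= v;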

How to check whether the player has visited all tiles of the game board?

萌︼了一个春 2025-02-18 02:51:48

std::to_string() returns a std::string. That's what it does, if you check your C++ textbook for a description of this C++ library function that's what you will read there.

encString.push_back( /* something */ )

Because encString is a std::vector<char>, it logically follows that the only thing that can be push_back()'d into it is a char. Just a single char. C++ does not allow you to pass an entire std::string to a function that takes a single char parameter. C++ does not work this way; C++ allows only certain, specific conversions between different types, and this isn't one of them.

And that's why encString.push_back(to_string(runLength)); does not work. The [0] operator returns the first char from the returned std::string. What a lucky coincidence! You get a char from that, the push_back() expects a single char value, and everyone lives happily ever after.

Also, it is important to note that you do not "gotta add [0]". You could use [1], if you needed to add the 2nd character from the string, or any other character from the string, in the same manner. This explains the compilation error. Whether [0] is the right solution, or not, is something that you'll need to figure out separately. You wanted to know why this does not compile without the [0], and that's the answer: to_string() returns a std::string but you must push_back() a single char value, and using [0] makes it happen. Whether it's the right char, or not, that's a completely different question.
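
A minimal sketch of the distinction (runLength is a hypothetical counter in the spirit of the original code):

#include <string>
#include <vector>

int main() {
    std::vector<char> encString;
    int runLength = 7;
    // encString.push_back(std::to_string(runLength));  // error: std::string is not a char
    encString.push_back(std::to_string(runLength)[0]);  // OK: a single char, here '7'
    // To append every character of the string instead:
    std::string s = std::to_string(runLength);
    encString.insert(encString.end(), s.begin(), s.end());
}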

Why do I have to make a 2D array for this?

萌︼了一个春 2025-02-17 23:59:17

The answer from mkopriva: field.Set(reflect.ValueOf(articles))
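
A minimal runnable sketch around that call (the struct and field names are assumptions):

package main

import (
    "fmt"
    "reflect"
)

type Page struct {
    Articles []string // hypothetical slice field
}

func main() {
    articles := []string{"a", "b"}
    p := Page{}
    field := reflect.ValueOf(&p).Elem().FieldByName("Articles")
    field.Set(reflect.ValueOf(articles)) // the line from the answer
    fmt.Println(p.Articles)              // [a b]
}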

Setting a reflect.Value to a slice in Go

萌︼了一个春 2025-02-17 19:11:53

Is it cheating to use the preprocessor?

struct A {

    #define GETTER_CORE_CODE       \
    /* line 1 of getter code */    \
    /* line 2 of getter code */    \
    /* .....etc............. */    \
    /* line n of getter code */       

    // ^ NOTE: line continuation char '\' on all lines but the last

   B& get() {
        GETTER_CORE_CODE
   }

   const B& get() const {
        GETTER_CORE_CODE
   }

   #undef GETTER_CORE_CODE

};

It's not as fancy as templates or casts, but it does make your intent ("these two functions are to be identical") pretty explicit.
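
For comparison, a sketch of the cast-based idiom that the last sentence alludes to (B and the getter body are placeholders, as above):

struct A {
    const B& get() const {
        // ... the actual getter logic lives here, once ...
    }
    B& get() {
        // Delegate to the const overload, then cast the result back.
        return const_cast<B&>(static_cast<const A&>(*this).get());
    }
};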

How do I remove code duplication between similar const and non-const member functions?

萌︼了一个春 2025-02-17 17:16:04

You will need to wrap your call to getData with withCheckedThrowingContinuation, like this:

func holDaten(daten: String) async throws -> String {
    let ref = Database.database().reference()
    let kurstxt = UserDefaults.standard.value(forKey: "aktuellerKurs") as! String
    return try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<String, Error>) in
        ref.child("\(kurstxt ?? "default value")/\(daten ?? "")").getData { error, snapshot in
            guard error == nil else {
                print(error!.localizedDescription)
                continuation.resume(with: .failure(error))
                return
            }
            let ergebnis = snapshot?.value as? String ?? "Unknown"
            continuation.resume(with: .success(ergebnis))
        }
    }
}
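
A hedged usage sketch of calling it from synchronous code (the argument is a placeholder):

Task {
    do {
        let ergebnis = try await holDaten(daten: "beispiel")
        print(ergebnis)
    } catch {
        print(error)
    }
}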

Swift function returns (), but the value exists as something else
