甜心小果奶


甜心小果奶 2025-02-20 23:02:50


There's a dedicated configuration to ensure what you want, called a PodDisruptionBudget.

https://kubernetes.io/docs/tasks/run-application/configure-pdb/

It ensures that a minimum number of your pods stays available during voluntary disruptions, which helps you if you want to drain or replace a node, etc.
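As a sketch (the name and labels below are assumptions, since the Sentinel deployment itself isn't shown), a PodDisruptionBudget that keeps at least two pods available during voluntary disruptions could look like:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: sentinel-pdb           # hypothetical name
spec:
  minAvailable: 2              # never voluntarily evict below 2 pods
  selector:
    matchLabels:
      app: redis-sentinel      # assumed label on the Sentinel pods
```

With this in place, a node drain will be blocked until evicting a pod would no longer drop the matching set below `minAvailable`.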

How to distribute Redis Sentinel pods across different nodes?

甜心小果奶 2025-02-20 16:19:32


justify-content can only be set at the flex container level.

But in your case, if you want to align a specific item to the right of the container, you can use margin-left: auto:

header {
    width: 100%;
    display: flex;
    flex-direction: row;
    background: red;
}

header * {
  margin: 5px;
  border: solid 1px blue;
}

header .header-sm {
    margin-left: auto;
}
<header>
   <div class="logo">test</div>
   <nav>test</nav>
   <div class="header-sm">test</div>
</header>

Flexbox doesn't fill the width

甜心小果奶 2025-02-19 18:12:20


The pattern you're describing is not a generic pattern we can apply to all possible values in your file listing. However, we can make sure that if these specific values appear in your vector, they get sorted to the front:

Example fs::dir_ls() data:

files <- c('some/dir/LATO-bar.csv', 'some/dir/LATO-baz.csv', 'some/dir/LATO-foo.csv',
           'some/dir/LATO-kwi.csv', 'some/dir/LATO-lut.csv', 'some/dir/LATO-sty.csv',
           'some/dir/ZLATO-bar.csv', 'some/dir/ZLATO-baz.csv')

Code:

order <- c('LATO-sty.csv', 'LATO-lut.csv', 'LATO-mar.csv', 'LATO-kwi.csv',
           'LATO-maj.csv', 'LATO-cze.csv', 'LATO-lip.csv', 'LATO-sie.csv',
           'LATO-wrz.csv', 'LATO-paz.csv', 'LATO-lis.csv', 'LATO-gru.csv')


# get `files` present in `order`
set1 <- files[fs::path_file(files) %in% order] # extract filenames
ids <- match(fs::path_file(set1), order)       # get matching IDs from `order`
ids_sorted <- sort(ids, index.return=TRUE)     # get sort order
set1_sorted <- set1[ids_sorted$ix]             # apply sort order

# get `files` NOT present in `order`, keep them in the same order
set2 <- files[!fs::path_file(files) %in% order]

# join sets
result <- unname(c(set1_sorted, set2))

Result:

> result
[1] "some/dir/LATO-sty.csv"  "some/dir/LATO-lut.csv"  "some/dir/LATO-kwi.csv"  "some/dir/LATO-bar.csv"  "some/dir/LATO-baz.csv" 
[6] "some/dir/LATO-foo.csv"  "some/dir/ZLATO-bar.csv" "some/dir/ZLATO-baz.csv"

How to sort files according to my pattern

甜心小果奶 2025-02-19 17:51:48


You have start variables of x = 0 and y = 20. The outer loop runs 3 times, and the inner loop runs 3 times within each outer iteration. So the statements in the inner loop are executed 9 times, and the statement in the outer loop (y -= 2) runs 3 times.

x = 0 + (9 * (6 + 3)) = 81

y = 20 + (9 * 1) + (3 * -2) = 23
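The counts can be sanity-checked with a quick sketch (shown in Python for brevity); the loop bodies are reconstructed from the arithmetic above, not copied from the original Kotlin:

```python
# Reconstructed loop structure (assumed from the arithmetic above).
x, y = 0, 20
for _ in range(3):          # outer loop: 3 iterations
    for _ in range(3):      # inner loop: runs 3 * 3 = 9 times total
        x += 6 + 3          # contributes 9 * 9 = 81
        y += 1              # contributes +9
    y -= 2                  # outer loop: contributes 3 * -2 = -6

print(x, y)  # 81 23
```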

Kotlin loops, confused

甜心小果奶 2025-02-19 08:35:35



Simply include the plugins in babel.config.js of the React Native project:

@babel/plugin-proposal-export-namespace-from,
react-native-reanimated/plugin,

The full code of babel.config.js is:

plugins: [
  '@babel/plugin-proposal-export-namespace-from',
  'react-native-reanimated/plugin',
]

This works correctly for a React Native application running on the web.

You may need an additional loader to handle the result of these loaders

甜心小果奶 2025-02-18 07:21:42
const mockUseAuthIsAuthenticated = jest.fn(() => false);
const mockUseAuth = jest.fn(() => ({
  isAuthenticated: mockUseAuthIsAuthenticated,
}));

jest.mock("../hooks/useAuth", mockUseAuth);

describe('My test case', () => {
  it(`should return authenticated=TRUE`, () => {
    // Given
    mockUseAuthIsAuthenticated.mockImplementationOnce(
      () => true
    );

    // When
    // assuming `render` comes from the react testing-library
    render(<ComponentThatCallsTheHook />);

    // Then
    expect(mockUseAuthIsAuthenticated).toHaveBeenCalledTimes(1);
    // ... more expectations
  });
});

Jest - destructuring a property

甜心小果奶 2025-02-18 06:27:13


IIUC, you can use a pivot after duplicating the last date of each year:

d = pd.to_datetime(df['date'])

out = (pd
 .concat([df.assign(year=d.dt.year),
          df[df.groupby(d.dt.year, as_index=False).cumcount(ascending=False).eq(0)
            & d.dt.month.eq(12)
            ].assign(year=d.dt.year+1)])
 .assign(col=lambda d: 'g'+d.groupby('year').ngroup().add(1).astype(str))
 .pivot_table(index='date', columns='col', values='year')
 .convert_dtypes()
)

Output:

col           g1    g2    g3    g4    g5    g6    g7
date                                                
2017-03-31  2017  <NA>  <NA>  <NA>  <NA>  <NA>  <NA>
2017-04-03  2017  <NA>  <NA>  <NA>  <NA>  <NA>  <NA>
2017-12-27  2017  <NA>  <NA>  <NA>  <NA>  <NA>  <NA>
2017-12-28  2017  <NA>  <NA>  <NA>  <NA>  <NA>  <NA>
2017-12-29  2017  2018  <NA>  <NA>  <NA>  <NA>  <NA>
2018-01-02  <NA>  2018  <NA>  <NA>  <NA>  <NA>  <NA>
2018-12-31  <NA>  2018  2019  <NA>  <NA>  <NA>  <NA>
2019-01-02  <NA>  <NA>  2019  <NA>  <NA>  <NA>  <NA>
2019-01-03  <NA>  <NA>  2019  <NA>  <NA>  <NA>  <NA>
2019-12-31  <NA>  <NA>  2019  2020  <NA>  <NA>  <NA>
2020-12-30  <NA>  <NA>  <NA>  2020  <NA>  <NA>  <NA>
2020-12-31  <NA>  <NA>  <NA>  2020  2021  <NA>  <NA>
2021-01-20  <NA>  <NA>  <NA>  <NA>  2021  <NA>  <NA>
2021-12-30  <NA>  <NA>  <NA>  <NA>  2021  <NA>  <NA>
2021-12-31  <NA>  <NA>  <NA>  <NA>  2021  2022  <NA>
2022-05-30  <NA>  <NA>  <NA>  <NA>  <NA>  2022  <NA>
2022-05-31  <NA>  <NA>  <NA>  <NA>  <NA>  2022  2023

Or, with a groupby only:

d = pd.to_datetime(df['date'])

out = (pd
 .concat([df.assign(year=d.dt.year),
          df[df.groupby(d.dt.year, as_index=False).cumcount(ascending=False).eq(0)]
            .assign(year=d.dt.year+1)])
      .groupby('year')
      # perform your aggregation here

)

Unusual pandas groupby

甜心小果奶 2025-02-17 19:54:15


The following code is able to keep track of previous searches as you expect. It simply stacks each search together with its result onto an array and drops items from the end when the size exceeds 5.

function App() {
  const [search, setSearch] = React.useState("Dehradun");
  const [searchHistory, setSearchHistory] = React.useState([]);

  const doSearch = () => {
    if (search.length > 0) {
      fetch(
        `https://api.openweathermap.org/data/2.5/weather?q=${search}&units=metric&appid=7938d9005e68d8b258a109c716436c91`
      )
        .then((res) => res.json())
        .then((result) => {
          setSearchHistory((prevState) => [
            [search, result],
            ...prevState.slice(0, 4)
          ]);
          setSearch("");
        });
    }
  };

  return (
    <div>
      <input onChange={(e) => setSearch(e.target.value)} value={search} />
      <button onClick={doSearch}>search</button>
      {searchHistory.map(([search, result], index) => (
        <div key={index}>
          <b>{search}</b> : {JSON.stringify(result)}
        </div>
      ))}
    </div>
  );
}

ReactDOM.render(<App />, document.querySelector('.react'));
<script crossorigin src="https://unpkg.com/react@16/umd/react.development.js"></script>
<script crossorigin src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script>
<div class='react'></div>

How to keep the previous 5 searched values and then use them in the API. I want to display five values

甜心小果奶 2025-02-17 16:42:27


A construction like buffer = &buffer[0]; won't work. After the loop (and setting \0), buffer points to the last character (i.e. to the \0). Taking the address of the 0th element just gives you the address the pointer currently points at, which is the last element. You cannot 'rewind' to the first character that way.
When you then call free(), you pass it a pointer to the last element, so it operates on a memory region that was never returned by an allocation, which is undefined behavior.
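A common fix (sketched below with hypothetical names, since the original loop isn't reproduced here) is to advance a separate cursor and leave the pointer returned by malloc() untouched, so it still refers to the first character and stays valid for free():

```c
#include <stdlib.h>
#include <string.h>

/* Copy `src` into a freshly allocated buffer by walking a separate
 * cursor; the original `buffer` pointer is never advanced, so it
 * still points at the first character and is valid to free(). */
char *copy_with_cursor(const char *src) {
    char *buffer = malloc(strlen(src) + 1);
    if (buffer == NULL)
        return NULL;

    char *cursor = buffer;        /* cursor moves, buffer stays put */
    while (*src != '\0')
        *cursor++ = *src++;
    *cursor = '\0';

    return buffer;                /* caller passes THIS to free() */
}
```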

Allocate and initialize a buffer with file data using a single loop

甜心小果奶 2025-02-17 13:50:20


I am not sure if I understand your need.

Label1: In case you want to extract data from those split zip files.

You don't need to merge them into one. Just put those xxx.001, xxx.002 ... files in the same folder, then unzip xxx.001; the extractor will automatically concatenate the data in xxx.002, xxx.003 ... and unpack the data inside those volumes. At least the Bandizip software does this.

Label2: In case you just want a whole big 82GB zip file.

See Label1 to unzip them, and then create a new 82GB zip file from the extracted files.
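For what it's worth, xxx.001/xxx.002 style volumes are usually plain byte splits, so simple concatenation rebuilds the original file. The snippet below (file names are illustrative; `split -d` is GNU coreutils) demonstrates the principle on a throwaway file:

```shell
# Create a sample file, split it into numbered parts, and show that
# concatenating the parts restores the original byte-for-byte.
printf 'some archive bytes' > original.bin
split -b 8 -d -a 3 original.bin original.bin.   # original.bin.000, .001, ...
cat original.bin.[0-9]* > rejoined.bin
cmp -s original.bin rejoined.bin && echo "identical"
```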

How to extract a specific file from a partially downloaded zip file?

甜心小果奶 2025-02-17 13:10:42


The information provided is rather sparse.

What's your delimiter?

Can the delimiter (quoted) occur in any of the fields?

For a simple case, e.g. delimiter "|" that doesn't occur inside the fields, here's a quick awk hack.

$ cat myDataFile 
a|b|c|d|e|f|g|h|i|j|k|l|m
a|b|c|d|e|f|g|h|i|m
a|b|c|d|e|f|g|h|i|j|k|l|m
a|b|d|e|f|g|h|i|j|k|l|m
a|b|c|d|e|f|g|h|i|j|k|l|m
a|b|c|d|e|f|g|h|i|j|k|l|m
a|b|c|d|e|f|i|j|k|l|m
a|b|c|d|e|f|g|h|i|j|k|l|m

And the awk:

awk -F'|' '{missing=13-NF;if(missing==0){print $0}else{printf "%s",$0;for(i=1;i<=missing-1;i++){printf "|"};print "|"}}' myDataFile 
a|b|c|d|e|f|g|h|i|j|k|l|m
a|b|c|d|e|f|g|h|i|m|||
a|b|c|d|e|f|g|h|i|j|k|l|m
a|b|d|e|f|g|h|i|j|k|l|m|
a|b|c|d|e|f|g|h|i|j|k|l|m
a|b|c|d|e|f|g|h|i|j|k|l|m
a|b|c|d|e|f|i|j|k|l|m||
a|b|c|d|e|f|g|h|i|j|k|l|m

And the awk made pretty and explained:

{
        missing = 13 - NF      # store the number of missing fields
        if (missing == 0) {    # if all fields are present
                print $0       # just print the line 
        } else {               # otherwise
                printf "%s", $0        # first print the line
                for (i = 1; i <= missing - 1; i++) {   # then pad the line with delimiters (w/o a newline)
                        printf "|"                     
                }
                print "|"      # followed by a last one WITH a newline                     
        }
}

Search for characters in lines of a file in Linux and edit them

甜心小果奶 2025-02-17 02:23:34
This keeps rows whose parts collection contains MemberId 1, as well as rows with no parts at all:

queryable = queryable
    .Where(x => x.parts.Select(y => y.MemberId).Contains(1) || !x.parts.Any());

LINQ query with a join to get entries of the left table that have no matching records

甜心小果奶 2025-02-16 08:53:00


Martin Fowler says:

Domain-Driven Design is an approach to software development that centers the development on programming a domain model that has a rich understanding of the processes and rules of a domain.

So DDD is an approach to software development. You can choose DDD for your application and implement all of your code in that direction.

But mediator is a design pattern that solves a specific software problem. You can apply it to a specific problem anywhere in your code, independently of your software development approach, just like the singleton design pattern etc.

You can use the mediator design pattern without DDD, or you can use DDD without mediator. So there are no common principles between DDD and the mediator design pattern.

What are the principles of Domain-Driven Design (DDD)?

甜心小果奶 2025-02-16 05:07:55


While thenReturn would solve this problem, it only works if there are no bugs in the internal classes, which unfortunately does not apply to my situation.

Mockito mock class called inside the class under test

甜心小果奶 2025-02-15 11:00:42

正如您提到的,云功能的用法是正确的方法。
需要一个简单的功能,然后将其与与水桶关联的适当触发器部署。

更多详细信息,可以在此处找到示例:
https://cloud.google.com/functions/docs/docs/docs/calling/calling/calling/storage/storage

As you mentioned, using Cloud Functions is the right approach.
A simple function is required; it should then be deployed with the proper trigger associated with a bucket.

More details, with examples can be found here:
https://cloud.google.com/functions/docs/calling/storage
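As a sketch of such a function (the name and deploy command are assumptions, following the event-driven style in the linked docs), a Python handler for the object-finalize event might look like:

```python
# Hypothetical Cloud Function triggered on object creation in a bucket.
# `event` carries the object metadata; `context` carries event metadata.
def notify_on_upload(event, context):
    message = f"New object {event['name']} in bucket {event['bucket']}"
    print(message)   # shows up in Cloud Logging
    return message

# Deployed (illustrative command, placeholder bucket name) with e.g.:
#   gcloud functions deploy notify_on_upload \
#       --runtime python310 \
#       --trigger-resource MY_BUCKET \
#       --trigger-event google.storage.object.finalize
```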

Getting a notification when a file lands in a folder in GCP
