濫情▎り


濫情▎り 2025-02-20 21:40:04


With the code I wrote below, you can open cmd and set it to the desired directory.

import java.io.File;
import java.io.IOException;

public class Main {
    public static void main( String[] args ) {
        try{
            String command = "ping www.google.com";
            // remove "start " to hide the cmd window
            String[] cmd       = new String[]{ "cmd.exe", "/C", "start " + command };
            File     directory = new File( "C:/Users" );

            ProcessBuilder pb = new ProcessBuilder( cmd );
            pb.directory( directory );

            Process process = pb.start();

        }catch( IOException e ){
            System.out.println( "Something has gone wrong" );
            e.printStackTrace();
        }
    }
}

Let me know if it was helpful.
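If you want to run the command without opening any window at all, a minimal sketch (not part of the original answer) that executes the same command silently and reads its output in Java could look like this:

import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;

public class CaptureOutput {
    public static void main(String[] args) throws IOException, InterruptedException {
        // No "start" token, so no extra cmd window is opened
        ProcessBuilder pb = new ProcessBuilder("cmd.exe", "/C", "ping www.google.com");
        pb.directory(new File("C:/Users"));
        pb.redirectErrorStream(true); // merge stderr into stdout
        Process process = pb.start();

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        System.out.println("Exit code: " + process.waitFor());
    }
}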

Trying to use Java Swing and FileChooser, but there is an error in my code

濫情▎り 2025-02-20 19:09:05


I ended up implementing the dragging behaviour of the element from scratch -- and thus overriding the default behaviour -- instead of trying to set a custom ghost image with event.dataTransfer.setDragImage(img, 0, 0).

For example see this Stack Overflow post: Style drag ghost element

However, any help understanding why the original code doesn't work is still appreciated.

Custom ghost image for a drag-and-drop element based on the element's HTML code

濫情▎り 2025-02-20 16:09:48


Try removing React Strict Mode; it makes components render twice, but only in development, not in production. Put it back afterwards if that turns out to be the cause.
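For illustration, assuming a standard React 18 entry point (not shown in the question), the wrapper in question looks like this; deleting the <React.StrictMode> element stops the deliberate double-invocation in development:

// index.js -- hypothetical entry point
import React from 'react';
import { createRoot } from 'react-dom/client';
import App from './App';

createRoot(document.getElementById('root')).render(
  // Removing this wrapper disables the development-only double render
  <React.StrictMode>
    <App />
  </React.StrictMode>
);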

React useEffect causes a function to run 7 times. I am using useCallback but it still runs many times

濫情▎り 2025-02-20 05:48:16


In application.yml:

spring:
  datasource:
    hikari:
      schema: nameSchema

Where can I set the schema in my project?

濫情▎り 2025-02-20 01:12:48


The format of the workaround wasn't working for me on Windows 10 with Git version 2.32.0. This snippet worked for me:

Host = Hostname.com
IdentityFile = ~/.ssh/id_rsa
IdentitiesOnly = yes
HostkeyAlgorithms = +ssh-rsa
PubkeyAcceptedAlgorithms = +ssh-rsa

Git error: no matching host key type found. Their offer: ssh-rsa

濫情▎り 2025-02-19 20:22:24


It's described here https://docs.confluent.io/platform/current/installation/docker/config-reference.html#confluent-enterprise-ak-configuration

For the Enterprise Kafka (cp-server) image, convert the kafka.properties file variables as below and use them as environment variables:

    Prefix with KAFKA_ for Apache Kafka.
    Prefix with CONFLUENT_ for Confluent components.
    Convert to upper-case.
    Replace a period (.) with a single underscore (_).
    Replace a dash (-) with double underscores (__).
    Replace an underscore (_) with triple underscores (___).

It seems like I have to use

KAFKA_CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1

and

KAFKA_CONFLUENT_SECURITY_EVENT_LOGGER_EXPORTER_KAFKA_TOPIC_REPLICAS: 1
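As an illustration of the conversion rules above (a hypothetical helper, not from the Confluent docs), note that the replacements must be applied in an order that escapes existing underscores and dashes before periods become underscores:

def to_env_var(prop, prefix="KAFKA_"):
    # Escape pre-existing underscores and dashes first,
    # then turn periods into single underscores.
    s = prop.replace("_", "___").replace("-", "__").replace(".", "_")
    return prefix + s.upper()

# to_env_var("confluent.metrics.reporter.topic.replicas")
# -> "KAFKA_CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS"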

How to map 'confluent.'-prefixed variables to the correct Docker env vars?

濫情▎り 2025-02-19 07:06:50


You could use

protected function success($message, $data, $status = Response::HTTP_OK)
{
    return response()->json(array_merge([
        'status' => 'success',
        'message' => $message,
    ], $data->toArray()), $status);
}

And you will get a response like

{
    "status": "success",
    "message": "Fetched customer details",
    "current_page": 1,
    "data": [
        {
            "pid": "hkEH97ur",
            "name": "badri44",
            "email": "[email protected]",
            "mobile": "761487",
            "total_amount": 0,
            "used_amount": 0,
            "remaining_amount": 0,
            "active_status": "Active"
        },
        {
            "pid": "5j9vCsRb",
            "name": "badri3",
            "email": "[email protected]",
            "mobile": "9989893890808118",
            "total_amount": 0,
            "used_amount": 0,
            "remaining_amount": 0,
            "active_status": "Active"
        }
    ],
    "first_page_url": "http://localhost/retail/v1/customers?page=1",
    "from": 1,
    "next_page_url": null,
    "path": "http://localhost/retail/v1/customers",
    "per_page": 15,
    "prev_page_url": null,
    "to": 2,
}
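For context, a hypothetical call site (the controller and model names are assumed, not from the question) would pass the paginator straight through:

// hypothetical controller method
public function index()
{
    $customers = Customer::paginate();
    return $this->success('Fetched customer details', $customers);
}

Because array_merge() spreads the paginator's fields (current_page, data, first_page_url, ...) into the top level of the response, the result is no longer nested inside a second "data" key.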

Getting data inside data when I return the data. Why?

濫情▎り 2025-02-19 05:35:12


I found the problem with my setup.

When the resource is defined in the second pipeline, an automatic artifact download is triggered. That means the release pipeline does not need the step

                    - download: 'BAK-Release'
                      artifact: $(ArtifactName)

Not only that: the download string must match the name of the pipeline resource, which in my case should have been bak-ak. Ref:

resources:
  pipelines:
    - pipeline: 'bak-ak'
      source: 'BAK - Build'
      trigger:
        branches:
          - main

Azure DevOps: second pipeline consuming another pipeline's artifact gets stuck on the job

濫情▎り 2025-02-19 00:14:04


You can use PySpark's when/otherwise functions.

Since you only need to subtract where Q == 1, the result will be null for any other value.

>>> from pyspark.sql.functions import when, col
>>> df.withColumn("result", when(col('Q') == 1, df['median'] - df['2_3avg']).otherwise(None)).show()
+----------+------+---+------+------+
|      date|median|  Q|2_3avg|result|
+----------+------+---+------+------+
|2018-03-31|     6|  1|    15|    -9|
|2018-03-31|    27|  2|    15|  null|
|2018-03-31|     3|  3|    15|  null|
|2018-03-31|    44|  4|    15|  null|
|2018-06-30|     6|  1|    18|   -12|
|2018-06-30|     4|  3|    18|  null|
|2018-06-30|    32|  2|    18|  null|
|2018-06-30|   112|  4|    18|  null|
|2018-09-30|     2|  1|    20|   -18|
|2018-09-30|    23|  4|    20|  null|
|2018-09-30|    37|  3|    20|  null|
|2018-09-30|     3|  2|    20|  null|
+----------+------+---+------+------+

Subtracting column values over a window function

濫情▎り 2025-02-18 20:56:02


Split the string into an array on the quote characters, then censor every odd-indexed element (those are the parts that were inside quotes).

const testString = 'The quick "brown" fox ate his lunch';

function removeWordsInQuotes(string) {
    let wordArray = string.split('"');

    for (let index = 0; index < wordArray.length; index++) {
        const element = wordArray[index];
        if (index % 2 === 1) {
            // this is where the words are replaced with X
            let censorship = '';
            for (let j = 0; j < element.length; j++) {
                censorship += 'X';
            }
            wordArray[index] = censorship;
        }
    }
    return wordArray.join('"');
}

console.log(removeWordsInQuotes(testString)); // The quick "XXXXX" fox ate his lunch

Here is a codepen
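A shorter alternative sketch of the same idea (not from the original answer) uses a regex replace with a callback:

const testString = 'The quick "brown" fox ate his lunch';

// Replace the contents of each quoted segment with X's of equal length
const censored = testString.replace(
  /"([^"]*)"/g,
  (match, inner) => '"' + 'X'.repeat(inner.length) + '"'
);

console.log(censored); // The quick "XXXXX" fox ate his lunch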

How can I censor any words between quotation marks in a text?

濫情▎り 2025-02-18 17:27:12


I have a similar problem: a single customer can monopolize the resources and delay execution for all other customers, just because their events arrived first.

On a different application with a low volume of messages, we just load all the events into memory, creating an in-memory queue for every customer, then dequeue up to N events from each customer queue and re-queue them into a different queue, let's call it the re-ordered queue. The re-ordered queue has a capacity limit (let's say... 100*N), so no additional elements are queued until there is space. This guarantees equal treatment for all customers.

I am facing the same problem now with an application that processes billions of messages. The solution above is impossible; there is just not enough RAM. We can't keep all the data in memory. Creating a topic for each customer also sounds like overkill, especially if you have a variable set of active customers at any given point in time. Nevertheless, Pulsar seems to handle thousands, even millions, of topics well.

So the technique above may work well for you (and for me).
Just read from thousands of topics... write a limited number of messages to another topic, then wait for it to have "space" to continue enqueuing.
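As a sketch of the bounded re-ordering technique described above (class and method names are illustrative assumptions, not a production design):

import java.util.ArrayDeque;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FairReorderer {
    // One in-memory queue per customer
    private final Map<String, Queue<String>> perCustomer = new LinkedHashMap<>();
    // Bounded re-ordered queue: producers block until there is space
    private final BlockingQueue<String> reordered;
    private final int batchSize;

    public FairReorderer(int batchSize, int capacity) {
        this.batchSize = batchSize;
        this.reordered = new ArrayBlockingQueue<>(capacity); // e.g. 100 * N
    }

    public void enqueue(String customerId, String event) {
        perCustomer.computeIfAbsent(customerId, k -> new ArrayDeque<>()).add(event);
    }

    // Round-robin: take up to N events from each customer per pass,
    // blocking when the re-ordered queue is full.
    public void drainOnePass() throws InterruptedException {
        for (Queue<String> q : perCustomer.values()) {
            for (int i = 0; i < batchSize && !q.isEmpty(); i++) {
                reordered.put(q.poll()); // blocks until space is available
            }
        }
    }

    public BlockingQueue<String> output() {
        return reordered;
    }
}

The blocking put() is what caps memory use and enforces the fairness described above: no single customer can flood the output faster than the consumers drain it.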

How to implement fair scheduling among multiple tenants writing to a single stream

濫情▎り 2025-02-18 01:53:12


That's happening because whenever you change the URL, the component gets unmounted. When you come back, it's a fresh component, which means the filtersFetched state will be back at its initial value. I assume you declared it like this:

const [filtersFetched, setFiltersFetched] = useState(false);

That's the normal behaviour in React. If it's really important to you that it shouldn't run on mount after a URL change, you could use localStorage to remember the value of filtersFetched. For that, change your useEffect to:

useEffect(() => {
  if (history.location.search.includes("publisher_id=") && !filtersFetched) {
    fetchAllAxiosResponse<API.Publisher>(CatalogService.getPublishers, new URLSearchParams(), 1500)
        .then((data) => {
          setPublishers(data);
        })
        .catch((err) => {
          console.log(err);
        })
        .finally(() => {
          setFiltersFetched(true);
          localStorage.setItem("filtersFetched", JSON.stringify(true));
        });
  }
}, [filtersFetched, history.location]);

And set the initial value of filtersFetched like below instead of what you had before:

const initialFiltersFetchedValue = localStorage.getItem("filtersFetched")
  ? JSON.parse(localStorage.getItem("filtersFetched"))
  : false;
const [filtersFetched, setFiltersFetched] = useState(initialFiltersFetchedValue);

Or as @Dilshan said, you could use a context that wraps your app, in which you would put the above logic. Also you could use a library like React Query to cache your HTTP requests.

React: can't run logic in useEffect after the URL changes

濫情▎り 2025-02-17 03:49:03


You can forward-fill the time column, then group by time:

df['time'] = df['time'].ffill()
out = (df.groupby('time', as_index=False)
       ['text'].agg(lambda x: '\n'.join(x.dropna())))
print(out)

         time           text
0  01.01.2000  abc\ncde\ndef
1  01.02.2000  abb\nbbc\ndde
2  01.03.2000  123\n278\n782
groups = [g for name, g in out.groupby('time')]
print(groups)

[         time           text
0  01.01.2000  abc\ncde\ndef,          time           text
1  01.02.2000  abb\nbbc\ndde,          time           text
2  01.03.2000  123\n278\n782]

Split a dataframe into multiple dataframes based on entries in a column

濫情▎り 2025-02-16 21:01:18


This is the default behaviour.
When your value = '' and you press delete, it is still equal to '', so onChanged does not get called.
To achieve your goal, you should use a listener like RawKeyboardListener.
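A minimal sketch of that approach (widget names are assumed; note that RawKeyboardListener has been superseded by KeyboardListener in newer Flutter SDKs, and soft keyboards may not emit key events on some platforms):

import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

class BackspaceAwareField extends StatefulWidget {
  const BackspaceAwareField({super.key});

  @override
  State<BackspaceAwareField> createState() => _BackspaceAwareFieldState();
}

class _BackspaceAwareFieldState extends State<BackspaceAwareField> {
  final FocusNode _focusNode = FocusNode();

  @override
  void dispose() {
    _focusNode.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return RawKeyboardListener(
      focusNode: _focusNode,
      onKey: (RawKeyEvent event) {
        // Fires on a hardware keyboard even when the field is already empty
        if (event is RawKeyDownEvent &&
            event.logicalKey == LogicalKeyboardKey.backspace) {
          debugPrint('backspace pressed on empty field');
        }
      },
      child: TextFormField(
        onChanged: (value) => debugPrint('changed: $value'),
      ),
    );
  }
}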

Flutter: TextFormField onChanged is not called when there are no characters

濫情▎り 2025-02-16 05:13:58


I like Pankaj's answer, and I thought I could get rid of the second CTE (without making it a sub-select), but in the end it is needed for scoping reasons.

I did flip to using a variable to control the bucketing. I also added two ORDER BYs: the first on the bucketing, where I used the values in question so it can be seen how to allocate the values into buckets non-randomly; the second a WITHIN GROUP on the array so the values are always "in order". Both are perhaps not required, but they are also not stated as not required; if the answers are always wanted this way, the ORDER BYs are needed.

set width = 4;

with table_of_numbers(val) as (
    -- just a CTE to make 14 numbers
    select 
        row_number() over(order by null)
    from table(generator(ROWCOUNT => 14))
), pre_cond as(
    select *,
        ceil(row_number() over (order by val)/$width) as rn
    from table_of_numbers
)
select 
    array_agg(val)within group(order by val) agg_output 
from pre_cond 
group by rn order by rn;

gives:

AGG_OUTPUT
[ 1, 2, 3, 4 ]
[ 5, 6, 7, 8 ]
[ 9, 10, 11, 12 ]
[ 13, 14 ]

The point to note on the bucketing is that ROW_NUMBER starts at 1, so every Nth value lands exactly on an interval boundary; CEIL is therefore needed to push the prior N-1 partial rows into the same bucket. Since there is no integer-only division (with a natural floor), subtracting 1 from ROW_NUMBER cannot be leveraged for simpler logic.

Get array_agg(id) arrays for every k records in a table
