笙痞


笙痞 2025-02-07 02:35:49

Go through this link:
https://reactrouter.com/docs/en/v6/getting-started/overview

import ReactDOM from "react-dom/client";
import {
  BrowserRouter,
  Routes,
  Route,
} from "react-router-dom";
// import your route components too

const root = ReactDOM.createRoot(
  document.getElementById("root")
);
root.render(
  <BrowserRouter>
    <Routes>
      <Route path="/content" element={<Content />} />
    </Routes>
  </BrowserRouter>
);

Page-not-found error not displayed when using BrowserRouter v6

笙痞 2025-02-06 16:54:53

I found a solution I think and the code should be:

%%bash
git clone https://github.com/my_repository/folder1

%cd folder1

%run -i file1.py

How to upload files from GitHub to Colab without using Google Drive?

笙痞 2025-02-06 13:26:06

Just in case your file was already tracked in past commits, try filtering it.

  • Install git filter-repo (Python-based)
  • Delete any large files from your history, for instance: git filter-repo --strip-blobs-bigger-than 2M (content-based filtering)
  • Force push (git push --mirror; make sure to notify any collaborators on that repository)

File size exceeded error. How to delete the file from Git?

笙痞 2025-02-05 22:12:56

If I add a console.log(sections) I get {}

This is probably because you perform this output as part of the synchronous execution, and do not wait for the promise to resolve. In short, you should only print sections when you are within a then callback, as that will only execute when the object has been populated.

Not your problem, but you should not create a new Promise when you already have one -- it doesn't bring any additional value.

Also, it is bad practice to mutate a global variable (sections) inside a function. Instead, let your parseMD be more independent, and let it just resolve to the information you need for that input string (the id and the value), and leave it to the user of that promise to combine those pairs of information into an object sections.

So do like this:

const parseMD = (val) => unified()
    .use(remarkParse)
    .use(remarkRehype)
    .use(rehypeStringify)
    .process(val.body)
    .then(str => [val.id, str.value]); // No access to `sections` here! Just the pair

Promise.all(contents.map(parseMD)) // Wait for ALL those little promises to resolve
    .then(Object.fromEntries) // Collect the pairs into an object
    .then(sections => { // This is the accumulated object we need
        // Do all you wish to do with sections here (not in global scope)
        console.log(sections);
    })
    .catch(console.log);

Return a value from remark rather than a Promise

笙痞 2025-02-05 04:55:17

You can try with this:

for url in all_urls.split('\n'):
    print(url)

What I'm thinking is that you have a string rather than a list, so when you loop over all the elements of your variable, it just prints individual characters.

This way, split('\n') generates a list by splitting on newlines, and each element will be a single URL.
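
For illustration, here is a minimal sketch of the difference (the string contents below are made-up example data):

# A string containing several URLs separated by newlines (made-up example)
all_urls = "https://example.com/a\nhttps://example.com/b"

# Iterating over the string directly yields one character at a time
for ch in all_urls:
    print(ch)              # prints 'h', 't', 't', 'p', ...

# Splitting on newlines first yields whole URLs
for url in all_urls.split('\n'):
    print(url)             # prints each full URL on its own line

# all_urls.splitlines() would also work and additionally handles '\r\n' endings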

Python - how to print the whole URL in a for loop, not just individual characters

笙痞 2025-02-05 03:55:00
import pandas as pd
data = [{"column1" : "value1", "column2" : {"key1" : "kvalue1", "key2" : "kvalue2" } },
        {"column1" : "value2", "column2" : {"key1" : "kvalue3", "key2" : "kvalue4" } },
        {"column1" : "value3", "column2" : {"key1" : "kvalue5", "key2" : "kvalue6" } }]

# Normalize the array items into a flat record
dataframe = pd.DataFrame()
record = pd.json_normalize(data)

# Append it to the dataframe
dataframe =  pd.concat([dataframe,record], ignore_index=True)

# Renaming columns of the dataframe
col = {"column1":"column1","column2.key1":"key1","column2.key2":"key2"}
df = dataframe.rename(columns = col)[[*col.values()]]

df


OUTPUT:     column1  key1    key2
       0    value1  kvalue1 kvalue2
       1    value2  kvalue3 kvalue4
       2    value3  kvalue5 kvalue6

Also, if you wish to convert the JSON file directly, you can use the functions below:

import json
def read_json(filename: str) -> dict:
    try:
        with open(filename, "r") as f:
            data = json.loads(f.read())
    except Exception as exc:
        raise Exception(f"Reading {filename} file encountered an error") from exc

    return data

def create_dataframe(data: list) -> pd.DataFrame:

    # Declare an empty dataframe to append records
    dataframe = pd.DataFrame()

    # Loop through each record
    for d in data:
        # Normalize the column levels
        record = pd.json_normalize(d)
        # Append it to the dataframe
        dataframe = pd.concat([dataframe, record], ignore_index=True)

    return dataframe


# Read the JSON file as python dictionary
data = read_json(filename="./demo.json")

# Adjust the key below (here 'data') according to your JSON structure
dataframe = create_dataframe(data=data['data'])

# Renaming columns of the dataframe
col = {"column1":"column1","column2.key1":"key1","column2.key2":"key2"}
df = dataframe.rename(columns = col)[[*col.values()]]

df
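
Since the question title mentions a JSONL file (one JSON object per line), here is a minimal sketch of that variant; the file name demo.jsonl and the nested column names are assumptions for illustration:

import pandas as pd

# Read a JSONL file (one JSON object per line) into a DataFrame;
# nested objects arrive as dict values in their column.
raw = pd.read_json("demo.jsonl", lines=True)

# Flatten the nested dicts (e.g. column2.key1, column2.key2) into separate columns
flat = pd.json_normalize(raw.to_dict(orient="records"))

df = flat.rename(columns={"column2.key1": "key1", "column2.key2": "key2"})
print(df)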

How to read a JSONL file and normalize a JSON column

笙痞 2025-02-04 06:38:16

I have reproduced this with a sample API and was able to post the JSON data from Blob Storage to the REST endpoint using a Data Flow activity in Azure Data Factory.

Source dataset:


  • Connect the source to the JSON dataset.


  • Connect the sink to the REST endpoint.


Post JSON data (stored as a JSON file in ADLS) to a REST endpoint in Azure Data Factory

笙痞 2025-02-03 19:44:44

Hi team, for this issue I am attaching documentation for your reference on how to create an alert policy in the GCP console. It should help all GCP folks.

Creating an alert policy metric for Cloud Dataproc job failures on test and prod environments:
• First, we need to go to the Google Cloud console and choose the Flexible environment.
• Go to the Cloud Logging page and view the error logs. Based on the error logs, we need to create a metric.
• Here we need to select the cluster name, service name and error message.


• In that log view there is a Create Metric option. After selecting it, the metric creation form is shown.

• After selecting Create Metric, it shows the details and filter selection, and we can add labels as well.

Note: We are creating this custom metric based on our error logs. Here we need to fill in the mandatory fields such as the log-based metric name, description and filter selection. Once the details are filled in, click Create Metric.

Important: You can check the logs according to your filter selection. There is a Preview Logs option above the filter selection. This preview shows only a one-hour window of logs; that means if your job failed at 10:30 am, it will show logs up to 11:30 am and nothing after that.

Note:
If you want to create a custom metric, I suggest checking the job failure time and creating the metric then. If you create it at that time, you can also apply a regular expression and pick which labels you want from the error logs.

• After creating the custom metric, we need to navigate to the Metrics Explorer page.

• Here we need to select the created metric name from the metric dropdown. After selecting the metric name, we can choose whichever graph is suitable for setting a threshold value.
• After selecting this, it will show the error logs according to your custom filters.
Note: For reference, I am selecting my custom metric, applying a filter on the Dataproc cluster job failure name, and clicking the Apply button.

• After filtering by cluster name, the graph shows the cluster job failure logs. Up to here we have only created the metric.
• Now we need to select the Alerting option from the navigation menu on the left side of the console.
• After selecting the Alerting option, the alerting page is shown.

• Click the Create Policy option. This takes you to the alert policy creation page; select your custom metric name and click the Apply button.

• After selecting your metric, it will show the error graph according to the cluster name. Click the Next button to go to the alert details.

• Here we need to give a threshold value according to our job failures and click the Next button.

• Here we need to fill in the notification channel with your email ID or your team mail group name, and mention a subject for the alert notification. According to your severity, you need to select the policy severity level and give a unique name for your alert policy.

• Once the alert policy is created, you will receive alert notifications according to your severity.

• This is how to create a custom alert metric and policy.

Google Cloud Platform - Create alert policy - how to specify message variables in the alert documentation markup?

笙痞 2025-02-03 18:59:54

BLOB is not a valid JDL field type, try Blob instead (capitalization is important).

For more information you can check Field types and validations.

Error when trying to import a JDL file into a JHipster project using MongoDB

笙痞 2025-02-03 11:48:18

The initial migration that you created probably generated the constraint "FK_Books_Users_UserId". Since you haven't executed it, but added the table(s) manually (as you said), this constraint was never created in the database.

Now that you have added configuration for your class(es), you have the property "userid" in class books, which will generate a foreign key called "FK_books_Users_userid" (notice the lowercase u in userid instead of the uppercase U in UserId).

When your new migration tries to drop "FK_Books_Users_UserId" in the database, it fails since it was never created in the first place.

One solution would be to simply delete this piece of code from your new migration:

migrationBuilder.DropForeignKey(
                name: "FK_Books_Users_UserId",
                table: "Books");

which would leave just this part:

migrationBuilder.AddForeignKey(
                name: "FK_books_Users_userid",
                table: "books",
                column: "userid",
                principalTable: "Users",
                principalColumn: "Id",
                onDelete: ReferentialAction.Cascade);

which I would not recommend, but it will work for that one database.

A second solution would be to clear your database, remove the two created migrations, then create a new migration and execute it. This would give you a clean starting point without future errors.

EF Core fails to execute the database update command: "fail: Microsoft.EntityFrameworkCore.Database.Command[20102]"

笙痞 2025-02-03 11:41:02

I think as of earlier this year a feature was added in buildx to do just this.

If you have Dockerfile syntax 1.4+ and Buildx 0.8+, you can do something like this:

docker buildx build --build-context othersource=../something/something .

Then in your Dockerfile you can reference that context with the --from flag:

COPY --from=othersource . /stuff

See this related post.

How to include files outside of the Docker build context?

笙痞 2025-02-03 08:11:39
while true do
...
elseif not IsPedInAnyPoliceVehicle(playerPed) and not IsPauseMenuActive() then
    TriggerServerEvent("mdt:hotKeyOpen")
end

A while loop has no else/elseif branch; elseif is only valid as part of an if statement.

Error parsing script @mdt2/cl_mdt.lua in resource mdt2: @mdt2/cl_mdt.lua:16: 'end' expected (to close '...' on line 5)

笙痞 2025-02-03 06:36:38

As Edward Thomson mentioned, this is just an indicator, not an error message. My script works fine.

GitHub scheduled workflow won't run every hour

笙痞 2025-02-02 17:39:51

Imagine working in base ten with, say, 8 digits of accuracy. You check whether

1/3 + 2 / 3 == 1

and learn that this returns false. Why? Well, as real numbers we have

1/3 = 0.333.... and 2/3 = 0.666....

Truncating at eight decimal places, we get

0.33333333 + 0.66666666 = 0.99999999

which is, of course, different from 1.00000000 by exactly 0.00000001.


The situation for binary numbers with a fixed number of bits is exactly analogous. As real numbers, we have

1/10 = 0.0001100110011001100... (base 2)

and

1/5 = 0.0011001100110011001... (base 2)

If we truncated these to, say, seven bits, then we'd get

0.0001100 + 0.0011001 = 0.0100101

while on the other hand,

3/10 = 0.01001100110011... (base 2)

which, truncated to seven bits, is 0.0100110, and these differ by exactly 0.0000001.


The exact situation is slightly more subtle because these numbers are typically stored in scientific notation. So, for instance, instead of storing 1/10 as 0.0001100 we may store it as something like 1.10011 * 2^-4, depending on how many bits we've allocated for the exponent and the mantissa. This affects how many digits of precision you get for your calculations.

The upshot is that because of these rounding errors you essentially never want to use == on floating-point numbers. Instead, you can check if the absolute value of their difference is smaller than some fixed small number.
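
As a small illustration in Python (the tolerance 1e-9 below is an arbitrary choice for the example):

import math

print(0.1 + 0.2 == 0.3)        # False, because of the rounding described above
print(0.1 + 0.2)               # 0.30000000000000004

# Compare with a tolerance instead of ==
a, b = 0.1 + 0.2, 0.3
print(abs(a - b) < 1e-9)       # True
print(math.isclose(a, b))      # True; uses a relative tolerance by default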

Is floating point math broken?

笙痞 2025-02-02 07:22:14

First point: since you have not given an index mapping, I assume log_message is defined as a text type field without any analyzer, so the default standard analyzer will be applied to the log_message field.

Here, your regex pattern /[A-Z]*/ will not work, because the standard analyzer converts all tokens to lowercase while indexing. You can read about the standard analyzer here. You can replace your pattern with something like /[a-z]*/.

Second point: the match query does not support regex patterns. You can use Elasticsearch's query_string query type as shown below:

{
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "default_field": "log_message",
            "query": "The application node /[a-z]*/ is down",
            "default_operator": "AND"
          }
        }
      ],
      "filter": [
        {
          "term": {
            "application": "XYZ"
          }
        }
      ]
    }
  }
}

Regex queries will impact your search performance, so use them with caution.

Best Solution:

If your use case is to query on the node name and application name together with the node status (like running or down), then you can extract this information from the message field using a grok pattern in an ingest pipeline, store it as separate fields, and use those for querying.

Below is a sample grok pattern for your log message (you can modify it based on your various log patterns):

The application node %{WORD:node_name} is %{WORD:node_status}

The grok pattern above will give the result below:

{
  "node_name": "ABC",
  "node_status": "down"
}

Sample Ingest Pipeline:

PUT _ingest/pipeline/my-pipeline
{
  "processors": [
    {
      "grok": {
        "field": "log_message",
        "patterns": [
          "The application node %{WORD:node_name} is %{WORD:node_status}"
        ]
      }
    }
  ]
}

You can use the pipeline like below while indexing a document:

POST index_name/_doc?pipeline=my-pipeline
{
  "log_message":"The application node XXX is down"
}

Output Document:

"hits" : [
      {
        "_index" : "index_name",
        "_type" : "_doc",
        "_id" : "bZuMkoABMUDAwut6pbnf",
        "_score" : 1.0,
        "_source" : {
          "node_status" : "down",
          "node_name" : "XXX",
          "log_message" : "The application node XXX is down"
        }
      }
    ]

You can use the query below to get data for a specific node which is down:

{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "node_name": "XXX"
          }
        },
        {
          "match": {
            "node_status": "down"
          }
        }
      ],
      "filter": [
        {
          "term": {
            "application": "XYZ"
          }
        }
      ]
    }
  }
}

Elasticsearch DSL query to match a log message with starting and ending text
