城歌

城歌 2025-02-06 20:24:24

Just swap the setBody method with attachData, like this:

 $pdf = Pdf::loadHTML($html);
 Mail::send(array(), array(), function ($message) use ($fullname, $email, $html, $pdf) {
        $message->to([$email => $fullname])
            ->subject('New Test Mail')
            ->from('[email protected]', 'Test Mail')
            ->attachData($pdf->output(), "text.pdf")
            ->setBody($html, 'text/html');
    });

It worked for me.

How to solve the Swift Message attachData error in Laravel?

城歌 2025-02-06 11:33:19

I'm guessing at the intended content of massOfElephants here:

let startOrder = [1, 4, 5, 3, 6, 2];
let massOfElephants = [100, 200, 500, 2, 8, 9000]

const elephantObject = [];
startOrder.forEach((order, i) => {
  elephantObject.push({
    [order]: massOfElephants[i]
  });
});

let leastMassyElephant = Math.min(...massOfElephants)

let output = elephantObject.filter(item => {
  return item[Object.keys(item)[0]] === leastMassyElephant
})

console.log(output)

How to find the elephant with the lowest value in an object

城歌 2025-02-05 21:47:09

The solution of @mozway is good for small arrays but not for big ones, as it runs in O(n**2) time (i.e. quadratic time; see time complexity for more information). Here is a much better solution for big arrays, running in O(n log n) time (i.e. quasi-linear), based on a fast binary search:

unique_values, index = np.unique(A, return_index=True)
result = index[np.searchsorted(unique_values, B)]
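
For illustration, here is a minimal end-to-end sketch with made-up A and B (it assumes every value of B actually occurs in A, which this approach requires):

import numpy as np

A = np.array([7, 3, 9, 3, 5])
B = np.array([9, 5, 3])

# np.unique returns the sorted unique values of A together with the index of
# the first occurrence of each one; searchsorted locates each value of B in
# that sorted array, and the index array maps it back to a position in A.
unique_values, index = np.unique(A, return_index=True)
result = index[np.searchsorted(unique_values, B)]

print(result)     # [2 4 1]
print(A[result])  # [9 5 3] -> matches B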

Index of the positions of the values of B in A

城歌 2025-02-05 20:15:06

You should probably check the load of the threads, as computed by CFS. The load corresponds to how much time a thread was runnable (not running, just able to run) relative to the time it was allocated by CFS. Batch processing processes would tend to have a high load as they use all the CPU time they can, while latency-sensitive tasks would tend to have a lower load as they usually frequently block/unblock.

Note that this is not 100% accurate, and some latency-sensitive applications may have threads using all their CPU time, therefore having a high load. But the load might be a good first approximation that does not necessitate using hardware counters and implementing complicated stuff.

You can read this great article about how CFS tracks the load of threads: https://lwn.net/Articles/531853/
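
As a rough illustration of where to look, here is a small sketch that reads the per-thread load tracked by CFS from /proc. It assumes a kernel built with CONFIG_SCHED_DEBUG (which exposes /proc/<pid>/sched), and the exact field name (se.avg.load_avg here) can differ between kernel versions:

import os

def thread_loads(pid):
    """Return {tid: load_avg} for every thread of the given process."""
    loads = {}
    task_dir = f"/proc/{pid}/task"
    for tid in os.listdir(task_dir):
        try:
            with open(f"{task_dir}/{tid}/sched") as f:
                for line in f:
                    # line looks like "se.avg.load_avg :  45"
                    if line.startswith("se.avg.load_avg"):
                        loads[int(tid)] = int(line.split(":")[1])
                        break
        except (OSError, ValueError):
            continue  # thread exited, or field not present on this kernel
    return loads

if __name__ == "__main__":
    print(thread_loads(os.getpid()))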

Distinguishing latency-sensitive from batch processes at the kernel level

城歌 2025-02-05 16:23:45

Easy solution: don't use a plugin if it doesn't support StatefulSets and you want to deploy one.

Just install kubectl and use it directly from a shell step to apply the YAML:

stage('Deploy Image') {
      steps{
        script {
          docker.withRegistry( '', registryCredential ) {
            dockerImage.push("$BUILD_NUMBER")
             dockerImage.push('latest')

          }
        }
      }
    }
    stage('Deploy to K8s') {
      steps{
        script {
          sh "sed -i 's,TEST_IMAGE_NAME,harshmanvar/node-web-app:$BUILD_NUMBER,' deployment.yaml"
          sh "cat deployment.yaml"
          sh "kubectl --kubeconfig=/home/ec2-user/config get pods"
          sh "kubectl --kubeconfig=/home/ec2-user/config apply -f deployment.yaml"
        }
      }
    }

How to deploy a stateful application using Jenkins

城歌 2025-02-04 22:36:44

Perhaps a 3D pie chart in base R can work with the explode argument set, e.g.

pie3D(num_data, labels = num_data, explode = 0.25)

Fancy pie chart in R using ggplot2

城歌 2025-02-04 21:53:40

We create a new column after grouping by 'ID'

library(dplyr)
example %>% 
  group_by(ID) %>% 
  mutate(Last_visit = +(row_number() %in% which.max(as.Date(Date1)))) %>%
  ungroup

and then filter/slice based on the column

example %>%
  group_by(ID) %>%
  mutate(Last_visit = +(row_number() %in% which.max(as.Date(Date1)))) %>%
  slice_max(n = 1, order_by = Last_visit) %>%
  ungroup

Output:

# A tibble: 3 × 4
     ID Date1      VarA  Last_visit
  <dbl> <chr>      <chr>      <int>
1     1 2021-01-02 20             1
2     2 2020-12-20 No             1
3     3 1998-05-01 0              1

Another option is to convert the 'Date1' to Date class first, then do an arrange and use distinct

example %>% 
  mutate(Date1 = as.Date(Date1)) %>%
  arrange(ID, desc(Date1)) %>%
  distinct(ID, .keep_all = TRUE) %>% 
  mutate(Last_visit = 1)
  ID      Date1 VarA Last_visit
1  1 2021-01-02   20          1
2  2 2020-12-20   No          1
3  3 1998-05-01    0          1

Find the maximum value for each partition in a data frame in R

城歌 2025-02-04 15:35:37

You have to implement a pre-push hook.

Inside the hook script, examine the branch being pushed and ask the user whether they want to push; if not, return 1, otherwise return 0.

You can use any language you want to write the script.

Look at the .git/hooks/pre-push.sample
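
For example, here is a minimal pre-push hook sketch in Python (the protected branch names are just placeholders; save it as .git/hooks/pre-push and make it executable). Git feeds the list of refs on stdin, so the confirmation prompt reads from /dev/tty, which only works when the push is run from a terminal:

#!/usr/bin/env python3
import sys

PROTECTED = {"refs/heads/main", "refs/heads/master"}  # example branch names

# git passes the remote name and URL as arguments
remote_name, remote_url = sys.argv[1], sys.argv[2]

# stdin carries "<local ref> <local sha> <remote ref> <remote sha>" lines
pushes = [line.split() for line in sys.stdin if line.strip()]

if any(remote_ref in PROTECTED for _local, _sha, remote_ref, _rsha in pushes):
    with open("/dev/tty") as tty:
        print(f"About to push to a protected branch on {remote_name}. Continue? [y/N] ",
              end="", flush=True)
        if tty.readline().strip().lower() != "y":
            sys.exit(1)  # non-zero return aborts the push

sys.exit(0)  # zero lets the push proceed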

Is there a way for VS Code to warn me before I push git changes to a remote branch?

城歌 2025-02-04 13:26:22

expand() only expands the specified index, not its parents. And it shouldn't, because you may want to expand or collapse an index even if any of its parents are collapsed, so that when they are expanded again the previously specified index will use the selected state.

Use setCurrentIndex(), which will implicitly expand the parent indexes and ensure that the specified index becomes visible.

Also, calling resizeColumnToContents() at that moment is wrong, because calling it before trying to expand the index will result in resizing the column to the current contents, which, in this case, will be the root path alone, since all its child items are still collapsed.

Use setSectionResizeMode() instead:

self.fileSelectTreeView.header().setSectionResizeMode(0, 
    QtWidgets.QHeaderView.ResizeToContents)
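
Putting the two points together, a minimal self-contained PyQt5 sketch might look like this (the home directory is just an example target; the actual model, view and path names in your code will differ):

import sys
from PyQt5 import QtCore, QtWidgets

app = QtWidgets.QApplication(sys.argv)

model = QtWidgets.QFileSystemModel()
model.setRootPath(QtCore.QDir.rootPath())

view = QtWidgets.QTreeView()
view.setModel(model)
# keep the name column sized to its contents as branches get expanded
view.header().setSectionResizeMode(0, QtWidgets.QHeaderView.ResizeToContents)

# make the target index current; the view scrolls to it and the
# parent branches are expanded along the way
target = model.index(QtCore.QDir.homePath())
view.setCurrentIndex(target)
view.scrollTo(target)

view.show()
sys.exit(app.exec_())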

Why won't my QFileSystemModel view expand to the set location?

城歌 2025-02-04 00:44:30

You can get the two lists on a single line each and split them on spaces, then convert the items to int, merge them, and order them in descending order.

list1 = input('Enter List 1:  ')
list2 = input('Enter List 2:  ')
list_1 = list1.split()
list_2 = list2.split()
# print list

list_1 = list(map(int, list_1))
list_2 = list(map(int, list_2))

final_list = list_1 + list_2


# 1
print(sorted(final_list, reverse=True))

# 2
final_list.sort(reverse=True)
print(final_list)

Program to read 2 lists and print the numbers from the lists merged in reverse order

城歌 2025-02-03 23:02:45

I just solved this error by restarting Eclipse and running the application.
The reason in my case may be that I replaced my source files without closing my project or Eclipse,
which caused different versions of the classes I was using.

How do I fix a NoSuchMethodError?

城歌 2025-02-03 21:12:30

Instead of using innerHTML, which causes the HTML parser to parse the string, use textContent, which doesn't.

In the example below, I just made an input element to use as data since you didn't share what data was. I also used template literals to inject the dynamic values into the string.

let data = document.querySelector("input");
let newDiv = 
`<div id="receivedMsgDiv">
   <div id="rMsgName" class="messageName">${data.value}</div>
   <div id="receivedMsg" class="bounce">${data.value}</div>
   <div id="rMsgTime">${data.value}</div>
 </div>`;
document.getElementById("msgDiv").textContent += newDiv;
<input name="user" value="John Doe">
<div id="msgDiv"></div>

Sending text instead of elements

城歌 2025-02-03 16:41:59

Based on the comments, it looks like you haven't set up the AMQP service.

Step 1 in this tutorial - https://developer.ibm.com/tutorials/mq-running-ibm-mq-apps-on-quarkus-and-graalvm-using-qpid-amqp-jms-classes/#step-1-set-up-the-amqp-channel-in-ibm-mq

shows how to set up the AMQP listener. We have more AMQP tutorials in the pipeline, and that step might soon end up in its own article, but the tutorials will continue to point at it.

**Note:** The tutorial asks you to clone

git clone -b 9.2.3 ...

The current latest is 9.2.5, i.e.

git clone -b 9.2.5 ...

The key substeps are:

Enable AMQP in the MQ container by editing the install-mq.sh file, and
changing the following AMQP line to:

export genmqpkg_incamqp=1

and

Set up AMQP authority, channel, and service properties by adding the
contents of the add-dev.mqsc.tpl file to the bottom of the
/incubating/mqadvanced-server-dev/10-dev.mqsc.tpl file in your cloned
repository.

How to connect to IBM MQ via the AMQP API in .NET Core

城歌 2025-02-03 13:54:23

A maintained alternative is https://github.com/adnanh/webhook which allows you to install local webhooks with scripts attached.

Example config:

- id: redeploy-webhook
  execute-command: "/var/scripts/redeploy.sh"
  command-working-directory: "/var/webhook"

The default port of the webhook process is 9000, so the following URL would execute the redeploy.sh script from the config example above.

http://yourserver:9000/hooks/redeploy-webhook

This can then be used in your Alertmanager config:

receivers:
- name: 'general'
  webhook_configs:
  - url: http://yourserver:9000/hooks/redeploy-webhook

Executing a shell script via the Prometheus Alertmanager

城歌 2025-02-03 11:53:43

parseInt(), but be aware that this function is a bit different in the sense that, for example, it returns 100 for parseInt("100px").

How do I check if a string is a valid number?
