总攻大人


总攻大人 2025-02-20 10:45:51

Try this. I changed oSheet to oBook!

Dim SaveFileDialog1 As New SaveFileDialog()
SaveFileDialog1.Filter = "Excel files (*.xlsx)|*.xlsx"
SaveFileDialog1.FilterIndex = 1
SaveFileDialog1.RestoreDirectory = True
If SaveFileDialog1.ShowDialog() = DialogResult.OK Then
    oBook.SaveAs(SaveFileDialog1.FileName)
    MsgBox("Excel File Created Successfully!")
Else
    Return
End If
oBook.Close()
oExcel.Quit()

Error saving the file after running the code

总攻大人 2025-02-20 07:17:45

Try to find a way to change the maxRequestLength property, although I don't think it is the problem if you are still on the 4 MB default or have already changed it yourself.

ASP.NET Core MVC: IFormFile returns null when uploading a file

总攻大人 2025-02-20 06:57:21

The test result indicates that Euclidean distance and cosine distance are likely the same distance function (up to a certain scaling factor) for this specific type of data. You could verify this with heatmaps of the two distance matrices.
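
For L2-normalized rows this equivalence is exact, since ||u − v||² = 2(1 − cos(u, v)). A tiny self-contained sketch (the two vectors below are made-up unit vectors, not your data):

```python
import math

# Hypothetical unit-length vectors (assumption: the data rows are L2-normalized).
u = [0.6, 0.8]
v = [0.8, 0.6]

dot = sum(a * b for a, b in zip(u, v))  # cosine similarity for unit vectors
cos_dist = 1.0 - dot                    # cosine distance
euc_dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Euclidean distance is then a monotone function of cosine distance,
# so both give the same neighbor ordering (what t-SNE cares about).
assert abs(euc_dist ** 2 - 2 * cos_dist) < 1e-12
```

Because t-SNE depends on relative distances between neighbors, any monotone rescaling like this produces essentially the same embedding.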

Why do my t-SNE plots look similar with Euclidean and cosine distances

总攻大人 2025-02-20 03:04:44

Using sed

$ sed -i.bak '/pam_wheel.so use_ui$/s/^/#/' /etc/pam.d/su

-i.bak will create a backup of the file with a .bak extension (or anything you wish to name it).

sed question about how to add # to the start of a pattern match

总攻大人 2025-02-20 02:30:56

You have hardcoded the element names of the XML file's structure. In the first file, the data is in elements called path, so you use root.iter('path').

However, in the second file there is no path element in the XML, so that loop runs over an empty iterator. If you really want to color all elements in the given color, do not pass the optional tag filter to iter():

for path in root.iter():
    path.attrib['fill'] = color
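
As a runnable illustration of the unfiltered iter() (the SVG string and element names below are made up for the example):

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal SVG document; real files will have different elements.
svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <rect width="10" height="10"/>
  <circle cx="5" cy="5" r="4"/>
</svg>"""

root = ET.fromstring(svg)
color = "#ff0000"

# iter() with no tag filter visits every element (including the root <svg>),
# regardless of tag name or XML namespace.
for el in root.iter():
    el.attrib["fill"] = color
```

Note that namespaced files are another common reason a filter like iter('path') matches nothing, since the qualified tag is actually '{http://www.w3.org/2000/svg}path'.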

Fill color in an SVG image

总攻大人 2025-02-19 22:58:41

There are many factors to consider here regarding the performance of the script. Here are some best practices you can apply to make your script faster:
https://developers.google.com/apps-script/guides/support/best-practices#:~:text=Use%20batch%20operations,-Scripts%20commonly%20need&text=Alternating%20read%20and%20write%20commands,should%20not%20follow%20or%20use.

Looking at your script, for such a short script it should not take that long unless you are working with very big data in your spreadsheet.

Upon checking, all Google services are working fine as of today:

Reference Link: https://www.google.com/appsstatus/dashboard/

Also, such a huge difference, from 7 seconds to over a minute, cannot be caused by a minor change in your script or your data alone. If you are sure no changes were made and the runtime suddenly increased, I suggest you post your issue/concern here: https://issuetracker.google.com/issues

Spreadsheet script that usually takes 7 seconds to run now takes a minute

总攻大人 2025-02-19 20:54:11

I fixed it! Nginx could not find the web container because the links between the nginx and web containers were not set in the AWS ECS task definition. This is what my task definition looked like before:

{
  ...
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "xxxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/nginx:latest",
      "cpu": 0,
      "links": [],
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 0,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      ...
    },
    {
      "name": "web",
      "image": "xxxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/web:latest",
      "cpu": 0,
      "links": [],
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 3000,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      ...
    }
  ]

  ...
}

Now it looks like the following (note the links):

{
  ...
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "xxxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/nginx:latest",
      "cpu": 0,
      "links": [
        "web"
      ],
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 0,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      ...
    },
    {
      "name": "web",
      "image": "xxxxxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/web:latest",
      "cpu": 0,
      "links": [],
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 3000,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      ...
    }
  ]

  ...
}

I had to use the old ECS console in order to add the web container as a link; the new console doesn't have that option yet. See this SO answer for a screenshot.

How do I point an Nginx Docker container to the correct container for my web app?

总攻大人 2025-02-19 14:30:57

The API that calls into the REST API UploadPartCopy is CopyPart, via CopyPartRequest. You can find a fairly complete demo of this API and the related APIs from Amazon; here is the salient portion that should demonstrate how to call it:

// Create a list to store the upload part responses.
List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();
List<CopyPartResponse> copyResponses = new List<CopyPartResponse>();

// Setup information required to initiate the multipart upload.
InitiateMultipartUploadRequest initiateRequest =
    new InitiateMultipartUploadRequest
    {
        BucketName = targetBucket,
        Key = targetObjectKey
    };

// Initiate the upload.
InitiateMultipartUploadResponse initResponse = await s3Client.InitiateMultipartUploadAsync(initiateRequest);

// Save the upload ID.
String uploadId = initResponse.UploadId;

// Get the size of the object.
GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest
{
    BucketName = sourceBucket,
    Key = sourceObjectKey
};

GetObjectMetadataResponse metadataResponse = await s3Client.GetObjectMetadataAsync(metadataRequest);
long objectSize = metadataResponse.ContentLength; // Length in bytes.

// Copy the parts.
long partSize = 5242880; // Part size is 5 MiB.
long bytePosition = 0;

for (int i = 1; bytePosition < objectSize; i++)
{
    CopyPartRequest copyRequest = new CopyPartRequest
    {
        DestinationBucket = targetBucket,
        DestinationKey = targetObjectKey,
        SourceBucket = sourceBucket,
        SourceKey = sourceObjectKey,
        UploadId = uploadId,
        FirstByte = bytePosition,
        LastByte = bytePosition + partSize - 1 >= objectSize ? objectSize - 1 : bytePosition + partSize - 1,
        PartNumber = i
    };

    copyResponses.Add(await s3Client.CopyPartAsync(copyRequest));

    bytePosition += partSize;
}

// Set up to complete the copy.
CompleteMultipartUploadRequest completeRequest =
new CompleteMultipartUploadRequest
{
    BucketName = targetBucket,
    Key = targetObjectKey,
    UploadId = initResponse.UploadId
};
completeRequest.AddPartETags(copyResponses);

// Complete the copy.
CompleteMultipartUploadResponse completeUploadResponse = await s3Client.CompleteMultipartUploadAsync(completeRequest);

UploadPartCopy Amazon S3 API

总攻大人 2025-02-19 08:28:39

You can COALESCE the column values e.g.

declare @demo table (id int, A int null, B int null, C int null);
insert into @demo (id, A,B,C) values
( 3,   NULL, NULL,  1),
( 7,   2,    NULL,  5),
( 1,   NULL, 9,     2);

select id, A,B,C, COALESCE(A,B,C,0) as firstNonNull
from @demo

Edit: as has been pointed out in the comments, the OP wanted both the value and the name of the column in which that value was found. Here is my amended example:

select ID, A, B, C, COALESCE(A, B, C) as D,
    case COALESCE(A, B, C)
        when A THEN 'A'
        when B THEN 'B'
        when C THEN 'C'
        else 'None'
    end as Choice
from @demo
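
For a quick runnable check of the COALESCE + CASE pattern outside SQL Server, here is a sketch using Python's built-in sqlite3 (the table and values mirror the example above; SQLite's COALESCE/CASE semantics agree here, though other T-SQL details may differ):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE demo (id INT, A INT, B INT, C INT)")
con.executemany(
    "INSERT INTO demo VALUES (?, ?, ?, ?)",
    [(3, None, None, 1), (7, 2, None, 5), (1, None, 9, 2)],
)

# COALESCE picks the first non-NULL value; the CASE then reports
# which column that value came from (NULL comparisons never match).
rows = con.execute("""
    SELECT id,
           COALESCE(A, B, C) AS D,
           CASE COALESCE(A, B, C)
                WHEN A THEN 'A'
                WHEN B THEN 'B'
                WHEN C THEN 'C'
                ELSE 'None'
           END AS Choice
    FROM demo
""").fetchall()
```

One caveat inherited from the original: if two columns hold the same value, the CASE names the leftmost matching column, not necessarily the one COALESCE actually read.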

New column equal to the first column value that is not NULL

总攻大人 2025-02-19 07:12:43

Move the GeometryReader outside of the ScrollView, like:

var body: some View {
  GeometryReader { geo in     // << here !!
    ScrollView {
        VStack {
            HStack {
                Text("Something")
                Text("Something")
            }
            VStack {
                CustomView(param: geo.size.width * 0.3)
                CustomView(param: geo.size.width * 0.3)
                CustomView(param: geo.size.width * 0.3)
            }.frame(width: geo.size.width, height: geo.size.height)

            Button(action: {
                print("Hey")
            }) {
                Text("Push me")
            }
        }.padding()
    }
  }
}

*Note: GeometryReader does not work inside a ScrollView because a ScrollView does not have its own geometry; it tries to read geometry from its content, so there would be a layout cycle.

SwiftUI GeometryReader causes breakage

总攻大人 2025-02-19 03:29:28

You can attach an API Gateway as a trigger for the Lambda function. That way, every time a client accesses the website, the client can send a fetch request to the REST endpoint, and the Lambda will then return the JSON object you are looking for. Let me know if that helps.
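
As a sketch only (the payload content is an assumption; API Gateway's Lambda proxy integration expects a statusCode/body response shape), a minimal Python handler might look like:

```python
import json

def lambda_handler(event, context):
    # Hypothetical payload; replace with whatever JSON your site needs.
    payload = {
        "message": "hello from Lambda",
        "path": event.get("path", "/"),  # proxy integration passes the request path
    }
    # Response shape required by API Gateway Lambda proxy integration.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }
```

The static site's JavaScript would then call the API Gateway endpoint with fetch() and parse the JSON body; remember to enable CORS on the endpoint if the site is served from a different origin.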

How do I trigger a Lambda function each time my S3 static site is called

总攻大人 2025-02-19 02:52:00

I solved it with the query below:

--create view mak_final_ageing_rpt as
select *,
    case when (a.sold - a.Above60) > 0 then 0
         else (a.Above60 - a.sold) end as ResultD60,
    case when (a.sold - a.Above60) > 0
         then (case when (a.sold - a.Above60 - a.days31to60) > 0 then 0
                    else a.days31to60 - (a.sold - a.Above60) end)
         else a.days31to60 end as result31to60,
    case when (a.sold - a.Above60 - a.days31to60) > 0
         then ((a.Above60 + a.days31to60 + a.d30) - a.sold)
         else a.d30 end as resultd30
from (
    select a.Itcodeprd, a.d30, a.days31to60, a.Above60, a.total, s.TotalStock,
           (a.total - s.TotalStock) as sold
    from mak_stock_Ageing a
    inner join [mak_stock_ageing _allstock] s on s.itcode = a.Itcodeprd
    where itcode in (15201, 8438, 12887, 15516)
) a


Deducting extra stock difference with FIFO in SQL

总攻大人 2025-02-18 18:34:53

You can reduce the problem to simply typing [ a == a ] or echo ==; both also produce this error. The reason is that = has a specific meaning to zsh, unless it is followed by white space.

You have three possible workarounds:

Either quote that parameter, i.e.

[ $1 "==" a ]

or use a single equal sign, i.e.

[ $1 = a ]

or use [[, which introduces a slightly different parsing context:

[[ $1 == a ]]

If operator in zsh

总攻大人 2025-02-18 16:26:16

In TypeScript 5, you can use the new @overload tag:

/**
 * @overload
 * @param {string} ticket
 * @param {string} userId
 *//**
 * @overload
 * @param {string} ticket
 * @param {string} firstname
 * @param {string} lastname
 *//**
 * @param {string} a
 * @param {string} b
 * @param {string} c
 */
function assignSlave(a, b, c) {}

For reference: https://devblogs.microsoft.com/typescript/announcing-typescript-5-0/#overload-support-in-jsdoc

Documenting an overloaded function/method

总攻大人 2025-02-18 04:42:59

lcov has branch coverage data disabled by default. Using the lcov_branch_coverage=1 flag enables it.

The following command properly merges the coverage reports with branch coverage data:

lcov --rc lcov_branch_coverage=1 \
  --add-tracefile ./coverage-unit/lcov-1.info \
  --add-tracefile ./coverage-unit/lcov-2.info  \
  --output-file ./coverage-unit/lcov.info

Jest v28 shards: merged coverage report in a single file is missing branch data
