橙味迷妹

橙味迷妹 2025-02-21 00:38:44

It is a bit unclear why you are selecting from 2844047 records (select count(*) from salaries) and multiplying the number of results by 365 (for every day in the existing year), resulting in a temporary result of 1038077155+ records (+, because some years have more than 365 days...). Finally, you are calculating the avg_salary, which will always be the same for all of the 365 days in that year.

I do think there is no need to use generate_series for all individual dates, and this query should give the correct results:

select 
   t2.title, 
   avg(avg_salary) 
from (select 
         emp_no, 
         avg(salary) avg_salary 
      from salaries 
      group by emp_no) x 
inner join titles t2 on t2.emp_no=x.emp_no 
group by 1 order by 2 desc;

In the above query I did not select from_date and/or to_date, because that is left as a TODO for you.

Results of above query:

       title        |        avg
--------------------+--------------------
 Senior Staff       | 69119.550582564534
 Staff              | 66956.829691848575
 Manager            | 66044.384223847367
 Senior Engineer    | 59144.768351940127
 Engineer           | 57244.458456267581
 Technique Leader   | 57034.814130272188
 Assistant Engineer | 56963.530432485845
(7 rows)

EDIT:

First you need to get the salary of an employee with the correct starting date and ending date; this deals with an employee changing functions (titles) somewhere during the year:

select 
    s.emp_no, 
    s.salary ,
    t.title,
    case when t.from_date >=s.from_date  then t.from_date else s.from_date end StartSal,
    case when t.from_date >=s.from_date  then s.to_date else case when t.to_date>s.to_date then s.to_date else t.to_date end end EndSal
from salaries s 
left join titles t on s.emp_no =t.emp_no and s.from_date <t.to_date  and s.to_date >t.from_date 
-- where s.emp_no =10005 and extract(year from s.from_date) in(1995,1996,1997)
order by s.emp_no, StartSal

example output (with the WHERE enabled):

emp_no|salary|title       |startsal  |endsal    |
------+------+------------+----------+----------+
 10005| 88448|Staff       |1995-09-11|1996-09-10|
 10005| 88063|Staff       |1996-09-10|1996-09-12|
 10005| 88063|Senior Staff|1996-09-12|1997-09-10|
 10005| 89724|Senior Staff|1997-09-10|1998-09-10|

When you want to know the average salary on 1996-01-01, you can do:

select title, round(avg(salary),2)
from (
select 
    s.emp_no, 
    s.salary ,
    t.title,
    case when t.from_date >=s.from_date  then t.from_date else s.from_date end StartSal,
    case when t.from_date >=s.from_date  then s.to_date else case when t.to_date>s.to_date then s.to_date else t.to_date end end EndSal
from salaries s 
left join titles t on s.emp_no =t.emp_no and s.from_date <t.to_date  and s.to_date >t.from_date 
order by s.emp_no, StartSal
) sal
where '1996-01-01'::date between StartSal and EndSal
group by title;

output:

title             |round   |
------------------+--------+
Engineer          |55341.77|
Senior Engineer   |61998.56|
Manager           |67784.00|
Assistant Engineer|54692.42|
Staff             |64678.06|
Senior Staff      |72139.55|
Technique Leader  |58156.86|

P.S. There is one small error in this, because the to_date of one period is the same as the start date of the next period. I did not handle this in this script.
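One possible way to handle that boundary (a sketch, not part of the original answer): treat StartSal/EndSal as a half-open interval so a date that falls exactly on a boundary only matches one period per employee. In the outer query, the BETWEEN filter would become:

-- half-open interval: include StartSal, exclude EndSal
where '1996-01-01'::date >= StartSal
  and '1996-01-01'::date <  EndSal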

SQL: matching date ranges

橙味迷妹 2025-02-20 03:24:56

The first thing I noticed is that your Result class will be trying to bind to a property called "Datas", whereas the JSON string suggests it should be called "Data".
Based on the json string I think you should try to deserialize to:

public partial class Result
{
    public ArastirmaRaporListesiResult ArastirmaRaporListesiResult { get; set; }
}

public partial class ArastirmaRaporListesiResult
{
    public Datum[] Data { get; set; }
    public long ErrorCode { get; set; }
    public object ErrorMessage { get; set; }
    public long StatusCode { get; set; }
}

public partial class Datum
{
    public string Baslik { get; set; }
    public string DosyaAd { get; set; }
    public string EnstrumanKod { get; set; }
    public string KategoriAd { get; set; }
    public string KategoriKod { get; set; }
    public long RaporId { get; set; }
    public string RaporTarih { get; set; }
    public string Url { get; set; }
}

For reference I used https://app.quicktype.io/ to generate the class
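With those classes in place, deserializing could look like this (a sketch assuming Newtonsoft.Json; the json variable stands for your JSON string):

using System;
using Newtonsoft.Json;

// json is assumed to hold the response body shown in the question
var result = JsonConvert.DeserializeObject<Result>(json);
foreach (var item in result.ArastirmaRaporListesiResult.Data)
{
    Console.WriteLine($"{item.RaporTarih} - {item.Baslik}");
}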

Unable to deserialize a JSON array into an object

橙味迷妹 2025-02-18 16:00:46

There are two errors. The first is in the syntax of the initializer; it's

__init__    Not   _init_

Second, you should pass the root as an input.

And finally, change it up a little to separate the initialization from the other method.

from tkinter import * 
from tkinter import ttk 
from PIL import Image, ImageTk

class Face_Recognization_System:
    def __init__(self, root):
        self.root = root
    def change(self):
        self.root.title("Simple Prog")
        self.root.geometry("1530x790+0+0")

if __name__ == "__main__":

    root = Tk()
    obj = Face_Recognization_System(root)
    obj.change()

    root.mainloop()

Why is my Tkinter title bar not changing?

橙味迷妹 2025-02-18 10:59:45

There are several places where excessive memory is used, but everything within the method except the returned int[] is garbage collectable, so you should not have any concerns.

However, if you are reading in many values - say 100,000 or more - then the suggestions below will reduce the memory footprint needed.

Presizing the ArrayList before use avoids re-allocations when it grows:

int initialSize = 100000;
ArrayList<Integer> innt = new ArrayList<>(initialSize);

Avoid a StringBuffer per parsed Integer. Since an int has a maximum length, you can replace it with a final char[] doubus:

final char[] doubus = new char[Integer.toString(Integer.MAX_VALUE).length() + 2];
int dIndex = 0;
// ... Then append with:
    if (a != '|') {
        doubus[dIndex++] = a;
    } else {
        innt.add(Integer.valueOf(new String(doubus, 0, dIndex)));
        dIndex=0;
    }

Using Integer.valueOf with ArrayList<Integer> means you are auto-boxing many int values as Integer, only to extract them as int at the end. Swapping the ArrayList for an int[] and using Integer.parseInt, so the result is always the primitive type int, avoids many of those conversions:

int [] innt = new int[initialSize];
int iIndex = 0;
// Replace add with:
    innt[iIndex++] = Integer.parseInt(new String(doubus, 0, dIndex));

Putting this together, you should have the same output and hopefully less memory churn:

// Requires: java.io.BufferedInputStream, java.io.File, java.io.FileInputStream, java.io.IOException, java.util.Arrays
public static int[] findDoub(File file) throws IOException {
    int initialSize = 100000; // Use suitable value
    int [] innt = new int[initialSize];
    int iIndex = 0;

    try (BufferedInputStream buff = new BufferedInputStream(new FileInputStream(file))) {
        int index = buff.read();

        final char[] doubus = new char[Integer.toString(Integer.MAX_VALUE).length() + 2];
        int dIndex = 0;

        for (int i = 0; index != -1; i++) {
            char a = (char) index;
            if (i > 0) {
                if (a != '|') {
                    doubus[dIndex++] = a;
                } else {
                    // Grow int[] if needed:
                    if (iIndex == innt.length) {
                        innt = Arrays.copyOf(innt, innt.length + initialSize);
                    }
                    innt[iIndex++] = Integer.parseInt(new String(doubus, 0, dIndex));
                    dIndex=0;
                }
            }
            index = buff.read();
        }
    }
    // Return exact size int[] of current length:
    return Arrays.copyOf(innt, iIndex);
}

My method is using memory even after it finishes, for some reason

橙味迷妹 2025-02-18 10:38:48

You get a number that distinguishes which occurrence of a value this is for a group_id by subtracting the number of times that value has occurred for the group_id so far from the total number of rows for that group_id so far; a little thought will show that this value is always the same within a run of the same value, and always different for the same value appearing at a different time.

From that number, you can calculate your sequential section number. There may be a way to do that directly (with one fewer subquery), but I had to use an intermediate step of getting the date that a particular run of values for a group_id started.

SELECT id, group_id, date, value,
    dense_rank() over (partition by group_id order by group_value_incidence_start) section
FROM (    
    SELECT id, group_id, date, value,
        min(date) over (partition by group_id, value, group_value_incidence) group_value_incidence_start
    FROM (
        SELECT id, group_id, date, value,
            count(1) over (partition by group_id order by date) -
                count(1) over (partition by group_id, value order by date) group_value_incidence
        FROM test
    ) group_value_incidences
) group_value_incidence_starts
ORDER BY group_id, section

fiddle

Using MySQL window functions to number sections when a value changes

橙味迷妹 2025-02-18 07:41:18

You're sending a String instead of JSON; this is why Elasticsearch says it does not like the data it's receiving.

Try doing the request without JSON.stringify, like this (note that axios takes the request payload in the data field of its config):

app.get("/import/data", async (req, res) => {

  const current = {
    project_name: "java",
    delivery_manager: "Yawhe",
    project_manager: "Ruth",
  };

  await axios({
    url: "http://localhost:9200/sales-enable-data-index/_doc",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    auth: {
      username: "username",
      password: "password",
    },
    data: current, // axios sends the request payload via "data", not "body"
  })
      .then((response) => {
        return res.status(200).send(response);
    })
      .catch((err) => {
        return res.status(500).send(err);
    });
});

Unable to add data to Elasticsearch using axios in Node.js

橙味迷妹 2025-02-18 02:11:26

To convert .crt to .pfx, we need the private key and the CA certificate provided by the hosting provider. Below are the steps to do the conversion:

  • Download and install OpenSSL software from below link based on your
    system type https://slproweb.com/products/Win32OpenSSL.html

  • Run the following command on command prompt:

    openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key -in certificate.crt -certfile CACert.crt

    OR

    openssl pkcs12 -export -out certificate.pfx -inkey privateKey.txt -in certificate.crt -certfile CACert.crt

Here:

Certificate.crt = Your-domain-Name.crt

CACert.crt = NetworkSolutions_CA.crt

certificate.pfx is the new name of generated file.

PrivateKey can be in .key or .txt format
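Optionally, you can sanity-check the generated file before importing it (an extra step, not part of the original answer); openssl will prompt for the export password:

openssl pkcs12 -info -in certificate.pfx -noout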

After completing this process we now have the certificate.pfx file, so go to Server Certificates in IIS Manager.

There is an Import link button on the right side; click on it, select the converted certificate, enter the password that was entered at the time of creation of the .pfx file, and complete the process.

Now select your site in IIS, right-click on it, and select "Edit Bindings". In the new popup window select https as the type, set "Host name" to your domain name, leave all other fields as they are, and click OK to complete this process.

Now restart IIS and your certificate will be working with your site.

https://stackoverflow.com/a/12798206/13336642

How to get a PFX from a CRT file

橙味迷妹 2025-02-18 02:00:27

EDIT

Simplest method: use the encode/stringify method included in the querystring lib

import { encode } from 'querystring'

// From useRouter:
// const { query } = useRouter()

// From GetServerSidePropContext
// const { query } = ctx

const urlQueryString = encode(query)
const searchParams = new URLSearchParams(urlQueryString)

OLD ANSWER

Or you can use these helper methods to transform the query params from one type to another in a very simple way.

export function parsedUrlQueryToURLSearchParams(
  query: ParsedUrlQuery
): URLSearchParams {
  const searchParams = new URLSearchParams()
  for (const [key, value] of Object.entries(query)) {
    if (!value) continue
    if (Array.isArray(value)) {
      value.forEach((element) => {
        searchParams.append(key, element)
      })
    } else {
      searchParams.append(key, value)
    }
  }
  return searchParams
}

export function urlSearchParamsToParsedUrlQuery(
  searchParams: URLSearchParams
): ParsedUrlQuery {
  const query: ParsedUrlQuery = {}
  for (var [key, value] of searchParams.entries()) {
    query[key] = value
  }
  return query
}

If you want, you can also create another helper method to transform the query object from next/router to an encoded string

export function parsedUrlQueryToURLString(query: ParsedUrlQuery): string {
  const params = []
  for (const [key, value] of Object.entries(query)) {
    if (!value) continue
    if (Array.isArray(value)) {
      value.forEach((element) => {
        params.push(`${key}=${encodeURIComponent(element)}`)
      })
    } else {
      params.push(`${key}=${encodeURIComponent(value)}`)
    }
  }
  return params.join('&')
}
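For example, a quick usage sketch (the query object below is hypothetical; in a component it would come from useRouter(), in getServerSideProps from ctx.query):

import { ParsedUrlQuery } from 'querystring'

// hypothetical query object, e.g. for /products?tag=red&tag=blue&page=2
const query: ParsedUrlQuery = { tag: ['red', 'blue'], page: '2' }

const searchParams = parsedUrlQueryToURLSearchParams(query)
console.log(searchParams.toString())          // tag=red&tag=blue&page=2

console.log(parsedUrlQueryToURLString(query)) // tag=red&tag=blue&page=2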

How to convert a ParsedUrlQuery to URLSearchParams?

橙味迷妹 2025-02-17 22:01:02

I just want to add that I spent a week downgrading Babel and other packages to pre-2018 packages, only to realise that my problem was in a helper function within my own code that filtered for malicious HTML code.
@lifeisfoo mentions grepping for the string '?<!' above in node_modules, but I recommend also grepping the entire project.

FYI, the regex that was breaking Safari for me was '?<=!', which is also an unsupported lookbehind.
I tested my regex (?<=![)(.*?)(?=]) in the regex tester at https://www.regextester.com/ and the output says 'Lookbehind is not supported in Javascript'.

To end, I found Safari's console error message worthless: it was spread around the tens of thousands of lines of bundle.js, giving the impression that the issue was within the packages/dependencies, which it clearly was not.
I spent ages downgrading the packages only to find the same error message appear on a different line of the bundle.js code.
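If anyone hits the same pattern, one lookbehind-free rewrite is to capture the text between '![' and ']' with a plain group and read group 1 instead (a sketch; the input string is just an illustration):

// Safari-breaking pattern: (?<=![)(.*?)(?=])
// lookbehind-free equivalent: capture group 1 between the literal "![" and "]"
const input = 'some text ![alt text](img.png)'  // hypothetical input
const match = input.match(/!\[(.*?)\]/)
console.log(match ? match[1] : null)            // "alt text"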

React app throws a SyntaxError in Safari: Invalid regular expression: invalid group specifier name

橙味迷妹 2025-02-17 21:22:15

This is much simpler than you are making it. First, just use a CSS transition for the animation (you only need one class). Adjust your inputs by adding data-progress="X", where X is the amount of progress contributed by each selection. Your script just needs to listen for the change event, sum all the checked items' data-progress values, and use that to set the width.

$(document).ready(function() {
  const $radios = $('input[type="radio"]')

  $radios.change(function() {
    let progress_value = 0
    $('input[type="radio"]:checked').each(function() {
      progress_value += Number($(this).data('progress'))
    })
    console.log(progress_value)
    $(".progress-value").width(progress_value + '%')
  });

  // trigger change on one to register any pre-checked values
  $($radios[0]).change()
})
body {
  padding: 12px;
}

.col-6 label {
  border: 1px solid #333;
}

.col-6 input[type=radio]:checked+label {
  border: 2px solid blue;
}

.progress {
  background: rgba(255, 255, 255, 0.1);
  justify-content: flex-start;
  border-radius: 100px;
  align-items: center;
  position: relative;
  display: flex;
  height: 10px;
  width: 100%;
  margin-bottom: 10px;
}

.progress-value {
  box-shadow: 0 10px 40px -10px #fff;
  border-radius: 100px;
  background: #0d6efd;
  height: 30px;
  width: 0;
  transition: width 2s;
}
<head>
  <link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous">
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script>
</head>

<body>



  <div class="wrappy">
    Speed
    <div class="progress">
      <div class="progress-value"></div>
    </div>
  </div>




  <div class="row">
    <label>Group 1</label>
    <div class="col-6">
      <input type="radio" style="display:none;" id="js1" data-price="146.99" value="option1a" name="ONE" data-progress="50" checked>
      <label for="js1" onclick="">
      Option 1a (Default 50%)
    </label>
    </div>
    <div class="col-6">
      <input type="radio" style="display:none;" id="js2" data-price="123.99" value="option2a" name="ONE" data-progress="75">
      <label for="js2" onclick="">
      Option 2a (75%)
    </label>
    </div>

    <hr style="margin-top:24px;">

    <label>Group 2</label>

    <div class="col-6">
      <input type="radio" style="display:none;" id="js3" data-price="116.99" value="option1b" name="TWO" data-progress="0">
      <label for="js3" onclick="">
      Option 1b (Default 50%, but if option 2a selected then stay 75%)
    </label>
    </div>
    <div class="col-6">
      <input type="radio" style="display:none;" id="js4" data-price="93.99" value="option2b" name="TWO" data-progress="10">
      <label for="js4" onclick="">
      Option 2b (Should increase from group 1 selection)
    </label>
    </div>
  </div>

  <!-- bootstrap.bundle.min.js belongs down here after all the content, just before the closing body tag -->
  <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js" integrity="sha384-MrcW6ZMFYlzcLA8Nl+NtUVF0sA7MsXsP1UyJoMp4YLEuNSfAP+JcXn/tWtIaxVXM" crossorigin="anonymous"></script>
</body>

How to update the progress bar animation on click?

橙味迷妹 2025-02-17 21:17:12

It sends a CSV file, so you don't need BeautifulSoup.

You can use io with pandas.read_csv()

import requests
import pandas as pd
import io

url = 'https://droughtmonitor.unl.edu/DmData/GISData.aspx/?mode=table&aoi=county&date=2022-06-21'

response = requests.get(url)

fh = io.StringIO(response.text)  # create file in memory
df = pd.read_csv(fh)

print(df)

or you can use io with the csv module

import requests
import csv
import io

url = 'https://droughtmonitor.unl.edu/DmData/GISData.aspx/?mode=table&aoi=county&date=2022-06-21'

response = requests.get(url)

fh = io.StringIO(response.text)  # create file in memory
data = list(csv.reader(fh))

print(data)

EDIT:

You can even use the URL directly with pandas

import pandas as pd

url = 'https://droughtmonitor.unl.edu/DmData/GISData.aspx/?mode=table&aoi=county&date=2022-06-21'

df = pd.read_csv(url)

print(df)

EDIT:

Now you only need a list of dates and a for-loop to read all the CSVs and keep them in a list. Later you can use pandas.concat() to convert this list into a single DataFrame.

Pandas doc: Merge, join, concatenate and compare

Minimal working example:

import pandas as pd

# --- before loop ---

all_dates = ["2022-06-21", "2022-06-14", "2022-06-07"]
all_dfs = []

# url without `2022-06-21` at the end
url = 'https://droughtmonitor.unl.edu/DmData/GISData.aspx/?mode=table&aoi=county&date='

# --- loop ---

for date in all_dates:
    print('date:', date)
    df = pd.read_csv( url + date )
    all_dfs.append( df )

# --- after loop --- 

full_df = pd.concat(all_dfs)
print(full_df)

To get the list of dates you could scrape them from the webpage, but that may need Selenium instead of BeautifulSoup because the page uses JavaScript to add the dates to the page.

Or you can use DevTools (tab: Network, filter: XHR) to see which URL JavaScript uses to get the dates, and use requests to fetch them.

import requests

# without header `Content-Type` it sends `HTML` instead of `JSON`
headers = {
#    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:98.0) Gecko/20100101 Firefox/98.0',
#    'X-Requested-With': 'XMLHttpRequest',
#    'Referer': 'https://droughtmonitor.unl.edu/DmData/GISData.aspx/',
    'Content-Type': 'application/json; charset=utf-8',
}

url = 'https://droughtmonitor.unl.edu/DmData/GISData.aspx/ReturnDMWeeks'

response = requests.get(url, headers=headers)
#print(response.text)

data = response.json()

all_dates = data['d']
all_dates = [f"{d[:4]}-{d[4:6]}-{d[6:]}" for d in all_dates]

print(all_dates)

Result

['2022-06-21', '2022-06-14', '2022-06-07', ..., '2000-01-18', '2000-01-11', '2000-01-04']
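Putting the two parts together is then just a matter of feeding the scraped dates into the read_csv loop from the earlier example (a sketch; downloading every week's CSV will take a while):

import pandas as pd

# all_dates comes from the requests example above
base_url = 'https://droughtmonitor.unl.edu/DmData/GISData.aspx/?mode=table&aoi=county&date='

all_dfs = [pd.read_csv(base_url + date) for date in all_dates]
full_df = pd.concat(all_dfs)
print(full_df)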

How to convert soup into a DataFrame

橙味迷妹 2025-02-17 17:46:08

If I understand the problem correctly, you have two options:

  1. Remove the inactive class from the signup element (that would be enough to make it visible). You can optionally add the active class to it, as in the sketch after this list.

  2. Update the active style from display: block to display: block !important; to give it higher priority and override the inactive style while the inactive class is still on the element (not recommended)
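A minimal sketch of option 1 (the selector for the signup element is an assumption; adjust it to however your code gets hold of the element):

// assumption: the signup element can be selected like this
const signup = document.querySelector('.signup');
signup.classList.remove('inactive');  // enough to make it visible again
signup.classList.add('active');       // optional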

Why does element.classList work the first time but not the second time?

橙味迷妹 2025-02-16 21:58:33

I would argue that if the database is down, your application shouldn't be restarted; it should wait for the database node to be restarted and reconnect. That's why database health is not part of the liveness check, just part of the readiness check. Readiness covers the serving-requests part: the application will simply wait for the database to be back online. Restarting the application node will have no effect on the liveness of the application.
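If this is running on Kubernetes, the split usually looks something like the sketch below (the endpoint paths and port are assumptions, not taken from the question):

# liveness: only checks that the process itself is alive, never the database
livenessProbe:
  httpGet:
    path: /health/live    # assumed endpoint that does not touch the database
    port: 8080
# readiness: includes the database check, so the pod only stops receiving traffic
readinessProbe:
  httpGet:
    path: /health/ready   # assumed endpoint that includes the database check
    port: 8080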

Liveness check status does not return database availability

橙味迷妹 2025-02-16 16:14:26

Make sure that you have the name attribute on each field.

Example: name="email"
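For instance, a minimal sketch (the form action and the surrounding markup are just placeholders):

<form method="post" action="/signup">
  <!-- without name="email" this value would not be included in the submitted form data -->
  <input type="email" name="email">
  <button type="submit">Send</button>
</form>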

HTML form not submitting data?

橙味迷妹 2025-02-16 03:14:24

In Azure Pipelines, setting a timeout for the agent job will achieve what you need. Each job has a timeout. If the job has not completed in the specified time, the server will cancel the job. It will attempt to signal the agent to stop, and it will mark the job as canceled: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/runs?view=azure-devops#timeouts-and-disconnects

Set 1440 minutes for 24 hours.
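If you are using YAML pipelines, the equivalent is the timeoutInMinutes setting on the job (the job name and step are placeholders):

jobs:
- job: Build                 # placeholder job name
  timeoutInMinutes: 1440     # cancel the job after 24 hours
  steps:
  - script: echo "long-running work here"   # placeholder step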

Stop the pipeline after 24 hours
