
柳絮泡泡 2025-02-20 23:59:07


Try this:

git add -u to stage all your changes to tracked files, including deleted and updated files (newly created, untracked files are not staged by -u).

Then just run git commit -m 'your commit message' to commit.
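A minimal sketch you can run in a scratch directory (file names here are just for illustration):

```shell
set -e
cd "$(mktemp -d)"
git init -q .
git config user.email "you@example.com"
git config user.name "you"
echo one > keep.txt
echo two > gone.txt
git add . && git commit -qm "initial"

rm gone.txt                # a deletion...
echo updated >> keep.txt   # ...and a modification
git add -u                 # stages both; a new untracked file would be skipped
git commit -qm "remove gone.txt, update keep.txt"
git ls-files               # prints: keep.txt
```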

Add all deleted files to a commit with git

柳絮泡泡 2025-02-20 19:29:35


According to the source code install.sh#L115, it will install to $HOME/.bun/bin by default.

You can change the BUN_INSTALL env to change the installation dir. However, if you prefer to keep it simple, here is how I do it (use sudo ln if necessary):

curl -fsSL https://bun.sh/install | bash && \
  ln -s $HOME/.bun/bin/bun /usr/local/bin/bun

bun is now available globally.

You can use this approach in a Dockerfile as well, since it keeps your Dockerfile cleaner.
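In a Dockerfile that could look roughly like this (the base image, the installed packages, and the /root home path are my assumptions, not from the original answer):

```dockerfile
FROM debian:bookworm-slim

# curl, unzip and CA certs are needed by the bun install script
RUN apt-get update && apt-get install -y --no-install-recommends curl unzip ca-certificates \
 && curl -fsSL https://bun.sh/install | bash \
 && ln -s /root/.bun/bin/bun /usr/local/bin/bun

CMD ["bun", "--version"]
```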

bun not found after running the install script

柳絮泡泡 2025-02-20 17:48:33


You can check whether the input is True, true, False, or false by using the lower method.

Here is the code:

close = False
while not close:
    # Your code here
    if input().lower() in ["true", "false"]:
        close = True
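If you also need the actual bool out of the validated text, a small helper (the function name here is my own) can do the conversion:

```python
def parse_bool(text: str) -> bool:
    """Convert 'true'/'false' (any casing) into a real bool."""
    value = text.strip().lower()
    if value not in ("true", "false"):
        raise ValueError(f"expected 'true' or 'false', got {text!r}")
    return value == "true"

print(parse_bool("True"))    # True
print(parse_bool("false"))   # False
```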

How to get a boolean value as input from the user?

柳絮泡泡 2025-02-20 12:11:23

import random
import matplotlib.pyplot as plt

if __name__ == "__main__":
    # random.seed(1)
    cand = [(random.random()**.25, random.random()**.25) for i in range(1000)]
    # cand = np.random.randn(1000)
    fig, ax = plt.subplots(figsize=(10, 8))

    for (bestRouteTime, num_buses) in cand:
        x, y = (bestRouteTime*360, num_buses*10)
        TSPTW, = ax.plot(x, y, "bo")

    front = pareto_front(cand)  # user-defined helper
    Front, = ax.plot([x for (x, y) in front], [y for (x, y) in front], "ro")
    fig.canvas.draw()
    ax.set_ylabel('Number of Walking Buses')
    ax.set_xlabel('Travel Time (seconds)')
    # note: plt.title is a function; assigning to it (plt.title = ...) breaks it
    ax.set_title("Pareto Optimal Fronts (Modified TSPTW)")
    legend1 = ax.legend((TSPTW, Front), ["Generated Solutions", "Pareto Set"],
                        loc="upper left", shadow=True, fontsize='large')
    plt.gca().add_artist(legend1)
    plt.show()

New output
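The snippet assumes a user-defined pareto_front helper; a minimal sketch (assuming both coordinates are to be minimized, which is my reading of the plot) could look like this:

```python
def pareto_front(points):
    """Keep only points not dominated by any other point (minimizing both coordinates)."""
    front = [p for p in points
             if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
    return sorted(front)

print(pareto_front([(1, 5), (2, 2), (3, 4), (5, 1)]))  # [(1, 5), (2, 2), (5, 1)]
```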

Can someone help me with this legend problem?

柳絮泡泡 2025-02-20 00:41:49


We don't need to use BeautifulSoup to parse the data. Selenium has methods that will be sufficient for our use case.

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
import pandas as pd
    

chrome_path = r"C:\Users\hpoddar\Desktop\Tools\chromedriver_win32\chromedriver.exe"
s = Service(chrome_path)
url = 'https://blinkit.com/cn/masala-oil-more/whole-spices/cid/1557/930'
driver = webdriver.Chrome(service=s)
driver.get(url)

click_location_tooltip = driver.find_element(by=By.XPATH, value="//button[@data-test-id='address-correct-btn']")
click_location_tooltip.click()

cards_elements_list = driver.find_elements(by=By.XPATH, value="//a[@data-test-id='plp-product']")
card_link_list = [x.get_attribute('href') for x in cards_elements_list]

df = pd.DataFrame(columns=['info_category','info_sub_category','info_product_name','info_brand','info_shelf_life','info_country_of_origin','info_weight','info_expiry_date','price','mrp'])

for url in card_link_list:
  driver.get(url)
  try:
      WebDriverWait(driver, 15).until(EC.presence_of_element_located((By.CLASS_NAME, 'ProductInfoCard__BreadcrumbLink-sc-113r60q-5')))
  except TimeoutException:
      print(url + ' cannot be loaded')
      continue
  bread_crumb_links = driver.find_elements(by=By.XPATH, value="//a[@class='ProductInfoCard__BreadcrumbLink-sc-113r60q-5 hRvdxN']")
  info_category = bread_crumb_links[1].text.strip()
  info_sub_category = bread_crumb_links[2].text.strip()

  product_name = driver.find_element(by=By.XPATH, value="//span[@class='ProductInfoCard__BreadcrumbProductName-sc-113r60q-6 lhxiqc']")
  info_product_name = product_name.text

  brand_name = driver.find_element(by=By.XPATH, value="//div[@class='ProductInfoCard__BrandContainer-sc-113r60q-9 exyKqL']")
  info_brand = brand_name.text

  product_details = driver.find_elements(by=By.XPATH, value="//div[@class='ProductAttribute__ProductAttributesDescription-sc-dyoysr-2 lnLDYa']")
  info_shelf_life = product_details[0].text.strip()
  info_country_of_origin = product_details[1].text.strip()
  info_weight = product_details[7].text.strip()
  info_expiry_date = product_details[5].text.strip()

  div_containing_radio = driver.find_element(by=By.XPATH, value="//div[starts-with(@class, 'ProductVariants__RadioButtonInner')]//ancestor::div[starts-with(@class, 'ProductVariants__VariantCard')]")

  price_mrp_div = div_containing_radio.find_element(by=By.CSS_SELECTOR, value=".ProductVariants__PriceContainer-sc-1unev4j-9.jjiIua")
  mrp_price_list = price_mrp_div.text.split("₹")
  price = mrp_price_list[1]
  mrp = ''
  if(len(mrp_price_list) > 2):
    mrp = mrp_price_list[2]

  data_dict = {'info_category' : info_category, 'info_sub_category' : info_sub_category, 'info_product_name' : info_product_name, 'info_brand' : info_brand, 'info_shelf_life' : info_shelf_life, 'info_country_of_origin': info_country_of_origin, 'info_weight' : info_weight, 'info_expiry_date' : info_expiry_date , 'price' : price, 'mrp' : mrp}
  df_dict = pd.DataFrame([data_dict])
  df = pd.concat([df, df_dict])

Output:


P.S.: Please note that product_details is not exactly a structured element, just text which we would need to parse with a regex to generalize this for all URLs. Hence you will have to do some exception handling while indexing the product_details list, as you have done in your code.

Clicking multiple divs with the same class name using a loop

柳絮泡泡 2025-02-19 08:10:23


A regex that would work is the following:

^(.+?)( ?\1)+$

This will match:

  • ^: start of string
  • (.+?): the least amount of characters (at least one), followed by
  • ( ?\1)+: an optional space and the very same characters of the first group, multiple times
  • $: end of string

In order to extract your repeated pattern, it's sufficient to extract the content of Group 1.

Check the demo here.
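In Python the same pattern can be tried like this (re.fullmatch implies the ^…$ anchors):

```python
import re

m = re.fullmatch(r"(.+?)( ?\1)+", "abc abc abc")
print(m.group(1))  # abc
```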

Find and detect a pattern in a string

柳絮泡泡 2025-02-18 18:14:04


There is an Extract<T, U> utility type which acts like the opposite of the Exclude<T, U> utility type:

type AlphabetLike = 'a' | 'b' | 'c' | 'zeta' | 'beta' | 'gamma' | 'mu';

type Alphabet = Extract<AlphabetLike, 'a' | 'b' | 'c' | 'zzz'>;
// type Alphabet = "a" | "b" | "c"

Note that nothing forces the second argument (called U here) to either Extract<T, U> or Exclude<T, U> to contain only elements from the first argument (called T here). Hence that 'zzz' above doesn't have an effect on anything.

Playground link to code
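For symmetry, a quick sketch of Exclude on the same union (the Greek type name is my own):

```typescript
type AlphabetLike = 'a' | 'b' | 'c' | 'zeta' | 'beta' | 'gamma' | 'mu';

// Exclude removes the assignable members instead of keeping them.
type Greek = Exclude<AlphabetLike, 'a' | 'b' | 'c'>;
// type Greek = "zeta" | "beta" | "gamma" | "mu"

const sample: Greek = 'zeta'; // compiles; 'a' here would be a type error
console.log(sample);
```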

Is there a way in TypeScript to say "Include" (the opposite of Exclude)?

柳絮泡泡 2025-02-18 17:59:27


After a lot of research and trying, I was able to solve this with a few changes:

<template>
  <div>
    {{ countDown }}
  </div>
</template>

<script>
export default {
  data() {
    return {
      countDown: 10,
    };
  },
  created() {
    this.timer = 0;
  },
  // note: the option key must be `methods` (plural), not `method`
  methods: {
    countDownTimer() {
      clearInterval(this.timer);
      this.timer = setInterval(() => {
        if (this.countDown > 0) {
          this.countDown--;
        } else {
          clearInterval(this.timer);
        }
      }, 1000);
    },
    nextquestion(){
      this.countDown = 10;
      this.countDownTimer();
    },
  },
  mounted() {
    this.countDownTimer();
  },
};
</script>

My counter speeds up every time I call the setInterval method - VueJS

柳絮泡泡 2025-02-18 16:19:00


With isNaN you can check whether the value contains non-numeric characters (e.g. '5+').
If isNaN returns true, the value is not parsed as a number.

const handleChange = (e) => {
   ....
   ....
   var value = !isNaN(e.target.value) && parseInt(e.target.value) ?
               parseInt(e.target.value) : e.target.value;
   setFormData((currentValues) => ({ ...currentValues, [e.target.name]: value }));
   ...
   ...
};
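The same check can be tried standalone outside the handler (coerce is a hypothetical helper name):

```javascript
function coerce(value) {
  // parseInt succeeds and the value is numeric -> number, otherwise keep the string
  return !isNaN(value) && parseInt(value) ? parseInt(value) : value;
}

console.log(coerce("42"));  // 42 (a number)
console.log(coerce("5+"));  // "5+" (left as a string)
```

One caveat: because 0 is falsy, coerce("0") still returns the string "0"; checking !Number.isNaN(Number(value)) instead avoids that edge case.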

React select box number value passed as a string

柳絮泡泡 2025-02-18 12:55:54


It is because:

 mongodb-org-mongos : Depends: libssl1.1 (>= 1.1.0) but it is not installable
 mongodb-org-server : Depends: libssl1.1 (>= 1.1.0) but it is not installable
 mongodb-org-shell : Depends: libssl1.1 (>= 1.1.0) but it is not installable

To fix this, you have to install libssl1.1.

So I found a solution to fix it; just type the following commands:

Step - 1 : Open Terminal (Ctrl+Alt+T)
Step - 2 : type sudo -s press enter and type your root password
Step - 3 : type sudo -i and press enter
Step - 4 : type wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2_amd64.deb and press enter
Step - 5 : type sudo dpkg -i libssl1.1_1.1.1f-1ubuntu2_amd64.deb and press enter

You have successfully installed libssl1.1, and you can go on to install MongoDB by typing: sudo apt-get install -y mongodb

Unable to locate package mongodb-org

柳絮泡泡 2025-02-17 21:32:33


It's not a good idea to request the server again and again. Instead of polling an API, use web sockets and create an event. Whenever there is new data for the user, you will receive a notification using FCM, even if your app is terminated.

Please visit this page:
https://docs.flutter.dev/cookbook/networking/web-sockets

Request the server and show a notification when the app is closed

柳絮泡泡 2025-02-17 21:27:02


In the HTTP/1.1 standard, is it explicitly allowed or forbidden for a server to send a response before all the request's data have been received?

Yes, it's allowed, though it depends on the situation.

For example, the server is allowed to send 1xx responses back prior to receiving the entirety of the request. We can conclude also that the server is not merely allowed to send 1xx responses, but is in fact encouraged to do so. The existence of an entire class of HTTP responses is not an accident – they were added to the spec explicitly for use in sending responses back to the client prior to request completion.

The 1xx (Informational) class of status code indicates an interim response for communicating connection status or request progress prior to completing the requested action and sending a final response.

For 2xx responses, the server should send these only if the entire request was received. This implies that the server does not (and should not) respond with a 2xx if the request has not yet been received.

The 2xx (Successful) class of status code indicates that the client's request was successfully received, understood, and accepted.

There are other details about 3xx, 4xx, and 5xx, but with the above examples for 1xx and 2xx we can see that there are cases where the server can send a response before the request is complete, as well as cases where the server should not send a response before the request is received.

UPDATE:

Section 10.1.1 "100 Continue"
from the HTTP/1.1 RFC
contains this, which quite clearly describes the server responding mid-request:

The client SHOULD continue with its request. This interim response is used to inform the client that the initial part of the request has been received and has not yet been rejected by the server. The client SHOULD continue by sending the remainder of the request or, if the request has already been completed, ignore this response.
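The exchange described above can be sketched as a wire trace (hypothetical request; the interim 100 response arrives before any of the body has been sent):

```http
PUT /upload HTTP/1.1
Host: example.com
Content-Length: 1048576
Expect: 100-continue

HTTP/1.1 100 Continue

(client now sends the 1 MiB body)

HTTP/1.1 201 Created
```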

Does HTTP allow the server to send a response before it has received the entire body?

柳絮泡泡 2025-02-17 02:05:28


Try the following query:

count(last_over_time(fruits[30m] offset 30m)) by (name)
and
count(last_over_time(fruits[30m])) by (name)

It should return name labels, which existed on both time ranges - (now-60m .. now-30m] and (now-30m .. now].

It uses the following functions:

  • last_over_time: returns the last sample per time series over the given lookbehind window, so each inner query selects all series that existed during its 30-minute range (the offset 30m shifts one range back)
  • count: an aggregation function that counts the matching series, grouped by the name label
  • and: a binary operator that keeps only the series present on both sides of the expression

PromQL expression to find common labels in two intervals

柳絮泡泡 2025-02-16 06:28:19


key is simply a variable.

For Python 3.x:

>>> d = {'x': 1, 'y': 2, 'z': 3}
>>> for the_key, the_value in d.items():
...     print(the_key, 'corresponds to', the_value)
...
x corresponds to 1
y corresponds to 2
z corresponds to 3

For Python 2.x:

>>> d = {'x': 1, 'y': 2, 'z': 3} 
>>> for my_var in d:
>>>     print my_var, 'corresponds to', d[my_var]

x corresponds to 1
y corresponds to 2
z corresponds to 3

... or better (note that iteritems exists only in Python 2; in Python 3 use items as above),

d = {'x': 1, 'y': 2, 'z': 3} 

for the_key, the_value in d.iteritems():
    print the_key, 'corresponds to', the_value

Iterating over dictionaries using 'for' loops

柳絮泡泡 2025-02-16 01:04:01


You could use itertools.groupby to group the data, and then convert to a dataframe (note that with from itertools import groupby, the function is called as groupby(...), not itertools.groupby(...)):

from itertools import groupby
import pandas as pd

grps = [(k, [t[1] for t in g]) for k, g in groupby(res1_, key=lambda x: x[0])]
df = pd.DataFrame(grps, columns=['docid', 'secid'])

Output:

  docid   secid
0    z1  [1, 2]
1    x1     [1]
2    x2     [1]
3    x1     [3]
4    z1     [1]
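With illustrative input (res1_ below is made up to reproduce the output shown), the grouping step alone can be checked without pandas:

```python
from itertools import groupby

# Made-up input matching the grouped output above.
res1_ = [('z1', 1), ('z1', 2), ('x1', 1), ('x2', 1), ('x1', 3), ('z1', 1)]

# groupby only merges *consecutive* items with the same key, which is
# exactly what preserves the original order here.
grps = [(k, [t[1] for t in g]) for k, g in groupby(res1_, key=lambda x: x[0])]
print(grps)
# [('z1', [1, 2]), ('x1', [1]), ('x2', [1]), ('x1', [3]), ('z1', [1])]
```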

Convert tuples into grouped rows in a DataFrame without changing the order
