半城柳色半声笛

半城柳色半声笛 2025-01-22 04:13:27

Are you trying to prefill an HTML form with data generated by the Node server?
If that's the case, you can use a templating engine like EJS, Pug, or Handlebars to render dynamic data, or you can simply make an AJAX request to the server to fetch the data and display it in your UI, the way frameworks do.
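For illustration, here is a minimal sketch of the templating approach with Express and EJS; the /form route, the template name, and the data shape are all hypothetical:

// server.js (hypothetical) - requires `npm install express ejs`
const express = require('express');
const app = express();

app.set('view engine', 'ejs'); // templates are looked up in ./views

app.get('/form', (req, res) => {
  // hypothetical data to prefill the form with
  res.render('form', { user: { name: 'Ada', email: 'ada@example.com' } });
});

app.listen(3000);

<!-- views/form.ejs (hypothetical) -->
<form method="POST" action="/submit">
  <input name="name" value="<%= user.name %>">
  <input name="email" value="<%= user.email %>">
</form>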

Want to push values from a Node.js server to an HTML form

半城柳色半声笛 2025-01-21 17:32:08

You also have to tell the bundler to use it in svelte.config.js:

import path from 'path';

/** @type {import('@sveltejs/kit').Config} */
const config = {
  kit: {
    vite: {
      resolve: {
        alias: {
          $routes: path.resolve('./src/routes')
        }
      }
    }
  }
};

export default config;
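With the alias registered, an import like this (hypothetical file name; the leading underscore keeps it from being treated as a route, see below) resolves against src/routes:

import helpers from '$routes/_helpers.js';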

That said, remember that all files in the routes folder (except for the ones starting with an underscore, _) are considered to be routes. So if you have a file /src/routes/db.js, a user can go to http://yoursite.domain/db.

There is very rarely any reason to import a file from there, and if you need to do so, it is likely not a route or endpoint and can safely be put in lib instead.

Update 31.01.2023

The above answer was written before a major overhaul of how routes work in SvelteKit. Nowadays routing is directory based, and only the file +page.svelte will actually create a route (so /src/routes/about/+page.svelte will give you the /about route). This means that you can safely add other files and components to the routes folder.
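For example, in a hypothetical layout like this, only the +page.svelte files become routes:

src/routes/
  +page.svelte           -> /
  about/
    +page.svelte         -> /about
    Header.svelte        -> not a route, safe to co-locate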

SvelteKit: how to reference the /routes folder from components and endpoints via an alias (like $routes)?

半城柳色半声笛 2025-01-21 16:54:18

It is easier to reason about if you first grab the values you care about and only then use view to interpret them as a matrix:

# setting up
>>> import torch
>>> import numpy as np
>>> x=np.arange(24) + 3 # just to visualize the difference between indices and values
>>> x
array([ 3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
       20, 21, 22, 23, 24, 25, 26])
# taking the values you want and viewing as matrix
>>> ft = torch.FloatTensor(x)
>>> ft[[13, 16, 19, 22]]
tensor([16., 19., 22., 25.])
>>> ft[[13, 16, 19, 22]].view(2,2)
tensor([[16., 19.],
        [22., 25.]])
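As a side note (not part of the original answer), you can also index with a 2-D index tensor and skip the view call entirely, since advanced indexing returns a result shaped like the index:

>>> idx = torch.tensor([[13, 16], [19, 22]])
>>> ft[idx]
tensor([[16., 19.],
        [22., 25.]])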

Is there an indexing method in PyTorch?

半城柳色半声笛 2025-01-21 14:24:42

Well, given you have noted in a comment that there is actually "another column, row number", you can now use that to delete the rows with higher row numbers.

First, a table of fake data to "work" on:

CREATE OR REPLACE TABLE too_much_data AS 
select *
from values
    ('name_2022/01/01.csv' ,262627, '2022-01-01','2022-01-02', 1),
    ('name_2022/01/01.csv' ,262627, '2022-01-01','2022-01-02', 2),
    ('name_2022/01/02.csv' ,262627, '2022-01-02','2022-01-03', 3),
    ('name_2022/01/02.csv' ,262627, '2022-01-02','2022-01-03', 4)
    t(filename, id, start1, end1, file_row_number);

Now let's look at that table:

SELECT * FROM too_much_data; 

FILENAME            | ID     | START1     | END1       | FILE_ROW_NUMBER
name_2022/01/01.csv | 262627 | 2022-01-01 | 2022-01-02 | 1
name_2022/01/01.csv | 262627 | 2022-01-01 | 2022-01-02 | 2
name_2022/01/02.csv | 262627 | 2022-01-02 | 2022-01-03 | 3
name_2022/01/02.csv | 262627 | 2022-01-02 | 2022-01-03 | 4

So the rows we want to delete are:

SELECT filename, id, start1, end1, file_row_number 
FROM too_much_data 
QUALIFY file_row_number <> min(file_row_number) over(partition by filename, id, start1, end1);

FILENAME            | ID     | START1     | END1       | FILE_ROW_NUMBER
name_2022/01/01.csv | 262627 | 2022-01-01 | 2022-01-02 | 2
name_2022/01/02.csv | 262627 | 2022-01-02 | 2022-01-03 | 4

Thus the DELETE can be:

DELETE FROM too_much_data as d
USING (
    SELECT filename, id, start1, end1, file_row_number 
    FROM too_much_data 
    QUALIFY file_row_number <> min(file_row_number) over(partition by filename, id, start1, end1)
) as td 
WHERE d.filename = td.filename and td.id = d.id and td.start1 = d.start1 and td.end1 = d.end1 and td.file_row_number = d.file_row_number;

number of rows deleted: 2

SELECT * FROM too_much_data;

FILENAME            | ID     | START1     | END1       | FILE_ROW_NUMBER
name_2022/01/01.csv | 262627 | 2022-01-01 | 2022-01-02 | 1
name_2022/01/02.csv | 262627 | 2022-01-02 | 2022-01-03 | 3
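A variant sketch (not from the original answer) that uses ROW_NUMBER() instead of comparing against the minimum; the QUALIFY keeps every row after the first within each group:

DELETE FROM too_much_data AS d
USING (
    SELECT filename, id, start1, end1, file_row_number
    FROM too_much_data
    QUALIFY ROW_NUMBER() OVER (PARTITION BY filename, id, start1, end1 ORDER BY file_row_number) > 1
) AS td
WHERE d.filename = td.filename AND td.id = d.id AND td.start1 = d.start1
    AND td.end1 = d.end1 AND td.file_row_number = d.file_row_number;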

Snowflake - delete duplicate rows based on a condition being met

半城柳色半声笛 2025-01-21 10:41:49

You can use script-based sorting for this. You provide a sort order and, based on it, the script will order your response. (Note that in newer Elasticsearch versions the script body parameter is called source rather than inline.)

POST index/_search
{
  "query": {
    "terms": {
      "id": ["12", "34", "6", "22"]
    }
  },
  "sort": {
    "_script": {
      "type": "number",
      "script": {
        "inline": "params.sortOrder.indexOf(doc['id'].value)",
        "params": {
          "sortOrder": ["12", "34", "6", "22"]
        }
      },
      "order": "asc"
    }
  }
}

How to force the order of a "terms" query?

半城柳色半声笛 2025-01-21 06:57:17

These two errors are completely unrelated.

The database connection error

The first issue, which I believe is the one you actually care about at the moment based on the title of the question, indicates that the Heroku CLI can't find a PostgreSQL client on your local machine.

The documentation makes the following recommendation:

Set up Postgres on Windows

Install Postgres on Windows by using the Windows installer.

Remember to update your PATH environment variable to add the bin directory of your Postgres installation. The directory is similar to: C:\Program Files\PostgreSQL\<VERSION>\bin. Commands like heroku pg:psql depend on the PATH and do not work if the PATH is incorrect.

If you haven't already installed Postgres locally, do so. (This is a good idea anyway as you should be developing locally and you'll probably need a database.)

Then make sure to add its bin/ directory to your PATH environment variable.

The Nodemon error

The second issue occurs because you are trying to use nodemon in production. Heroku strips development dependencies out of Node.js applications after building them, which normally makes sense. Nodemon is a development tool, not something that should be used for production hosting.

Depending on the contents of your package.json, this might be as simple as changing your start script from nodemon some-script.js to node some-script.js. Alternatively, you can add a Procfile with the command you actually want to run on Heroku:

web: node some-script.js
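For reference, a sketch of the corresponding package.json scripts (the some-script.js entry point mirrors the example above):

{
  "scripts": {
    "start": "node some-script.js",
    "dev": "nodemon some-script.js"
  }
}

Locally you can keep running npm run dev to get nodemon's auto-restart, while Heroku's npm start uses plain node.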

See also Need help deploying a RESTful API created with MongoDB Atlas and Express

Can't connect to my database add-on on Heroku

半城柳色半声笛 2025-01-21 05:40:45

Technically, when using Spring Boot for Apache Geode [equally for GemFire] (SBDG), particularly when running in a Pivotal CloudFoundry (PCF) environment that connects your Spring [Boot] app, or in your case your Spring Cloud Data Flow (SCDF) app, to a Pivotal Cloud Cache (PCC) service instance (i.e. GemFire in PCF), SBDG will automatically connect, authenticate and authorize your application once it has been pushed up to PCF.

NOTE: Pivotal CloudFoundry (PCF) is now known as VMware Tanzu Application Service (TAS), and Pivotal Cloud Cache (PCC) is now known as VMware Tanzu GemFire for VMs.

Of course, this assumes that the PCF/PCC environment, and specifically, the VCAP environment variables, were setup and configured properly when the PCC service instance was provisioned.

If you are not using Spring Boot for Apache Geode, then there is no "automatic" inspection of the PCF/PCC environment (VCAP env vars) and therefore, you become responsible for handling connections, auth, etc.

SBDG was specifically designed to handle these concerns across environments and provides auto-configuration to handle connections, auth and other concerns when a Spring Boot app is pushed up to PCF connected to PCC.

More details can be found in the documentation.

Additionally, the Getting Started Sample walks a user through building a Spring Boot app using Apache Geode in a local context, then switching to a non-managed client/server topology locally, and finally pushing and running the app in a managed context like PCF, connecting (and authenticating) with PCC.

All of this requires SBDG though.

I am not certain that SCDF uses SBDG under the hood. It may use only Spring Data for Apache Geode (SDG), in which case you may need to swap out the SDG dependency for SBDG.

There is most likely other work involved in this process as well, since it is unclear to me which specific GemFire/Geode objects (e.g. a cache instance) SCDF creates on your behalf (for sources/sinks) that may conflict with the auto-configuration provided by SBDG.

For instance, if SCDF creates a cache instance (i.e. ClientCache) for you, then it will override the SBDG auto-configuration that automatically creates a ClientCache instance by default. If this is the case, then once again you become responsible for security (auth), since security must be configured before a GemFire/Geode cache instance (e.g. ClientCache) is created.

NOTE: This is a GemFire/Geode requirement, not a Spring requirement.

Therefore, SBDG's auto-configuration arrangement is very deliberate in its precedence and ordering when being applied. If the SBDG auto-configuration is explicitly overridden either by you, or implicitly by another framework (e.g. SCDF), then you become responsible for knowing the expectations (internals) of GemFire/Geode configuration.

On the other hand, if you are certain SBDG is in the application classpath and being used properly, then perhaps this problem stems from the app using the wrong assigned user.

If your environment is rather complex, declaring multiple users with different sets of assigned permissions, then maybe your app needs to be run with a different user assignment, in which case, you should review this particular section of the documentation.

As always, you should make sure your Spring [Boot | SCDF] application runs correctly in a local, non-managed environment with a similar setup and configuration before running remotely in a managed environment like PCF.

The goals of SBDG have always been clear and SBDG is tested and proven to this effect.

Please share as many specifics (code, configuration, etc) as you can here in order for us to be able to triage this problem correctly.

Geode/GemFire/PCF/SCDF error - user not granted DATA:WRITE / DATA:READ permissions

半城柳色半声笛 2025-01-21 04:38:58

The problem is that you set up a bunch of jobs using schedule (one every 7 secs, one every 8, ...) and then run them all at once, so the actual number of jobs run will be much higher. If I understand you right, what you actually want is a random wait time of between 7 and 14 seconds before running each job.

You can do this easily without the schedule package (although there might be a method for it as well, not sure):

import random
import time
import webbrowser as wb

def job():
    wb.open('https://en.wikipedia.org/wiki/Main_Page')

# block until the user confirms they are ready
X = input("Are you ready? yes or no (answer in lower case): ")
while X != "yes":
    X = input("put yes!!!")

# run the job forever, waiting a random 7-14 seconds between runs
while True:
    job()
    time.sleep(random.randint(7, 14))
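As it turns out, the schedule package can do this as well: every(...).to(...) picks a fresh random interval in the given range after each run. A minimal sketch:

import time
import webbrowser as wb

import schedule

def job():
    wb.open('https://en.wikipedia.org/wiki/Main_Page')

# random interval between 7 and 14 seconds, re-drawn after each run
schedule.every(7).to(14).seconds.do(job)

while True:
    schedule.run_pending()
    time.sleep(1)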

Why is my Python code not executing i in range correctly?

半城柳色半声笛 2025-01-21 02:12:58

GoLand, or IntelliJ IDEA with the Go plugin, has a handy shortcut to declare a new variable:

  • Type : where you want to define the variable.
  • Press Tab to apply the Live Template.

A short video on how to apply a live template

You can configure an abbreviation or expand shortcut via Preferences/Settings | Editor | Live Templates | Go | Variable declaration.

IntelliJ: entering character combinations such as :=

半城柳色半声笛 2025-01-20 18:09:06

First, go to the CloudWatch logs in the region where the error occurred and, using Log Insights, find the error. You will get more details about why Lambda raised a 503.

I bet it's SQS rights.
As quoted here:
https://github.com/aws-amplify/amplify-hosting/issues/2175#issuecomment-900514998

Fixable like this:

TL;DR: Add SQS rights to your lambda function execution role.

1/ With your log error, you will get the lambda function name

2/ Go to the lambda function configuration, get the Role name, then click to edit it

3/ Edit the permission policy in JSON and add this:

{
    "Action": [
        "sqs:*"
    ],
    "Resource": [
        "arn:aws:sqs:us-east-1:*:*"
    ],
    "Effect": "Allow"
}

Review and apply; it should work.

AWS Amplify 503 error when revalidating with Next.js getStaticProps

半城柳色半声笛 2025-01-20 07:06:19

This is the exact transliteration. There's probably a much better way to do it, though; maybe I'll come back to it later if nobody else does.

defmodule Example do
  def run do
    input = ~w(A B C D E F)
    tuples = Enum.zip(input, tl(input))

    results = [false, true, true, false, true]
    combined = Enum.zip(tuples, results)

    first_item = hd(input)
    small_list = [first_item]
    big_list = []

    {small_list, big_list} =
      Enum.reduce(combined, {small_list, big_list}, fn
        {{_left, right}, true}, {small_list, big_list} ->
          {[right | small_list], big_list}

        {{_left, right}, false}, {small_list, big_list} ->
          {[right], [Enum.reverse(small_list) | big_list]}
      end)

    case small_list do
      [] -> big_list
      _ -> [Enum.reverse(small_list) | big_list]
    end
    |> Enum.reverse()
  end
end

Output:

[["A"], ["B", "C", "D"], ["E", "F"]]

Translating an imperative algorithm into Elixir

半城柳色半声笛 2025-01-20 05:58:42

Try something like this:

from selenium import webdriver

driver = webdriver.Chrome()

driver.get('https://www.abstractsonline.com/pp8/#!/10517/sessions/@timeSlot=Apr08/1')
# page_source = driver.page_source
elements = driver.find_elements_by_xpath('.//li[@class="result clearfix"]//h1')
for el in elements:
    id = el.get_attribute("data-id")
    print(id)
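Note that in Selenium 4 the find_elements_by_* helpers were deprecated (and later removed) in favor of the By locator API; the equivalent call with the same XPath is:

from selenium.webdriver.common.by import By

elements = driver.find_elements(By.XPATH, './/li[@class="result clearfix"]//h1')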

XPath - XPath is printing duplicate values

半城柳色半声笛 2025-01-19 12:37:05

You can use this function, which I built for one of my projects. I originally had it working at the day, hour, minute, and second level, and have reduced it for your needs.

from datetime import datetime

def compare_hours(constraint: dict) -> bool:
    """
    This is used to compare a given hours,minutes against current time
    The constraint must contain a single key and value.
    Accepted keys are: 
        1. before:
            eg {"before": "17:30"}
        2. after:
            eg: {"after": "17:30"}
        3. between:
            eg: {"between": "17:30, 18:30"}
        4. equal:
            eg: {"equal": "15:30"}

    Parameters
    ----------
    constraint : dict
        A dictionary with keys like before, after, between and equal with their corresponding values.

    Returns
    -------
    True if constraint matches else False

    """
    accepted_keys = ("before", "after", "between", "equal")
    assert isinstance(constraint, dict), "Constraint must be a dict object"
    assert len(constraint.keys()) == 1, "Constraint contains 0 or more than 1 keys, only 1 is allowed"
    assert list(constraint.keys())[0] in accepted_keys, f"Invalid key provided. Accepted keys are {accepted_keys}"
    key = list(constraint.keys())[0]
    time_split = lambda x: list(map(int, x.split(":")))
    try:
        if key == "before":
            hours, minutes = time_split(constraint.get(key))
            dt = datetime.now()
            if dt < dt.replace(hour=hours, minute=minutes):
                return True
            return False
        elif key == "after":
            hours, minutes = time_split(constraint.get(key))
            dt = datetime.now()
            if dt > dt.replace(hour=hours, minute=minutes):
                return True
            return False
        elif key == "between":
            dt = datetime.now()
            values = constraint.get(key).replace(' ', '').split(",")
            assert len(values) == 2, "Invalid set of constraints given for between comparison"
            hours, minutes = time_split(values[0])
            dt1 = dt.replace(hour=hours, minute=minutes)
            hours, minutes = time_split(values[1])
            dt2 = dt.replace(hour=hours, minute=minutes)
            assert dt2 > dt1, "The 1st item in between must be smaller than second item"
            if dt > dt1 and dt < dt2:
                return True
            return False
        else:
            hours, minutes = time_split(constraint.get(key))
            dt = datetime.now()
            if dt == dt.replace(hour=hours, minute=minutes):
                return True
            return False
    except Exception as e:
        print(e)

You can reuse the function as needed. For example:

if compare_hours({"before": "21:00"}) and compare_hours({"after": "18:30"}):
    "do something"

This is essentially the same as using the "between" option.

Comparing the current datetime to a time of day that includes minutes

半城柳色半声笛 2025-01-18 22:48:34

You declared your array but did not initialize it. That means each slot of the array does not hold 0; it can contain a random (indeterminate) value.

So doing a[part-1] += 1 adds 1 to a random value (note that if part is 0 or greater than k, you are out of bounds). You need to initialize every "slot" to 0.

Try to compile and execute this to understand:

#include <stdio.h>

int main()
{
    int arr[5]; // not initialized: contents are indeterminate
    for (int i = 0; i < 5; i++)
    {
        printf("%d\n", arr[i]); // likely prints garbage values
    }
}
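The fix is a one-element initializer; in C, any elements not listed are zero-initialized, so this zeroes the whole array:

#include <stdio.h>

int main()
{
    int arr[5] = {0}; // all five elements start at 0
    for (int i = 0; i < 5; i++)
    {
        printf("%d\n", arr[i]); // prints 0 five times
    }
}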

My code generates a random integer output and I don't know why

半城柳色半声笛 2025-01-18 22:12:29

First you're going to need to find the objects in your list that you want to compare. The easiest way is to just iterate over the list and check the title (assuming the title is stored on the objects in your list):

int firstComparisonValue = 0;
for (TopMangaData m : topMangaDataList) // topMangaDataList: your List<TopMangaData> (name assumed)
    if (m.getTitle().equals(firstComparison))
        firstComparisonValue = m.getRating();

int secondComparisonValue = 0;
for (TopMangaData m : topMangaDataList)
    if (m.getTitle().equals(secondComparison))
        secondComparisonValue = m.getRating();

Then just compare the values and print out which manga has the higher rating:

if (firstComparisonValue > secondComparisonValue) {
   System.out.println("First manga has higher rating");
}
else if (firstComparisonValue < secondComparisonValue) {
   System.out.println("Second manga has higher rating");
}
else {
   System.out.println("Both have the same rating!");
}

I want to be able to compare the ratings of two manga, but I don't know how to set it up so I can enter two titles and then get the scores to compare
