对风讲故事

对风讲故事 2025-02-21 01:00:44

Visit this link; I hope it helps you:

http://jsfiddle.net/9SUWP/

<form name="Login" method="post" action="">
    <fieldset>
        <legend>Password Strength</legend>        
        <input type="password" name="pass" id="pass">
        <span id="passstrength"></span>
    </fieldset>
</form>       

$(document).ready(function() {

    $('#pass').keyup(function(e) {
        // No "g" flag here: a global regex keeps lastIndex state between .test() calls,
        // which would make repeated keyup checks alternate between true and false.
        var strongRegex = new RegExp("^(?=.{8,})(?=.*[A-Z])(?=.*[a-z])(?=.*[0-9])(?=.*\\W).*$");
        var mediumRegex = new RegExp("^(?=.{7,})(((?=.*[A-Z])(?=.*[a-z]))|((?=.*[A-Z])(?=.*[0-9]))|((?=.*[a-z])(?=.*[0-9]))).*$");
        var enoughRegex = new RegExp("(?=.{6,}).*");
        var password = $(this).val();
        if (false === enoughRegex.test(password)) {
            $('#passstrength').html('More Characters');
        } else if (strongRegex.test(password)) {
            // A jQuery object has no className property; set the class via attr()
            $('#passstrength').attr('class', 'ok').html('Strong!');
        } else if (mediumRegex.test(password)) {
            $('#passstrength').attr('class', 'alert').html('Medium!');
        } else {
            $('#passstrength').attr('class', 'error').html('Weak!');
        }
        return true;
    });

});

How do I add a password strength meter to an input field using JavaScript?

对风讲故事 2025-02-21 00:03:55

It's not completely clear how you want the arguments to be passed to the fixture. I can only guess that they are supposed to be parameters of your test function, in which case you need to define them in @pytest.mark.parametrize and mark them as indirect so that they are passed to the func1 fixture:

import pytest

@pytest.fixture()
def func1(request):
     return request.param[0] + request.param[1]

@pytest.mark.parametrize('func1', [(1, 3)], indirect=True)
def test_func1(func1):
     assert func1 == 4
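
If you need to check several argument pairs, the same indirect pattern scales to a list of tuples. A minimal sketch under that assumption (the expected values here are made up for illustration):

import pytest

@pytest.fixture()
def func1(request):
    # request.param receives one tuple per parametrized case
    return request.param[0] + request.param[1]

@pytest.mark.parametrize('func1, expected', [((1, 3), 4), ((2, 5), 7)], indirect=['func1'])
def test_func1(func1, expected):
    assert func1 == expected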

pytest: fixture variable not found

对风讲故事 2025-02-20 20:39:43

Are these dates in d/M/y or M/d/y format? It seems that Spark is not parsing these dates correctly.

Here I suggest another approach that avoids UDFs and pandas, which, as you may know, can lead to OOM errors when you are working with big data. You can also run this code to check whether your environment returns the expected output; I used a small date range so the results are easier to see.

# example df:

df = spark.createDataFrame(
    [
    ('A','01/01/2020','05/01/2020'),
    ('B','02/06/2021','04/06/2021')
    ],
    ["partition", "min_date", "max_date"]
)

df.show()

+---------+----------+----------+
|partition|  min_date|  max_date|
+---------+----------+----------+
|        A|01/01/2020|05/01/2020|
|        B|02/06/2021|04/06/2021|
+---------+----------+----------+

Here are the steps performed:

1 - transform min_date and max_date to date format

2 - calculate the unix_timestamp time difference between these two dates. Divide by 86400 so we can have the delta in days (1 day = 86400s)

3 - create a list of ','s, one for each day, explode it and then compute the corresponding dates

4 - group by partition, min and max dates, then collect the dates in each group into a list

import pyspark.sql.functions as F

df\
        .withColumn('min_date',F.to_date(F.col('min_date'), 'd/M/y'))\
        .withColumn('max_date',F.to_date(F.col('max_date'), 'd/M/y'))\
        .withColumn("timedelta", (F.unix_timestamp('max_date') - F.unix_timestamp('min_date'))/(86400))\
        .withColumn("repeat", F.expr("split(repeat(',', timedelta), ',')"))\
        .select("*", F.posexplode("repeat").alias("days_count", "val"))\
        .withColumn("interval_date_time_exp", (F.unix_timestamp("min_date") + F.col("days_count")*86400).cast('timestamp'))\
        .groupby('partition', 'min_date', 'max_date').agg(F.collect_list('interval_date_time_exp'))\
        .show(truncate = False)

+---------+----------+----------+---------------------------------------------------------------------------------------------------------+
|partition|min_date  |max_date  |collect_list(interval_date_time_exp)                                                                     |
+---------+----------+----------+---------------------------------------------------------------------------------------------------------+
|A        |2020-01-01|2020-01-05|[2020-01-01 00:00:00, 2020-01-02 00:00:00, 2020-01-03 00:00:00, 2020-01-04 00:00:00, 2020-01-05 00:00:00]|
|B        |2021-06-02|2021-06-04|[2021-06-02 00:00:00, 2021-06-03 00:00:00, 2021-06-04 00:00:00]                                          |
+---------+----------+----------+---------------------------------------------------------------------------------------------------------+

Hope this is the outcome you are looking for
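
As a side note: if you are on Spark 2.4 or later, the built-in sequence function can build the per-day array directly, without the repeat/split trick. A minimal sketch under that assumption, reusing the same example df (note it yields an array of dates rather than timestamps):

import pyspark.sql.functions as F

df\
    .withColumn('min_date', F.to_date('min_date', 'd/M/y'))\
    .withColumn('max_date', F.to_date('max_date', 'd/M/y'))\
    .withColumn('interval_dates',
                F.expr("sequence(min_date, max_date, interval 1 day)"))\
    .show(truncate=False)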

PySpark: timezone conversion in a UDF

对风讲故事 2025-02-20 19:04:50

A cleaner alternative (as suggested on Reddit): create a SQL table function that takes the filtering parameters and returns the filtered table:

create or replace secure function table_within(since date, until date)
returns table(i number, s string, d date)
as $$
select i, s, d
from mytable3
where d between since and until
$$;

Then you can use it with select * from table(function_name(since, until)):

select * 
from table(table_within('2019-01-01'::date, '2021-01-01'::date))


How can I force Snowflake users to filter results from a table by a column value (e.g. a date/timestamp)?

对风讲故事 2025-02-20 16:20:28

You can use switch as follows:

public String getFullType(String type) {
    // Guard against short input so substring(0, 3) cannot throw StringIndexOutOfBoundsException
    if (type == null || type.length() < 3) {
        return "not found";
    }
    switch (type.substring(0, 3)) {
        case "INT": return "integer";
        case "STR": return "string";
        case "DBL": return "double";
    }
    return "not found";
}

Replacing multiple if statements for assigning a value in Java

对风讲故事 2025-02-20 07:52:06

Concatenate() is used as Concatenate(**args)([layers]):

keras.layers.concatenate([layer_1, layer_2,layer_3], axis=1)

should be (note the capitalization)

keras.layers.Concatenate(axis=1)([layer_1, layer_2,layer_3])
# axis=1 is default, so you can just do
# keras.layers.Concatenate()([layer_1, layer_2,layer_3])

Then do the same for the other Concatenate().

I'm not sure what you want to do with this:

model_dfu_spnet=Dense(200, activation='relu')(concatenate_3)

But following the picture, that layer should have 32 neurons (it seems a bit small for that, but that is what the diagram shows):

model_dfu_spnet=Dense(32, activation='relu')(concatenate_3)

You don't put the activation function on Dropout

mode_dfu_spnet.add(Dropout(0.3,activation='softmax'))

but you probably want it on another Dense layer, with the number of classes as the number of neurons:

mode_dfu_spnet.add(Dropout(0.3))
mode_dfu_spnet.add(Dense(num_of_classes, activation="softmax", name="visualized_layer"))

I'm not used to building Sequential models with Concatenate (I usually use the Functional API), but it shouldn't be any different.
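
For reference, here is a minimal Functional-API sketch of the wiring described above. The input shape, branch sizes and number of classes are made up for illustration; the real values come from the flowchart in the question:

from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical inputs and branches, just to show Concatenate -> Dense -> Dropout -> softmax
inputs = keras.Input(shape=(64,))
branch_1 = layers.Dense(16, activation='relu')(inputs)
branch_2 = layers.Dense(16, activation='relu')(inputs)
branch_3 = layers.Dense(16, activation='relu')(inputs)

concatenate_3 = layers.Concatenate(axis=1)([branch_1, branch_2, branch_3])
x = layers.Dense(32, activation='relu')(concatenate_3)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(10, activation='softmax', name='visualized_layer')(x)  # 10 stands in for num_of_classes

model_dfu_spnet = keras.Model(inputs, outputs)
model_dfu_spnet.summary()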

Building a convolutional neural network from a flowchart

对风讲故事 2025-02-20 07:01:52

As confirmed by the author ( https://github.com/zakjan/leaflet-lasso/issues/50 ), the bug is in leaflet-lasso.

Strange bug when using leaflet-lasso with Leaflet curves

对风讲故事 2025-02-20 03:01:24

If you want to get the number from the inner element, it goes like this:

<b>(\d+)<\/b>

where $1 (the first group) will be the number 2321:
regexr.com/6p0st
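
The same idea in Python's re module, just to show how the first capture group is read (the HTML snippet is a made-up example):

import re

html = "<li>Some label <b>2321</b></li>"  # hypothetical input containing the inner element
match = re.search(r"<b>(\d+)</b>", html)
if match:
    print(match.group(1))  # -> 2321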

How to create a simple regex

对风讲故事 2025-02-19 22:22:58

My mistake was writing the Kibana environment variable wrong, such as

  • ELASTICSEARCH_HOST=http://elastic_image:9200

The correct one is

  • ELASTICSEARCH_HOSTS=http://elastic_image:9200

I was missing the "S" character, so Kibana didn't connect.

Docker Compose Elastic stack with X-Pack Security and Kibana

对风讲故事 2025-02-19 20:20:53

It looks like the if ($_SERVER['REQUEST_METHOD'] == 'GET') branch never sets $response. Unless there's more to the code, $response is undefined.

if ($_SERVER['REQUEST_METHOD'] == 'GET') {
    $db = new DbOperation();
    $users = $db->getHighestRatingWith31Results();
    saveResultOfQueryToArray($users, $chooseMethod);
    $response['error'] = false;
    $response['message'] = 'Whatever you want returned here.';
} else {
    $response['error'] = true;
    $response['message'] = "Invalid Request";
}

echo json_encode($response);

Something like that should do the trick! I'd also advise looking into HTTP response codes, like HTTP 405. https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/405

EDIT: So I see your update, and I'm sorry, but it raises further questions. Namely, what does $db->getHighestRatingWith31Results() do? The function saveResultOfQueryToArray() accepts one argument, but the usage gives it two arguments. Also, saveResultOfQueryToArray() calls mysqli_fetch_array(), which expects a mysqli_result instance.

Here's what I would recommend:

  • Use PDO instead of mysqli. I know you said this was old code, but PDO is fantastic. https://www.php.net/manual/en/class.pdo and https://phpdelusions.net/pdo
  • I probably wouldn't make $response a global var. I'd either return $response from saveResultOfQueryToArray() or look into passing by reference. https://www.php.net/manual/en/language.references.pass.php
  • Again, I'd suggest looking into HTTP response codes.
  • Finally, and this is hard, I know, but name your vars something a little easier to understand and document your code with comments. An old joke in computer science goes "The two hardest problems in computer science are cache invalidation, naming things, and off-by-one errors."
    "Documentation is a love letter that you write to your future self." - Damian Conway

Why did my PHP API stop working and no longer return JSON?

对风讲故事 2025-02-18 17:29:08

Never mind, I've found the information I needed in this answer (even though I searched before, I've only just now found that whole post and answer) - using EF Core IdentityContext and DbContext both for order management

So basically: if I want to use separate databases with entities that relate to each other, then I have to create a kind of dynamic relation based on some property that is "shared" between both entities. Then I can use one of those properties (which has the same value in both entities) to select the other entities that should be related to it.

If I wanted to store all of my data in one database, then I would just make my database context inherit from IdentityDbContext. But that wasn't the case here, because I wanted to keep the two data sets separate from each other.

And concerning modifying Identity users: it's all in the doc mentioned by Yiyi You. You just need to create a class that inherits from "IdentityUser" with the additional properties you want on your users, change the user type used in your database context (so in "AddDbContext") and in said database context class (modify the inheritance), then just migrate and update.

Should you store additional user information in your business database when using ASP.NET Identity?

对风讲故事 2025-02-18 16:49:47

You need to use Selenium to manipulate HTML elements. You can use code like this:

from selenium import webdriver
# set chromedriver.exe path
driver = webdriver.Chrome(executable_path="C:\\chromedriver.exe")
# implicit wait
driver.implicitly_wait(0.5)
# maximize browser
driver.maximize_window()
# launch URL
driver.get("https://www.tutorialspoint.com/index.htm")
# identify element
l = driver.find_element_by_xpath("//button[text()='Check it Now']")
# perform click
l.click()
print("Page title is: ")
print(driver.title)
# close browser
driver.quit()

Just check the docs on Selenium's methods and find the one that fits you best.
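
Note that newer Selenium releases (4.x) removed the find_element_by_* helpers and the executable_path argument, so the snippet above may fail on a current install. A rough equivalent in the Selenium 4 locator style, assuming chromedriver is on PATH or can be resolved by Selenium Manager:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.tutorialspoint.com/index.htm")
# same XPath as above, expressed with the By locator API
button = driver.find_element(By.XPATH, "//button[text()='Check it Now']")
button.click()
print(driver.title)
driver.quit()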

Executing a JavaScript function with Python requests_html

对风讲故事 2025-02-18 06:28:16

I have a very similar setup and it works for me. I believe the problem is that you are using "app" instead of "app_direct" in proxy_pass. This is my nginx proxy config (localhost instead of 127.0.0.1 or 0.0.0.0 should be fine):

location / {
   proxy_pass          http://127.0.0.1:8080/app_direct/mimosa/;

   proxy_http_version 1.1;
   proxy_set_header Upgrade $http_upgrade;
   proxy_set_header Connection "upgrade";
   proxy_read_timeout 600s;

   proxy_redirect    off;
   proxy_set_header  Host              $http_host;
   proxy_set_header  X-Real-IP         $remote_addr;
   proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
   proxy_set_header  X-Forwarded-Proto $scheme;

 }

Using the /app/ path seems to confuse ShinyProxy. If you run ShinyProxy via Java directly (with your setup), you will see requests that do not match the correct URI. You can also check the browser console (F12 in Chromium), which shows resources failing to load.

Not sure if this can be fixed easily with the nginx config.

Usually, the navbar at the top is not needed, so app_direct is a simple solution. Hope it helps. If not, can you post your entire nginx config and application.yml? (you can remove sensitive parts)

Configuring ShinyProxy on a subdomain

对风讲故事 2025-02-18 04:54:09

From the source code:

    rank_ : int
        Rank of matrix `X`. Only available when `X` is dense.
    singular_ : array of shape (min(X, y),)
        Singular values of `X`. Only available when `X` is dense.

Both are computed by scipy.linalg.lstsq, which in turn calls LAPACK ?gelsd. The singular value decomposition (SVD) is introduced in Lecture 4 of Trefethen, Lloyd N., and David Bau III, Numerical Linear Algebra, Vol. 50, SIAM, 1997.
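
A minimal sketch of where those attributes show up in practice, with made-up data (X is dense, so both attributes are populated):

import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])  # dense design matrix
y = np.array([1.0, 2.0, 3.0, 4.0])

reg = LinearRegression().fit(X, y)
print(reg.rank_)      # rank of X (2 here)
print(reg.singular_)  # singular values of X, from the underlying lstsq call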

What are .rank_ and .singular_ in LinearRegression?

对风讲故事 2025-02-17 23:09:57

There are great answers for the same question at https://softwareengineering.stackexchange.com/questions/127178/two-html-elements-with-same-id-attribute-how-bad-is-it-really.

One tidbit not mentioned there is what happens if there are several identical ids on the same page (which happens, even though it violates the standard): $("#foo") will only return the first matching element.

If you have to work around this (that's sad), you can use $("*#foo") which will convince jQuery to use getElementsByTagName and return a list of all matched elements.

Does the id attribute of an HTML element have to be unique within the whole page?
