
天赋异禀 2025-02-20 19:37:55


I've found the bug.

In the JS class LayerAbs, I should use tf.sub instead of writing inputs[0] - inputs[1] directly.

Input 0 is incompatible with layer XXX (possibly an issue with converting a Keras Lambda into a custom TensorFlow.js class)

天赋异禀 2025-02-20 18:44:27


Let's pick a minimal reproducible example to showcase how various constraints can be used to filter rows from the dataframe. For simplicity, let's pick a two-column dataframe with 20 periodically repeating entries. With the multiplier N that concatenates the two lists 50,000 times each, this frame has exactly 1 million rows.

# Example dataframe
N = 50000
df_big = pd.DataFrame({'col1' : [7,2, 24,1, 27,15,7,27,26,10,7,2,10,8,4,5,17,10,3,28]*N, 
                       'col2' : [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]*N})

Now, we can apply various constraints (equality, inequality, or function-based), combined via the and (i.e. &) and or (i.e. |) operators as seen below. After sequentially slicing out the rows for which any of the constraints does not hold, we can apply .mean() to obtain the desired result.

# inequality constraints
df_big = df_big[(df_big['col1'] > 5) & (df_big['col1'] <= 15)]

# equality constraints
df_big = df_big[(df_big['col1'] == 10) | (df_big['col1'] == 7)]

# function-based condition
df_big = df_big[df_big['col1'].apply(isPrime)]

df_mean = df_big.mean()

which yields

col1    7.000000
col2    6.333333
dtype: float64

How long does this take? Less than a single second. Precisely 765 ms.

EDIT: Here is the method isPrime() used above:

def isPrime(x: int):
    assert x > 0
    if x == 1:
        return False
    elif x == 2:
        return True
    else:
        return not any(x % i == 0 for i in range(2, x))
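Since col1 repeats only 20 distinct values, a (hypothetical) faster variant tests primality once per distinct value and then filters with .isin() instead of calling .apply() a million times. A plain-Python sketch of the precomputation:

```python
# Precompute primality once for the 20 distinct values instead of
# evaluating isPrime row by row. In pandas you would then filter with
# df_big[df_big['col1'].isin(prime_values)] (assumed equivalent).
def is_prime(x: int) -> bool:
    assert x > 0
    if x < 2:
        return False
    # Trial division up to sqrt(x) is enough to detect a factor.
    return all(x % i for i in range(2, int(x ** 0.5) + 1))

values = [7, 2, 24, 1, 27, 15, 7, 27, 26, 10, 7, 2, 10, 8, 4, 5, 17, 10, 3, 28]
prime_values = {v for v in set(values) if is_prime(v)}
print(sorted(prime_values))  # [2, 3, 5, 7, 17]
```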

Filtering a large CSV file and then averaging the filtered values

天赋异禀 2025-02-19 16:31:10


It's possible. Just a slight modification to your dictionary is needed. Let me know if you have issues.

dictionary

d={'1a': "%, head flow", '2a': "%, head flow", '3a': "%, mass flow"}

Create the map expression (imports shown for completeness):

from itertools import chain
from pyspark.sql.functions import create_map, lit, lower, split

m_expr1 = create_map([lit(x) for x in chain(*d.items())])

Map the values and split the string into a list:

df.withColumn('tag1', split(m_expr1[lower(df['tag'])], ',')).show()
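The chain(*d.items()) idiom is what makes this work: it flattens the dictionary into the alternating key/value sequence that create_map expects. In plain Python (no Spark needed to see the shape):

```python
from itertools import chain

d = {'1a': "%, head flow", '2a': "%, head flow", '3a': "%, mass flow"}

# Flatten {'k': 'v', ...} into ['k', 'v', 'k', 'v', ...] -- the
# alternating key/value argument order that create_map expects.
flat = list(chain(*d.items()))
print(flat[:4])  # ['1a', '%, head flow', '2a', '%, head flow']
```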

Updating nested values with a dictionary where each key has multiple values

天赋异禀 2025-02-18 13:31:43


The documentation @Slaw provided helped me understand why I can do something like this:

implementation("group:artifact:1.0.0")

but not

myCustomConfig("group:artifact:1.0.0")

implementation being declared that way is supported because it comes from a plugin (the Kotlin/Java plugins).

The simplest way to associate a dependency with myCustomConfig would be to do this (see these docs):

"myCustomConfig"("group:artifact:1.0.0")
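For context, a minimal build.gradle.kts sketch showing both forms (myCustomConfig is the hypothetical configuration name from the question; coordinates are placeholders):

```kotlin
// Declare the custom configuration as a delegated property...
val myCustomConfig by configurations.creating

dependencies {
    // ...then either the string-invoke form discussed above:
    "myCustomConfig"("group:artifact:1.0.0")
    // ...or, since the property is in scope in the same script,
    // the type-safe accessor also works:
    myCustomConfig("group:artifact:1.0.0")
}
```

The string-invoke form is the fallback for configurations the Kotlin DSL cannot see statically (e.g. ones created by another script or plugin).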

Why do I need to quote my custom Gradle configuration in square brackets?

天赋异禀 2025-02-18 06:07:34


Nope, that is not possible, as schedule is a block with arguments, not an argument itself. Maps are aggregate types, and they are made of primitive types (e.g., numbers in this case). A more detailed explanation of primitive and aggregate types, with examples, can be found in [1] (h/t: Matt Schuchard). In such cases, I prefer to do something like this:

variable "schedule" {
  type = object({
    hours = number
    minutes = number
  })
  description = "Variable to define the values for hours and minutes."
  
  default = {
    hours = 0
    minutes = 0
  }
}

Then, in the resource:

resource "azurerm_data_factory_trigger_schedule" "sfa-data-project-agg-pipeline-trigger" {
  name            = "aggregations_pipeline_trigger"
  data_factory_id = var.data_factory_resource_id
  pipeline_name   = "my-pipeline"

  frequency = var.frequency
  schedule {
    hours   = [var.schedule.hours]
    minutes = [var.schedule.minutes]
  }
}

[1] https://www.terraform.io/plugin/sdkv2/schemas/schema-types#typemap

Replacing a Terraform block with a variable

天赋异禀 2025-02-18 05:22:34


Chris Redford's answer also works for Qt containers (of course). Here is an adaptation (notice that the const overloads return constBegin() and constEnd(), respectively, as const_iterators):

class MyCustomClass{
    QList<MyCustomDatatype> data_;
public:    
    // ctors,dtor, methods here...

    QList<MyCustomDatatype>::iterator begin() { return data_.begin(); }
    QList<MyCustomDatatype>::iterator end() { return data_.end(); }
    QList<MyCustomDatatype>::const_iterator begin() const{ return data_.constBegin(); }
    QList<MyCustomDatatype>::const_iterator end() const{ return data_.constEnd(); }
};

How to make my custom type work with range-based for loops

天赋异禀 2025-02-18 04:58:11

As mentioned in the documentation, your backend EC2 servers can send messages to the connected clients directly via the @connections API. This documentation page walks you through how to do that.

For this, you'll need to add the connectionId to the header. See this answer on how to do so.

System design: AWS chat application using API Gateway / SQS - how does the server send messages to recipients?

天赋异禀 2025-02-18 01:38:57


Have you whitelisted your public IP in the settings of your GCP MySQL database?

You can do that here:
Your GCP Project ⇾
SQL (Press the three horizontal bars in the left corner) ⇾
Connections ⇾
Authorised networks ⇾ ADD NETWORK

You can use this website to get your public IP.

Trying to connect a simple Java program to a Google Cloud Platform MySQL database

天赋异禀 2025-02-17 22:19:41


You need to use the https://pub.dev/packages/flutter_svg package for SVG images.

For example:

final Widget networkSvg = SvgPicture.network(
  'http://www.w3.org/2000/svg',
  semanticsLabel: 'A shark?!',
  placeholderBuilder: (BuildContext context) => Container(
      padding: const EdgeInsets.all(30.0),
      child: const CircularProgressIndicator()),
);

Flutter SVG image fails to load

天赋异禀 2025-02-17 03:31:30


You missed the limit option of the split function. If you give it a value of 2, the resulting list will have at most 2 entries:

val result = "Bladder Infection".split("i", ignoreCase = true, limit = 2)
// result == ["Bladder ", "nfection"]

Splitting a string on a character in idiomatic Kotlin

天赋异禀 2025-02-16 18:18:53


This would be a good time to use np.where()

import pandas as pd
import numpy as np

name_list = ['James', 'Sally', 'Sarah', 'John']
df = pd.DataFrame({
    'Names' : ['James', 'Roberts', 'Stephen', 'Hannah', 'John', 'Sally']
})

df['ColumnB'] = np.where(df['Names'].isin(name_list), 1, 0)
df

How to loop through a column and compare each value to a list

天赋异禀 2025-02-16 17:35:25


Use json.load(file) instead of file.read()

import json
with open("a.txt", encoding="UTF8") as f:
    a = json.load(f)

print(type(a)) # <class 'list'>
print(a[0])
print(a[1])

Documentation: https://docs.python.org/3/library/json.html
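A self-contained version of the idea, using tempfile so the sketch runs anywhere (the real a.txt is assumed to contain a JSON list of dicts):

```python
import json
import tempfile

# Write a sample "txt" file containing a JSON list of dicts...
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False,
                                 encoding="UTF8") as f:
    json.dump([{"id": 1}, {"id": 2}], f)
    path = f.name

# ...then json.load() parses the whole file into Python objects,
# where file.read() would only give you the raw string.
with open(path, encoding="UTF8") as f:
    a = json.load(f)

print(type(a))  # <class 'list'>
print(a[0])     # {'id': 1}
```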

How to read a list of dicts from a txt file?

天赋异禀 2025-02-16 09:50:55


I used the csv module.

import json
import csv
import os

PATH = os.path.dirname(__file__)    # Get the directory of this script

with open(os.path.join(PATH, "input.json"), "r") as file:    # Access the data
    json_data = json.load(file)
    json_data = [item for item in json_data[0]]    # The input is a list wrapping the records

with open(os.path.join(PATH, "output.csv"), "w+", newline='') as file:
    writer = csv.writer(file)
    headers = [list(data.keys()) for data in json_data]     # Divide the data into
    rows = [list(data.values()) for data in json_data]      # headers and rows
    for i in range(len(json_data)):
        writer.writerow(headers[i])    # Write everything
        writer.writerow(rows[i])

If you don't want to have headers, just remove this line: writer.writerow(headers[i])

Here is the data I get as output:

id,networkId,name,applianceIp,subnet,fixedIpAssignments,reservedIpRanges,dnsNameservers,dhcpHandling,dhcpLeaseTime,dhcpBootOptionsEnabled,dhcpOptions,interfaceId,networkName
1,L_1111,VLAN1,1.1.1.1,1.1.1.0/24,{},[],upstream_dns,Run a DHCP server,1 day,False,[],1,NETWORK1
id,networkId,name,applianceIp,subnet,fixedIpAssignments,reservedIpRanges,dnsNameservers,dhcpHandling,interfaceId,networkName
2,L_2222,VLAN2,2.2.2.2,2.2.2.0/24,{},[],upstream_dns,Do not respond to DHCP requests,2,NETWORK2

Nested list of dictionaries to a CSV file in Python

天赋异禀 2025-02-16 01:18:22


How about defining base_str = '00.00.00' and then padding each string in the column with the tail of base_str:

base_str = '00.00.00'
df = pd.DataFrame({'ms_str':['72.1','61','25.73.20','33.12']})
print(df)

df['ms_str'] = df['ms_str'].apply(lambda x: x+base_str[len(x):])
print(df)

Output:

     ms_str
0      72.1
1        61
2  25.73.20
3     33.12


     ms_str
0  72.10.00
1  61.00.00
2  25.73.20
3  33.12.00
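The padding itself is plain string slicing, independent of pandas; the lambda above is equivalent to this helper:

```python
base_str = '00.00.00'

def pad(x: str) -> str:
    # Append the part of the template beyond the current length;
    # strings already 8 characters long get an empty suffix.
    return x + base_str[len(x):]

print(pad('72.1'))      # 72.10.00
print(pad('25.73.20'))  # 25.73.20
```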

Left-justifying a pandas string column with a pattern

天赋异禀 2025-02-14 18:42:28


If I understood you correctly, your goal is to remove duplicate events having the same id and status, retaining only the one with the latest time.

The issues with your code:

  • You're using identity comparison for strings, e.getStatus() == "InProgress", instead of equals().
  • It would be much cleaner if startTime would be of type LocalDateTime and not a String in the first place.
  • It's not clear whether you want to modify the existing list or generate a new one as the result of the method execution (in the code you are creating a new list and then immediately reassigning the variable). If you need to modify the existing list, it would be more performant not to remove elements one by one, but instead to generate a HashSet of events that should be retained and then remove all other events in one go using retainAll(). That turns the worst-case quadratic time into linear.
  • Method name removeduplicates() and parameter name re are not aligned with Java naming conventions.

Assuming that equality of events according to the equals/hashCode implementation isn't based solely on id and status, we can build a map with keys created by concatenating id and status, and then generate the resulting list from the values of this map.

The code below generates a new list; if you need to modify the one that was passed to the method, replace Collectors.toList() with Collectors.toSet() and apply retainAll() on the initial list of events.

public List<Event> removeDuplicates(List<Event> events) {
    DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"); // MM = month; mm would be minutes
    
    return events.stream()
        .filter(e -> e.getId() != null && !e.getId().isEmpty() && e.getStatus().equals("InProgress"))
        .collect(Collectors.groupingBy(
            e -> e.getId() + ":" + e.getStatus(),
            Collectors.maxBy(Comparator.comparing(e -> LocalDateTime.parse(e.getStartTime(), formatter)))
        ))
        .values().stream()
        .filter(Optional::isPresent)
        .map(Optional::get)
        .collect(Collectors.toList());
}
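A minimal harness for the approach (the Event class here is a hypothetical stand-in with just the three fields the stream touches; the lambda parameter is typed explicitly to help inference inside the nested collectors):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.*;
import java.util.stream.Collectors;

public class Dedup {
    // Hypothetical minimal Event: id, status, startTime as "yyyy-MM-dd HH:mm:ss".
    static class Event {
        final String id, status, startTime;
        Event(String id, String status, String startTime) {
            this.id = id; this.status = status; this.startTime = startTime;
        }
        String getId() { return id; }
        String getStatus() { return status; }
        String getStartTime() { return startTime; }
    }

    static List<Event> removeDuplicates(List<Event> events) {
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        return events.stream()
            .filter(e -> e.getId() != null && !e.getId().isEmpty()
                      && e.getStatus().equals("InProgress"))
            .collect(Collectors.groupingBy(
                e -> e.getId() + ":" + e.getStatus(),
                // Keep only the event with the latest parsed start time per key.
                Collectors.maxBy(Comparator.comparing(
                    (Event e) -> LocalDateTime.parse(e.getStartTime(), formatter)))))
            .values().stream()
            .filter(Optional::isPresent)
            .map(Optional::get)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Event> events = List.of(
            new Event("A", "InProgress", "2025-01-01 10:00:00"),
            new Event("A", "InProgress", "2025-01-01 12:00:00"),  // later, kept
            new Event("B", "Done",       "2025-01-01 09:00:00")); // filtered out
        List<Event> result = removeDuplicates(events);
        System.out.println(result.size());                 // 1
        System.out.println(result.get(0).getStartTime());  // 2025-01-01 12:00:00
    }
}
```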

Removing objects with the same ID and status
