不再让梦枯萎


不再让梦枯萎 2025-02-07 05:44:58


The reason it returns a unique number is that the find callback searches from both ends on each iteration: indexOf scans from the start and lastIndexOf scans from the end, so if both return the same index, there is no duplicate of the current item anywhere else in the array.
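The same idea can be sketched in Python (a hypothetical `find_unique` helper; the original answer is about JavaScript's `find` with `indexOf`/`lastIndexOf`):

```python
def find_unique(nums):
    # For each item, compare its first index (forward scan) with its
    # last index (backward scan); they agree only when the value
    # occurs exactly once in the list.
    for n in nums:
        first = nums.index(n)
        last = len(nums) - 1 - nums[::-1].index(n)
        if first == last:
            return n

print(find_unique([2, 2, 7, 3, 3]))  # 7
```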

Find a unique number in an array

不再让梦枯萎 2025-02-06 15:17:08


It might be easier if you setup your data frame in a long form.

Try this

## app.R ##
library(shiny)
library(shinydashboard)
library(plotly)
library(tidyr)
library(dplyr)   # rename(), mutate(), filter(), %>%
library(ggplot2) # ggplot(), aes()
BRAND<-c("CHOKIS","CHOKIS","CHOKIS","CHOKIS","CHOKIS","CHOKIS","LARA CHOCO CHIPS","LARA CHOCO CHIPS","LARA CHOCO CHIPS")
BRAND_COLOR<-c("#8050f0","#8050f0","#8050f0","#8050f0","#8050f0","#8050f0","#f050c0","#f050c0","#f050c0")

x<-c(23,34,56,77,78,34,34,64,76)
y<-c(43,54,76,78,87,98,76,76,56)
x1<-c(23,34,56,75,78,34,34,64,76)
y1<-c(33,54,76,76,87,98,76,76,56)
x2<-c(53,34,56,77,78,34,34,84,76)
y2<-c(63,54,76,78,87,98,76,76,86)
r<-c(58,46,76,76,54,21,69,98,98)

mt <- c('Sell Out','Gross Sales','Gross Profit')

graph1.data<-data.frame(BRAND,BRAND_COLOR,x,y,x1,y1,x2,y2)

ui <- dashboardPage(
  dashboardHeader(),
  dashboardSidebar(
    selectInput("metric","Metric",c('Gross Sales','Gross Profit','Sell Out'),multiple = T,selected = "Sell Out")
  ),
  dashboardBody(
    plotlyOutput("line")
  )
)

server <- function(input, output) {

  mydata <- eventReactive(input$metric,{
    
    df <- graph1.data %>% rename(x0=x,y0=y) %>% 
      dplyr::mutate(row = 1:n(),r=r) %>% 
      pivot_longer(cols = -c(row,BRAND,BRAND_COLOR,r))   %>% 
      separate(col = name, into = c("var", "series"), sep = 1) %>%
      pivot_wider(id_cols = c(BRAND,BRAND_COLOR,r,row, series), names_from = "var", values_from = "value") %>% 
      dplyr::mutate(metric=ifelse(series==0,mt[1],ifelse(series==1,mt[2],mt[3]))) %>% 
      dplyr::mutate(label=ifelse(series==0,paste(BRAND,mt[1]),ifelse(series==1,paste(BRAND,mt[2]),paste(BRAND,mt[3])))) %>% print(n=Inf)
    
    df %>% dplyr::filter(metric %in% input$metric)
  })

  myplot <- reactive({
    req(mydata(),input$metric)
    brand.colors <- mydata()$BRAND_COLOR
    names(brand.colors) <- mydata()$label
    
    if(length(input$metric) == 1) {
      p <- mydata() %>% ggplot2::ggplot(aes(x, y, color = label))
    }else {
      p <- mydata() %>% ggplot2::ggplot(aes(x, y, group=metric, color = label))
    }
    p <- p + ggplot2::geom_line(aes(x)) +
      # warnings suppressed on text property
      suppressWarnings(ggplot2::geom_point(aes(x, y, size = r), show.legend = TRUE)) +
      ggplot2::scale_color_manual(values = brand.colors)
    p
  })

  output$line <- renderPlotly({
    req(myplot())
    ggplotly(myplot())
  })
}

shinyApp(ui, server)

Set legend labels based on the input selection in a Shiny app

不再让梦枯萎 2025-02-06 11:46:23


The documentation says that the CRC needs to be Base-64 encoded, not hexadecimal:

--checksum-crc32 (string)

This header can be used as a data integrity check to verify that the
data received is the same data that was originally sent. This header
specifies the base64-encoded, 32-bit CRC32 checksum of the object. For
more information, see Checking object integrity in the Amazon S3 User
Guide .

So your ABCD1234 would need to be either q80SNA== or NBLNqw==, depending on whether they expect the 32 bits to be rendered in big-endian or little-endian order, respectively. I didn't see anything in the documentation that says which it is.
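Both candidate encodings can be reproduced in Python (assuming ABCD1234 is the hexadecimal CRC value):

```python
import base64
import struct

crc = 0xABCD1234  # the hexadecimal CRC value from the question

# pack the 32-bit value as big-endian or little-endian bytes, then base64 it
big_endian = base64.b64encode(struct.pack(">I", crc)).decode()
little_endian = base64.b64encode(struct.pack("<I", crc)).decode()

print(big_endian)     # q80SNA==
print(little_endian)  # NBLNqw==
```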

AWS s3api put-object: unknown option (checksum-crc32)

不再让梦枯萎 2025-02-06 03:06:14


If you're trying to access version values from versions.json, and tools contains the keys in versions.json, you can use bracket notation to pull out the values:

tools.forEach(function (tool) {
  cy.readFile('cypress/fixtures/versions.json').then((data) => {
    let version = data[tool] //will hold whatever value `versions.json` has for the given `tool` key

    cy.visit(url)
    cy.get('div.nuxt-content')
      .first('h2')
      .then(txt => {
        let versionTxt = txt.find("h2").text() // get latest version
        versionTxt = version                   // `const` would throw on reassignment
        cy.writeFile('cypress/fixtures/versions.json', {
          [tool]: versionTxt                   // computed key, not the literal "tool"
        })
      })
  })
})
// ...

How to reference JS array values and insert them into JSON

不再让梦枯萎 2025-02-05 23:23:26


Adding here since this is too long for a comment:

This shouldn't happen; the fields should be ANDed, and in my experience they are. I can see no reason why this would not work from the code you've given.

  1. How many total records are there?
  2. Can you see, on the server side, the exact request received?
  3. How exactly are you sending this request? CLI? Browser?

The only thing I can think is that the querystring is getting mangled (possibly by urlencoding it) before sending.

Here is some simple print() debugging you can add to the viewset to check at various points and without attaching a debugger.

# add this to print the actual query text before & after filtering
def filter_queryset(self, queryset):
    print("[before]", queryset.query)
    queryset = super().filter_queryset(queryset)
    print("[after ]", queryset.query)
    return queryset

# add this override to see what your path / qs / etc are
def list(self, request, *args, **kwargs):
    print(request.path)
    print(request.META["QUERY_STRING"])
    print(request.query_params)
    return super().list(request, *args, **kwargs)

When filter values are set explicitly

不再让梦枯萎 2025-02-05 21:47:09


Use numpy broadcasting:

np.where(B[:, None]==A)[1]

NB. the values in A must be unique

Output:

array([1, 1, 3, 0, 2, 2, 2])
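A self-contained version of the above, with hypothetical A and B chosen to reproduce that output:

```python
import numpy as np

A = np.array([40, 10, 30, 20])
B = np.array([10, 10, 20, 40, 30, 30, 30])

# B[:, None] has shape (7, 1); comparing it with A (shape (4,))
# broadcasts to a (7, 4) boolean matrix, and np.where()[1] gives the
# column index, i.e. each B value's position in A.
idx = np.where(B[:, None] == A)[1]
print(idx)  # [1 1 3 0 2 2 2]
```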

Find the index positions in A of the values in B

不再让梦枯萎 2025-02-05 17:51:22


read_line includes the terminating newline in the returned string. Add .trim_right_matches("\r\n") to your definition of correct_name to remove the terminating newline.
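The same pitfall, illustrated in Python terms (the string value is just an example):

```python
line = "recipe\r\n"  # a readline-style call keeps the line terminator

print(line == "recipe")                 # False: "\r\n" is still attached
print(line.rstrip("\r\n") == "recipe")  # True once it is trimmed off
```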

Why doesn't my string match when reading user input from stdin?

不再让梦枯萎 2025-02-05 11:29:41


AnimatePresence re-animates the parent route upon child navigation because of the key prop. When the key changes, React treats it as an entirely new component. Since location.key will always change when a route changes, passing location.key to both Routes components will trigger the route animations every time.

The only solution I can think of is to manage the keys for both Routes components manually. If you change the URL structure to make the base route path /page1, the solution is pretty simple:

Forked Demo

To keep your current URL structure, you'll need to create a key for the top-level Routes comp that changes when navigating from / to /page2, but doesn't change when navigating from / to /nested1 or from /nested1 to /nested2.

Nested animated routes in React Router v6

不再让梦枯萎 2025-02-04 09:36:28


Better design:

Don't use cron or Events for a repeating task that might take longer than the allotted interval to finish.

Instead, have a separate program that runs through all 4000 items (taking as long as needed), then starts over.

Further comments:

It may not matter much whether you use multi-processing versus multi-threading.

If threading, remember that MySQL connections are not thread-safe; you must use a separate connection for each thread.

While MySQL can easily handle 4000 idle connections, it does not do well with 4000 queries active simultaneously. (Often 100 is challenging for it. Often, the solution is to speed up the queries.)

Splitting the 4000 into multiple threads may not be useful. It is rarely useful to have more threads than CPU cores. They stumble over each other in new ways -- how the OS deals with coordination.

Splitting up the 4000 complicates this extra process -- You may need a master program that is watching its children to see when it is time to start over. Or (probably better), have, say, 20 threads each doing a specific 200 tasks.
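The 20-threads-of-200-tasks idea might be sketched like this in Python (all names and the per-item work are placeholders; in real code each worker would open its own MySQL connection):

```python
import threading
from queue import Queue

ITEMS = list(range(4000))   # stand-ins for the 4000 records
N_THREADS = 20              # closer to the CPU core count than 4000
done = Queue()

def worker(chunk):
    # a real worker would open its OWN MySQL connection here,
    # since MySQL connections are not thread-safe
    for item in chunk:
        done.put(item)      # placeholder for processing one record

size = len(ITEMS) // N_THREADS  # 200 items per thread
threads = [threading.Thread(target=worker,
                            args=(ITEMS[i * size:(i + 1) * size],))
           for i in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(done.qsize())  # 4000
```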

An efficient way to implement threading in Java across many DB records

不再让梦枯萎 2025-02-04 07:53:50


return exits the function and returns the element you specify. In your case only the first return is executed; the second one is ignored entirely and will never run. That's why the return you found on SO works: it's a single return which sums the recursive function calls for right and left.
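For example, a node count written with a single return that sums both recursive calls (a minimal sketch, not the asker's code):

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

def count_nodes(node):
    if node is None:
        return 0
    # one return combining both subtree counts; writing two separate
    # `return` statements would exit after the first one
    return 1 + count_nodes(node.left) + count_nodes(node.right)

tree = Node(Node(Node(), Node()), Node())
print(count_nodes(tree))  # 5
```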

Why doesn't my recursive function counting the nodes of a binary tree (Python) return the correct result?

不再让梦枯萎 2025-02-04 05:40:56


If you want to select a column with a specific name, then just do

A <- mtcars[, which(colnames(mtcars) == cols[1])]
# and then
colnames(mtcars)[A] = cols[1]

You can run it in a loop as well.
The reverse way to add a dynamic name: e.g., if A is a data frame and xyz is a column to be named x, then I do it like this

A$tmp <- xyz
colnames(A)[colnames(A) == "tmp"] = x

Again, this can also be done in a loop

Dynamically select data frame columns using $ and a character value

不再让梦枯萎 2025-02-03 06:39:43


Yes, this is possible: you can listen to multiple change streams from multiple MongoDB collections. You just need to provide a regex for the collection names in the pipeline, and you can even provide a regex for the database names if you have multiple databases.

"pipeline": "[{\"$match\":{\"$and\":[{\"ns.db\":{\"$regex\":/^database-name$/}},{\"ns.coll\":{\"$regex\":/^journal_.*/}}]}}]"  

You can even exclude, using $nin, any given database that you don't want to listen to for change streams.

"pipeline": "[{\"$match\":{\"$and\":[{\"ns.db\":{\"$regex\":/^database-name$/,\"$nin\":[/^any_database_name$/]}},{\"ns.coll\":{\"$regex\":/^journal_.*/}}]}}]"

Here is the complete Kafka connector configuration.

Mongo to Kafka source connector

{
  "name": "mongo-to-kafka-connect",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "publish.full.document.only": "true",
    "tasks.max": "3",
    "key.converter.schemas.enable": "false",
    "topic.creation.enable": "true",
    "poll.await.time.ms": 1000,
    "poll.max.batch.size": 100,
    "topic.prefix": "any prefix for topic name",
    "output.json.formatter": "com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson",
    "connection.uri": "mongodb://<username>:<password>@ip:27017,ip:27017,ip:27017,ip:27017/?authSource=admin&replicaSet=xyz&tls=true",
    "value.converter.schemas.enable": "false",
    "copy.existing": "true",
    "topic.creation.default.replication.factor": 3,
    "topic.creation.default.partitions": 3,
    "topic.creation.compacted.cleanup.policy": "compact",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "mongo.errors.log.enable": "true",
    "heartbeat.interval.ms": 10000,
    "pipeline": "[{\"$match\":{\"$and\":[{\"ns.db\":{\"$regex\":/^database-name$/}},{\"ns.coll\":{\"$regex\":/^journal_.*/}}]}}]"
  }
}

You can get more details from official docs.
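The effect of the two regexes in the $match stage can be sketched in Python (hypothetical change-stream namespaces):

```python
import re

db_re = re.compile(r"^database-name$")
coll_re = re.compile(r"^journal_.*")

# hypothetical change-stream namespaces
events = [
    {"db": "database-name", "coll": "journal_2024"},  # passes both regexes
    {"db": "database-name", "coll": "orders"},        # wrong collection
    {"db": "other-db",      "coll": "journal_2024"},  # wrong database
]

matched = [e for e in events
           if db_re.search(e["db"]) and coll_re.search(e["coll"])]
print(len(matched))  # 1
```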

Multiple MongoDB collections to Kafka topics

不再让梦枯萎 2025-02-03 03:39:45


To my knowledge there is no native .NET API, it seems like ASP.NET (Core) has support for reading it to some extent though (check here and here) but I can't tell how to create one.

The laziest solution would probably be to just serialize your object to JSON, then HttpUtility.UrlEncode(json), then pass that to a query param, which would look like so:

&payload=%7B%22response_type%22%3A%20%22code%22%2C%22client_id%22%3A%20%22a%3Ab%22%7D

At the other end, just JsonSerializer.Deserialize<AuthEndPointArgs>(HttpUtility.UrlDecode(payload)). This is assuming you can edit both ends.
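The round trip is easy to verify in Python terms (json.dumps stands in for the C# serializer; the separators are chosen to match the payload shown above):

```python
import json
from urllib.parse import quote, unquote

args = {"response_type": "code", "client_id": "a:b"}

# serialize to JSON, then URL-encode the whole string
payload = quote(json.dumps(args, separators=(",", ": ")), safe="")
print(payload)
# %7B%22response_type%22%3A%20%22code%22%2C%22client_id%22%3A%20%22a%3Ab%22%7D

# the other end decodes and deserializes
assert json.loads(unquote(payload)) == args
```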

While it sounds kinda stupid, it works, and in certain respects may even be better than serializing your AuthEndPointArgs to a query string directly, because the standard for a query string lacks some definitions, like how to deal with arrays or complex objects. It seems like the JS and PHP communities have unofficial standards, but they require a manual implementation on both ends. So we'll also need to roll our own "standard" and implementation, unless we say that we can only serialize an object that fulfils the following criteria:

  • No complex objects as properties
  • No lists/ arrays as properties

Side note: URLs have a maximum length depending on a lot of factors, and by sending complex objects via query parameters you may go above that limit pretty fast, see here for more on this topic. It may just be best to hardcode something like ToQueryParams like Ady suggested in their answer

If we do want a generic implementation that aligns with those criteria, our implementation is actually quite simple:

public static class QueryStringSerializer
{
    public static string Serialize(object source)
    {
        var props = source.GetType().GetProperties(
            BindingFlags.Instance | BindingFlags.Public
        );

        var output = new StringBuilder();

        foreach (var prop in props)
        {
            // You might want to extend this check, things like 'Guid'
            // serialize nicely to a query string but aren't primitive types
            if (prop.PropertyType.IsPrimitive || prop.PropertyType == typeof(string))
            {
                var value = prop.GetValue(source);
                if (value is null)
                    continue;

                output.Append($"{GetNameFromMember(prop)}={HttpUtility.UrlEncode(value.ToString())}&");
            }
            else
                throw new NotSupportedException();
        }

        // trim the trailing '&' left by the loop
        return output.ToString().TrimEnd('&');
    }
}

private static string GetNameFromMember(MemberInfo prop)
{
    string propName;

    // You could also implement a 'QueryStringPropertyNameAttribute'
    // if you want to be able to override the name given, for this you can basically copy the JSON attribute
    // https://github.com/dotnet/runtime/blob/main/src/libraries/System.Text.Json/src/System/Text/Json/Serialization/Attributes/JsonPropertyNameAttribute.cs
    if (Attribute.IsDefined(prop, typeof(JsonPropertyNameAttribute)))
    {
        var attribute = Attribute.GetCustomAttribute(prop, typeof(JsonPropertyNameAttribute)) as JsonPropertyNameAttribute;
        // This check is technically unnecessary, but VS wouldn't shut up
        if (attribute is null)
            propName = prop.Name;
        else
            propName = attribute.Name;
    }
    else
        propName = prop.Name;

    return propName;
}

If we want to support objects with enumerables as properties or with "complex" objects as members we need to define how to serialize them, something like

class Foo
{
    public int[] Numbers { get; set; }
}

Could be serialized to

?numbers[]=1&numbers[]=2

Or to a 1 indexed "list"

?numbers[1]=1&numbers[2]=2

Or to a comma delimited list

?numbers=1,2

Or just multiple of one instance = enumerable

?numbers=1&numbers=2

And probably a lot more formats. But all of these are framework- or implementation-specific to whatever is receiving these calls, as there is no official standard, and the same goes for something like

class Foo
{
    public AuthEndPointArgs Args { get; set; }
}

Could be

?args.response_type=code&args.client_id=a%3Ab

And a bunch more different ways I can't be bothered to think of right now

Convert a POCO object decorated with JsonPropertyName into a URL query string

不再让梦枯萎 2025-02-02 23:28:04


You didn't mention your OS and version, but I've seen this error in Ubuntu 22.04. Try installing PyQtWebEngine via system package and running your code outside the virtual environment. In Ubuntu:

sudo apt install python3-pyqt5.qtwebengine

PyQt5: QWebEngineView does not load URL

不再让梦枯萎 2025-02-02 22:14:19


You've got a few things wrong with that query:

  1. You have the less/greater than backwards
  2. You have the subtraction backwards
  3. You need an and, not an or

What you are probably looking for is:

SELECT DISTINCT
    o.Ord_No,
    odf.OrdFuel_Order_Qty,
    odf.OrdFuel_Deliv_Net_Qty
FROM Order_Details_Fuel odf
JOIN Orders o ON odf.OrdFuel_Ord_Key = o.Ord_Key
WHERE odf.OrdFuel_Deliv_Net_Qty <= (300 + odf.OrdFuel_Order_Qty)
and odf.OrdFuel_Deliv_Net_Qty >= (odf.OrdFuel_Order_Qty - 300)

As pointed out by JNevill, this could even be simplified to:

ABS(odf.OrdFuel_Deliv_Net_Qty - odf.OrdFuel_Order_Qty) <= 300

Full credit to him on that one.
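A quick Python check that the inequality pair and the ABS() form accept exactly the same rows (quantities are made-up boundary cases):

```python
def pair_form(deliv, order):
    # the two WHERE inequalities from the query above
    return deliv <= 300 + order and deliv >= order - 300

def abs_form(deliv, order):
    # the simplified ABS() condition
    return abs(deliv - order) <= 300

# hypothetical (delivered, ordered) quantities around the 300 boundary
cases = [(500, 300), (600, 300), (601, 300), (100, 500), (200, 500)]
print(all(pair_form(d, o) == abs_form(d, o) for d, o in cases))  # True
```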

SQL over/under comparison
