止于盛夏

止于盛夏 2025-02-08 17:58:49

I have faced a similar problem using GeneXus (16 U10) with WWP 13. The solution was to create a JavaScript external object responsible for hiding the scrollbar by setting document.body.style.overflowX to 'hidden'. It might be interesting for you to try this solution, adding $("iframe").height(*desired height*) as you see fit.
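
For illustration only, here is a minimal sketch of what the JavaScript behind such an external object could look like; the function name hidePopupScrollbar and the example height of 600 are assumptions, not part of the original answer, and the resize call assumes jQuery is loaded on the page.

// Rough sketch (assumed names): hide the horizontal scrollbar and
// give the popup iframe an explicit height, as described above.
function hidePopupScrollbar(desiredHeight) {
    // Hide the horizontal scrollbar on the document body
    document.body.style.overflowX = 'hidden';

    // Resize the iframe via jQuery, if it is available on the page
    if (window.jQuery) {
        jQuery('iframe').height(desiredHeight);
    }
}

// Example call with an assumed height of 600 pixels
hidePopupScrollbar(600);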

Popup too small with scrollbar in Genexus

止于盛夏 2025-02-08 03:56:52

This can also be achieved with a more native HTML solution by using the output element.

<form oninput="result.value=parseInt(a.valueAsNumber)+parseInt(b.valueAsNumber)">
  <input type="number" id="a" name="a" value="10" /> +
  <input type="number" id="b" name="b" value="50" /> =
  <output name="result" for="a b">60</output>
</form>

https://jsfiddle.net/gxu1rtqL/

The output element can serve as a container element for a calculation or output of a user's action. You can also change the HTML type from number to range and keep the same code and functionality with a different UI element, as shown below.

<form oninput="result.value=parseInt(a.valueAsNumber)+parseInt(b.valueAsNumber)">
  <input type="range" id="a" name="a" value="10" /> +
  <input type="number" id="b" name="b" value="50" /> =
  <output name="result" for="a b">60</output>
</form>

https://jsfiddle.net/gxu1rtqL/2/

Adding two numbers concatenates them instead of calculating the sum

止于盛夏 2025-02-08 00:43:00

You've probably solved this one already, but I found that this issue was caused by the Firefox browser (at least in my case). Switching to Brave fixed the error in the console and loaded my model just fine.

I had previously tested my model on https://sandbox.babylonjs.com/ and with the glTF validator at https://github.khronos.org/glTF-Validator/, but neither found any issue in the file itself.

three.js GLTFLoader request constructor: data:application/octet-stream;base64

止于盛夏 2025-02-07 22:58:52

If @Value("${application-flags.load-ip-db-in-memory}") is null, check whether the property key is wrong.

Or you can supply a default value that is used when the key is missing from the application properties:

@Value("${application-flags.load-ip-db-in-memory:here default value if it's not in the app.prop}")
private String someDefault;

Spring application deployment failed

止于盛夏 2025-02-07 16:03:25

When recursing the tree, you can compute the index of each node using:

root.left.index  = 2 * root.index + 1
root.right.index = 2 * root.index + 2
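
As a small illustration, here is one way these formulas could be applied while recursing. This is only a sketch in Java; the Node class with its index field and the convention that the root has index 0 are assumptions for the example.

// Sketch (assumed Node class): assign heap-style indices recursively,
// using 2*index+1 for the left child and 2*index+2 for the right child.
class Node {
    Node left;
    Node right;
    int index;
}

class IndexAssigner {
    static void assign(Node node, int index) {
        if (node == null) {
            return;
        }
        node.index = index;
        assign(node.left, 2 * index + 1);   // left child
        assign(node.right, 2 * index + 2);  // right child
    }
}

// Usage (assumed convention): start the recursion at the root with index 0,
// e.g. IndexAssigner.assign(root, 0);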

How to compute the index of the N'th element in a binary search tree?

止于盛夏 2025-02-07 14:50:00

Thanks for your useful recommendation. I think it is somewhat weird, but it works.

func options(for question: [String : Any?]) -> [AnyHashable] {
    var qoptions: [String] = []
    for val in question["options"] as! [[String: Any]] {
        let value = val["label"] as! String
        qoptions.append(value)
    }
    return qoptions as! [AnyHashable]
}

Get specific values from one [AnyHashable] into another [AnyHashable]

止于盛夏 2025-02-07 04:55:49

First of all, the line below does not appear to be a log entry:

{'Company': {'S': 'AAPL'}, 'DailyPrice': {'S': '166.02'}}

You need to have a log level, a timestamp, and other standard fields in the log entry, so this appears to be a code issue. Plug in a standard logging library like Log4j and add log levels such as debug/warn/error to output proper log events. This will help you troubleshoot the issue.
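
As a rough sketch of what proper log events could look like with Log4j 2 (the class name, messages, and the fetch step are assumptions for illustration, not taken from the original code):

// Sketch (assumed names): emit log events with a level and timestamp
// instead of printing raw data structures.
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class PriceFetcher {
    private static final Logger logger = LogManager.getLogger(PriceFetcher.class);

    public void fetchDailyPrice(String company) {
        logger.debug("Fetching daily price for {}", company);
        try {
            // ... fetch and store the price here ...
            logger.info("Stored daily price for {}", company);
        } catch (Exception e) {
            // An error-level event with the stack trace makes restarts much easier to diagnose
            logger.error("Failed to fetch daily price for " + company, e);
        }
    }
}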

My pods keep restarting and I can't figure out why

止于盛夏 2025-02-06 18:35:17

AFAIK, the process of using OAuth 2.0 authorization with APIM to protect APIs is:

  1. Register the app in Azure AD and grant users access to the API with a valid OAuth token.
  2. That token is added to the Authorization header of the APIM API requests.
  3. APIM can validate that token using the validate-jwt policy.

The validate-jwt policy is used to pre-authorize requests in APIM.

As Will said in the comment, the validate-jwt policy enforces the existence and validity of a JSON Web Token taken from either a specified query parameter or an HTTP header.

Please refer to the article that contains a practical workaround for configuring the JWT validation policy at the product level, API level, and all-APIs level, which explains that an OAuth 2.0 implementation is required to protect the APIs more securely.

What is the difference between securing an API via OAuth 2.0 and validating a JWT via an inbound policy?

止于盛夏 2025-02-06 18:31:41

The following works, with the drawback that a_type_alias is not explicitly declared under the # PUBLIC_INTERFACE part. I looked at PEP 563 and initially used a solution with from __future__ import annotations, but it turns out that Pyright (version 1.1.247) type-checks the following with or without that future import.

from typing import Literal, Final, TypeAlias


# PUBLIC_INTERFACE

set_1 : Final[set['a_type_alias']]
set_2 : Final[set['a_type_alias']]


# IMPLEMENTATION

a_type_alias : TypeAlias = Literal[0,1]

set_1 = {0}   # no error from Pyright (correct)
set_2 = {0,2} # error from Pyright (correct)

Using a TypeAlias that is declared but not defined in type annotations?

止于盛夏 2025-02-06 13:14:22

This solution isn't exactly what you asked for, but I think it is one way to visualize the structure of data:

library(ggplot2)

Platform <- c("FB Left-wing", "FB Right-wing",
              "IG Left-wing", "IG Right-wing",
              "TW Left-wing", "TW Right-wing")

Party <- c("FB", "FB", "IG", "IG", "TW", "TW")
  
Ideology <- c("Left", "Right", "Left", "Right","Left", "Right")

CI_low <- c(1.049, 0.906, 1.212, 0.989, 1.122, 1.080)
CI_high <- c(1.299, 1.144, 1.483, 1.235, 1.362, 1.335)
CI_mid <- c(1.167, 1.018, 1.340, 1.105, 1.236, 1.201)

dat_figure <- data.frame(Platform, Party, Ideology, CI_low, CI_high, CI_mid)

ggplot(dat_figure, aes(x = Party, y = CI_mid, colour = Ideology)) + 
  scale_colour_manual(values = c("blue", "red")) +
  geom_pointrange(aes(ymin = CI_low, ymax = CI_high), position = position_dodge(0.5)) + 
  ylim(0, 4) + 
  ylab("Odds Ratio") + 
  xlab("Platform and Ideology") + 
  theme_bw() 

(resulting plot not shown)

Add space or a break between every second label on the ggplot x-axis

止于盛夏 2025-02-06 12:26:28

The problem is that your JSON data has an array of Weather objects, but you have a single object in your model. See:

 "weather": [
        {
          "id": 804,
          "main": "Clouds",
          "description": "overcast clouds",
          "icon": "04d"
        }
      ],

The square brackets indicate that this is an array, but in your model you are mapping it onto a dictionary.
The solution is either to remove these brackets from your JSON file or to declare the weather property in your model like this:

var weather: [ForecastWeatherResponse]

SwiftUI decode JSON data and display in a list - decoding error

止于盛夏 2025-02-06 07:56:55

It appears that under the hood, running df.count() actually uses the Count aggregation class. I am basing this on the definition of the count method in Dataset.scala.

  /**
   * Returns the number of rows in the Dataset.
   * @group action
   * @since 1.6.0
   */
  def count(): Long = withAction("count", groupBy().count().queryExecution) { plan =>
    plan.executeCollect().head.getLong(0)
  }

is there any other optimised technique like storing value in
dataframe's metadata?

It is going to employ all the same optimization strategies used by Catalyst. It creates a directed graph of expressions, evaluates and rolls them up. It is not storing the count value as metadata, which would violate Spark's lazy evaluation principle.

I ran an experiment and verified that df.count() and df.groupBy().count() produce the same physical plan.

df = spark.createDataFrame(pd.DataFrame({"a": [1,2,3], "b": ["a", "b", "c"]}))

# count using the Dataframe method
df.count()

# count using the Count aggregator
cnt_agg = df.groupBy().count()

Both jobs produced the same Physical Plan:

== Physical Plan ==
AdaptiveSparkPlan (9)
+- == Final Plan ==
   * HashAggregate (6)
   +- ShuffleQueryStage (5), Statistics(sizeInBytes=64.0 B, rowCount=4, isRuntime=true)
      +- Exchange (4)
         +- * HashAggregate (3)
            +- * Project (2)
               +- * Scan ExistingRDD (1)
+- == Initial Plan ==
   HashAggregate (8)
   +- Exchange (7)
      +- HashAggregate (3)
         +- Project (2)
            +- Scan ExistingRDD (1)

How does Spark count the number of records in a DataFrame?

止于盛夏 2025-02-06 02:28:30

Use the BigDecimal#valueOf method for the conversion from long to BigDecimal.

stat.setCount_human_dna(BigDecimal.valueOf(dnaSamples.stream().filter(x -> x.getType().equals("Human")).collect(Collectors.counting())));

See the JavaDocs for more detail.

How to convert long to BigDecimal while using streams

止于盛夏 2025-02-05 16:10:38

I am fairly certain that this feature is not available as of IntelliJ IDEA 2022.1.1 (built May 10, 2022).

The only workaround I am aware of, which no doubt is impossible in most cases, is to use a trait or a regular class in place of a case class.

There is a ticket open on the IntelliJ Scala Plugin for adding this functionality.

If you too would like to see this feature added to IntelliJ, you can vote on that ticket.

Hide Scala-generated case class methods in IntelliJ UML diagrams?

止于盛夏 2025-02-05 09:59:21

The id_empl has nothing to do with the table pdb; it is only known to the function you are calling. Hence a trigger, which deals with an action on the table, seems to be the wrong place to use it.

From what I see, it seems that you want to store the employee responsible for the action in the logging table. Usually one would simply store the current user instead, but in environments with a connection pool for instance, where all users share the same login account, this does not help.

One way to deal with this is to store the OSUSER instead:

SELECT SYS_CONTEXT('USERENV', 'OS_USER') FROM DUAL;

If this does not suffice and you must store the id_empl instead, you'll have to remember the id_empl in the session context and use it from there. One simple way to do this is to use a PL/SQL package with a public variable.

Package header

CREATE OR REPLACE PACKAGE pk_pdb_actions IS
  pkv_id_empl  empl.id%TYPE;
  PROCEDURE delete_pdb(p_id_pdb IN pdb.id%TYPE, p_id_empl IN empl.id%TYPE);
  ...
END pk_pdb_actions;

Package body

CREATE OR REPLACE PACKAGE BODY pk_pdb_actions IS
  PROCEDURE delete_pdb(p_id_pdb IN pdb.id%TYPE, p_id_empl IN empl.id%TYPE) IS
  BEGIN
    pkv_id_empl := p_id_empl;
    DELETE FROM pdb WHERE id = p_id_pdb;
  END delete_pdb;
  ...
END pk_pdb_actions;

Trigger

CREATE OR REPLACE trigger trg_archive_pdb 
AFTER UPDATE OR DELETE on pdb
FOR EACH ROW
BEGIN
  INSERT INTO archive_pdb (id_empl, id_pdb) 
    VALUES (pk_pdb_actions.pkv_id_empl, :old.id);
END trg_archive_pdb;

How to pass a parameter from a procedure to PL/SQL in Oracle
