不即不离

不即不离 2025-02-11 04:00:24

I think maybe there is some confusion. You are trying to calculate the joint distribution of 3 independent variables. But if you calculate the marginal distribution of feature2 in both tibbles, you will see they are not the same, so either the variables are not independent or there is some bias. In any case, a joint distribution depends on the marginal frequencies of its variables; you cannot usually mix two of them. You are trying to multiply two joint distributions over two combinations of variables.

What you have to do is multiply the joint distribution of features 1 and 2 by the marginal distribution of feature3.
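Written out, with f1, f2, f3 denoting the three features, the independence assumption gives:

```latex
P(\mathrm{f1}=i,\ \mathrm{f2}=j,\ \mathrm{f3}=k)
  \;=\; P(\mathrm{f1}=i,\ \mathrm{f2}=j)\times P(\mathrm{f3}=k)
```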

my_tib1 is your first joint distribution, which is:

# A tibble: 6 x 3
  feature1 feature2 number
  <chr>    <chr>     <dbl>
1 A        AA         0.1 
2 A        BB         0.1 
3 B        AA         0.3 
4 B        BB         0.4 
5 C        AA         0.05
6 C        BB         0.05

Or as a table:

library(tidyverse)
my_tib1 %>% pivot_wider(names_from = feature2, values_from=number)
    # A tibble: 3 x 3
      feature1    AA    BB
      <chr>    <dbl> <dbl>
    1 A         0.1   0.1 
    2 B         0.3   0.4 
    3 C         0.05  0.05

Your second table of relative frequencies or joint distribution is:

my_tib2 %>% pivot_wider(names_from = feature2, values_from=number)
# A tibble: 2 x 3
  feature3    AA    BB
  <chr>    <dbl> <dbl>
1 TT         0.1   0.4
2 FF         0.3   0.2

You can calculate the marginal distribution of feature3. As you can see, it sums to 1.

marginals3 = my_tib2 %>% 
  pivot_wider(names_from = feature2, values_from=number) %>% 
  rowwise() %>% 
  mutate(marginals3 = AA+BB) 
> marginals3
# A tibble: 2 x 4
# Rowwise: 
  feature3    AA    BB marginals3
  <chr>    <dbl> <dbl>     <dbl>
1 TT         0.1   0.4       0.5
2 FF         0.3   0.2       0.5

You don't need to pivot to calculate it, just group by 'feature3':

marginals3 = my_tib2 %>% 
  group_by(feature3) %>% 
  # keeping feature2 in summarise() yields one row per (feature3, feature2) pair,
  # which is what the join below needs (recent dplyr prefers reframe() for this)
  summarise(feature2, marginals3 = sum(number))

If you summarise it keeping feature2, you can combine it with my_tib1 to calculate the resulting joint distribution = frequencies of my_tib1 * marginals(feature3):

 my_tib1 %>% 
    left_join(marginals3, by='feature2') %>% 
    mutate(number.mult = number*marginals3)

If you summarise(sum(number.mult)), you will see the result is 1.

How can I calculate a joint distribution from marginal distributions, given independence?

不即不离 2025-02-10 19:31:59

Summarising all existing answers

(And adding a few of my points)

Explanation:

if name == "Kevin" or "Jon" or "Inbar":

is logically equivalent to:

if (name == "Kevin") or ("Jon") or ("Inbar"):

Which, for user Bob, is equivalent to:

if (False) or ("Jon") or ("Inbar"):

NOTE: Python evaluates the logical value of any non-zero number as True. Likewise, all non-empty lists, sets, strings, etc. are truthy and evaluate to True.

The or operator returns its first operand with a truthy value.

Therefore, "Jon" is truthy and the if block executes, since the condition is now equivalent to

if (False) or (True) or (True):

That is what causes "Access granted" to be printed regardless of the name input.
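A minimal sketch of both effects (that or returns its first truthy operand, and that the condition is therefore truthy for any name):

```python
name = "Bob"

# or returns its first truthy operand, not a plain True/False
result = name == "Kevin" or "Jon" or "Inbar"
print(result)        # 'Jon'
print(bool(result))  # True, so the if block runs regardless of name
```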

Solutions:

Solution 1: Use multiple == operators to explicitly check against each value

if name == "Kevin" or name == "Jon" or name == "Inbar":
    print("Access granted.")
else:
    print("Access denied.")

Solution 2: Compose a collection of valid values (a set, list, or tuple, for example), and use the in operator to test for membership (faster, and the preferred method)

if name in {"Kevin", "Jon", "Inbar"}:
    print("Access granted.")
else:
    print("Access denied.")

OR

if name in ["Kevin", "Jon", "Inbar"]:
    print("Access granted.")
else:
    print("Access denied.")
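For completeness, an equivalent spelling not shown above uses any() with a generator expression; it reads well when the comparison is more complex than plain equality:

```python
name = "Jon"

# any() stops at the first match, much like the membership test above
if any(name == valid for valid in ("Kevin", "Jon", "Inbar")):
    print("Access granted.")
else:
    print("Access denied.")
```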

Solution 3: Use the basic (and not very efficient) if-elif-else structure

if name == "Kevin":
    print("Access granted.")
elif name == "Jon":
    print("Access granted.")
elif name == "Inbar":
    print("Access granted.")
else:
    print("Access denied.")

Why does "a == x or y or z" always evaluate to True? How can I compare "a" to all of them?

不即不离 2025-02-10 09:34:51

You can't use it in the way you describe. The point about generic types, is that although you may not know them at "coding time", the compiler needs to be able to resolve them at compile time. Why? Because under the hood, the compiler will go away and create a new type (sometimes called a closed generic type) for each different usage of the "open" generic type.

In other words, after compilation,

DoesEntityExist<int>

is a different type to

DoesEntityExist<string>

This is how the compiler is able to enforce compile-time type safety.

For the scenario you describe, you should pass the type as an argument that can be examined at run time.

The other option, as mentioned in other answers, is to use reflection to create the closed type from the open type, although I'd say this is probably not recommended in anything other than extremely niche scenarios.

Generics in C#, using the type of a variable as a parameter

不即不离 2025-02-10 02:40:46

I found a solution. Sorry for self-answering but I think it would be useful to have it written here for the future.

With log_scale=True and stat='density', seaborn uses logarithmic bins and rescales the bin heights so that the integral is 1. So instead of the pdf f one needs a function g such that the integral over the rescaled bins is the same as for the pdf. This gives g(x) = f(x) * x * log(10), and indeed this works:

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats

params = (20, -1500, 8000)
sample = stats.invgauss.rvs(size=1000, *params)
fig, ax = plt.subplots()
sns.histplot(sample, log_scale=True, kde=False, stat='density')
x0, x1 = ax.get_xlim()
x_pdf = np.exp(np.linspace(np.log(x0),np.log(x1),500))
fitted_params = stats.invgauss.fit(sample)
y_pdf = stats.invgauss.pdf(x_pdf, *fitted_params)
y_cdf = stats.invgauss.cdf(x_pdf, *fitted_params)
# rescaled pdf:
y_pdf_rescaled = y_pdf * x_pdf * np.log(10)
ax.plot(x_pdf, y_pdf_rescaled, 'r', lw=2)
plt.savefig('{}/Pdf_check.png'.format(out_dir))
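For reference, the factor x * log(10) in the snippet above comes from the change of variables u = log10(x) implied by the logarithmic bins (f is the fitted PDF, h the density drawn over the log axis):

```latex
h(u)\,\mathrm{d}u = f(x)\,\mathrm{d}x, \qquad u = \log_{10} x
\quad\Longrightarrow\quad
h(u) = f(x)\,\frac{\mathrm{d}x}{\mathrm{d}u} = f(x)\, x \ln 10
```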

[figure: the rescaled PDF overlaid on the log-scale histogram]

How to overlay a PDF onto a seaborn histplot with a log-scaled x-axis

不即不离 2025-02-09 21:00:37

I don't think there is really an optimized (if we take that to mean "much faster than any other") method of doing this. It's fundamentally an inefficient operation, and one that I can't really see a good use case for. But, assuming you really have thought this through and decided this is the best way to solve the problem at hand, I would suggest you reconsider using the repartition method on the dataframe; it can take a column to be used as the partitioning expression. The only thing it won't do is split files across directories the way you want.

I suppose something like this might work:

import org.apache.spark.sql.functions.col

// dummy data; the index column is named uniqueID to match the filter below
val df = Seq(("A", "B", "XC"), ("D", "E", "YF"), ("G", "H", "ZI"), ("J", "K", "ZL"), ("M", "N", "XO")).toDF("FOO", "BAR", "uniqueID")

// List of all possible prefixes for the index column. If you need to generate this
// from the data, replace this with a query against the input dataframe to do that.
val prefixes = List("X", "Y", "Z")

// replace with your path
val basePath = "/.../data"

prefixes.foreach { p =>
  val data = df.filter(col("uniqueID").startsWith(p))
  // repartition to 1 record per partition so each record gets its own output file
  data.repartition(data.count.toInt)
    .write.format("json").save(f"$basePath/$p")
}

The above doesn't quite meet the requirement, since you can't control the output file name [1]. We can use a shell script to fix the file names afterward. This assumes you are running in an environment with bash and jq available.

#!/usr/bin/env bash

# replace with the path that contains the directories to process
cd /.../data

for sub_data_dir in ./*; do
  cd "${sub_data_dir}"
  rm _SUCCESS
  for f in ./part-*.json; do
    uuid="$(jq -r '.uniqueID' "${f}")"
    mv "${f}" "${uuid}"
  done
  cd ..
done

[1]: Spark doesn't give you an option to control individual file names when using Dataframe.write, because that isn't how it is meant to be used. The intended usage is on a multi-node Hadoop cluster, where data may be distributed arbitrarily between the nodes. The write operation is coordinated among all nodes and targets a path on the shared HDFS. In that case it makes no sense to talk about individual files, because the operation is performed at the dataframe level; you can only control the naming of the directory where the output files will be written (as the argument to the save method).

Write each row in a Spark dataframe to a separate JSON file

不即不离 2025-02-09 16:57:11

Recursion happens when you call a function from inside itself. Python has a limit on how deep your recursion calls can go. If you want to know how deep Python goes, you can do this (sys is a built-in module):

import sys
sys.getrecursionlimit()

Your method of repeatedly trying random numbers using recursion and checking if they end in s will work, but it is slow. It is slow because you don't know how long the randomizing will take for it to reach a word ending in s. But, for the learning experience, let's try it.

import random

with open('pluralnoun.txt') as file:
    # strip the trailing newline from each line, otherwise endswith("s") rarely matches
    content = [line.strip() for line in file]

def nouns():
    X = random.randint(0, len(content) - 1)
    snoun = content[X]
    if snoun.endswith("s"):
        print(snoun)
    else:
        nouns()

nouns()

Reasons why your current method doesn't work:

  • snoun == "*s": You are trying to use regular expressions, but that is not the right way to use them. A more pythonic way to do it (without regex) would be snoun.endswith("s"). When you check whether snoun == "*s", you are actually checking whether snoun is, letter for letter, the same as "*s", which it probably is not.
  • snoun stays the same: since snoun is not changing, snoun == "*s" will always return the same thing, False, resulting in nouns being called again, which means infinite recursion.

So although this method works, it is not good: in the worst case, random.randint never finds a word that ends in s, resulting in infinite recursion (in practice, a RecursionError).

One way to fix this would be by gathering a list of words which end in s and then picking a random one out of those. This takes consistently the same time, going through the entire list once, and then picking a random one. Let's do it:

BeRT2me already came up with the solution while I was writing this:

import random

with open("pluralnoun.txt") as f:
    # strip the trailing newline before testing the last character
    words = [line.strip() for line in f if line.strip().endswith("s")]

snoun = random.choice(words)

How to print a random word, from a list, that ends with 's'?

不即不离 2025-02-07 08:43:41

For multiple coins, the easiest way is to attach your strategy to each and every coin on Tradingview you want to trade with. This way you can backtest each on their respective chart.

If you create a strategy for, say, BINANCE:BTCUSDT and think of using this strategy on a different exchange, you can do it, but first I suggest testing it on BINANCE:BTCPERP and seeing for yourself how the same strategy can show a wildly different result (even though BTCUSDT and BTCPERP should move the same).

For a complex solution you can create a single script that uses multiple securities, but you won't be able to backtest that with a simple approach; you would have to write your own gain/loss calculator, and you are not there yet.

I was going down the same road, my suggestions are:

  • create an input for the coin you want to trade (that will go into an input variable)
  • abstract the alert message off of the strategy.entry() command, that is, construct the alert message in a way you can replace values with variables in it (like the above selected coin)
  • 3Commas needs a Bot ID to start/stop a bot, abstract that off as well, and you will have a good boilerplate code you can reuse many times
  • as a good practice (stolen from Kubernetes) besides the human readable name, I give a 5 letter identifier to every one of my bots, for easy recognition

A few examples. The below will create a selector for a coin and the Bot ID that is used to trade that coin. The names like 'BIN:GMTPERP - osakr' are entirely my making, they act as a Key (for a key/value pair):

symbol_choser = input.string(title='Ticker symbol', defval='BTC-PERP - aktqw', options=[FTX_Multi, 'FTX:MOVE Single - pdikr', 'BIN:GMTPERP - osakr', 'BIN:GMTPERP - rkwif', 'BTC-PERP - aktqw', 'BTC-PERP - ikrtl', 'BTC-PERP - cbdwe', 'BTC-PERP', 'BAL-PERP', 'RUNE-PERP', 'Paper Multi - fjeur', 'Paper Single - ruafh'], group = 'Bot settings')

exchange_symbol = switch symbol_choser // if you use Single Pair bots on 3Commas, the Value should be an empty string
    'BIN:GMTPERP - osakr' => 'USDT_GMTUSDT'
    'BTC-PERP - cbdwe' => 'USD_BTC-PERP'
    'Paper Multi - fjeur' => 'USDT_ADADOWN'

bot_id = switch symbol_choser
    'BIN:GMTPERP - osakr' => '8941983'
    'BTC-PERP - cbdwe' => '8669136'
    'Paper Multi - fjeur' => '8246237'

And now you can combine the above parts into two Alerts, for starting/stopping the bot:

alertMessage_Enter = '{"message_type": "bot",  "bot_id": ' + bot_id + ',  "email_token": "12345678-4321-abcd-xyzq-132435465768",  "delay_seconds": 0,  "pair": "' + exchange_symbol + '"}'
alertMessage_Exit = '{"action": "close_at_market_price", "message_type": "bot",  "bot_id": ' + bot_id + ',  "email_token": "12345678-4321-abcd-xyzq-132435465768",  "delay_seconds": 0,  "pair": "' + exchange_symbol + '"}'

exchange_symbol is the proper exchange symbol you need to provide to your bot, you can get help on the 3Commas' bot page (they have pre-crafted the HTTP requests you need to use for certain actions).

bot_id is the ID of your Bot, that is straightforward.

The above solution does not handle Single coin bots, their trigger message has a different structure.

Whenever you can, use Multi coin bots, as they can act as a Single bot, with two exceptions:

  • if you have a long-spanning strategy and you should already be in a trade when you start a bot, you can manually start a Single bot, but you cannot start a Multi coin bot (as there is no way to provide the coin info on which to start the trade)
  • if you are trading a derivative like FTX's MOVE contracts and your script is attached to the underlying BTC Futures. MOVE contracts change names every day (the date is in their name, like BTC-MOVE-0523), so you would need to delete the alert, update it, and reapply it every day, etc. Instead, if your script is on the BTC-PERP, then you can use a Single coin bot, which does not expect a coin name in the alert message, so it will start/stop the bot on whatever coin it is connected to; then you only need to change the coin name each day in the Bot settings and never touch the Alert.

To summarize on your questions:

  1. Do not include chart type in code (that is not even an embeddable data), just apply your code to whatever chart you want to use. Hint: never use Heikin-Ashi for trading. You can, but you will pay for it dearly (everyone tries, even against warnings, no worries)

  2. Set them up one by one, so you can backtest them

  3. No, set the timeframe on the chart. Later, when you are more experienced, you will be able to abstract the current timeframe away (whatever it is) and write code that is timeframe-agnostic. But that's hard and makes your code less readable.

Connecting multiple coins to bots using one trading strategy

不即不离 2025-02-07 01:15:53

This error occurs because you are issuing your request without authentication. First you need to authenticate against Twitter; after that you will be able to issue the request properly.

"Execute failed" error in Twitter Search on KNIME

不即不离 2025-02-06 16:55:41

For starters, in both of the functions

struct d* rr() {
    struct d* p = malloc(sizeof (struct d*));
    p->f = 33;
    return p;
}

and

void rr2(struct d* p) {
    p = malloc(sizeof (struct d*));
    p->f = 22;
}

there is a typo: malloc(sizeof (struct d*)) allocates only enough space for a pointer, not for the struct. It seems you meant

    struct d* p = malloc(sizeof (struct d));
                                 ^^^^^^^^

and

    p = malloc(sizeof (struct d));
                       ^^^^^^^^^

or

    struct d* p = malloc(sizeof ( *p ));
                                 ^^^^^

and

    p = malloc(sizeof ( *p) );
                       ^^^^^

As for this function

void rr2(struct d* p) {
    p = malloc(sizeof (struct d*));
    p->f = 22;
}

note that in this call

struct d *q;
rr2(q);

the pointer q is passed to the function by value. So the function deals with a copy of the pointer q. Changing the copy within the function is not reflected in the original pointer q; it stays unchanged.

To make the code work, you have to pass the pointer by reference (indirectly, through a pointer to it). In this case the function will look like

void rr2(struct d **p) {
    *p = malloc(sizeof (struct d ));
    ( *p )->f = 22;
}

and be called like

rr2( &q );

As for this code snippet

free(g);
rr2(g);
printf("[%i]", g->f);

it invokes undefined behavior, because in this statement

printf("[%i]", g->f);

there is an access to already freed memory.

Referencing a struct pointer returned from a function unless it was previously allocated (and freed)

不即不离 2025-02-06 14:30:02

There is a Firebase Extension that is built by Stripe and states:

Use this extension as a backend for your Stripe payments.

It makes use of restricted API keys, so you have granular control over which records can be created/read/updated by the extension.

I think that sounds easier than trying to work out an Express app for a single payment page.

Is it possible to integrate Stripe payments on my website with the Firebase backend service, without using an additional backend service?

不即不离 2025-02-06 04:36:17

Option using data.table:

library(data.table)
setDT(dat)
setDT(dat2)
dat[!dat2, on = .(col1 = col2)]

Output:

   col1
1:    1
2:    2
3:    3
4:    4

Find elements in a column that are not in another column of another dataframe in R

不即不离 2025-02-06 04:16:13

You can try grouping by the Id column, then filling the NaN values with bfill and ffill. Finally, drop the duplicates in 'phone_number', 'food', and 'toy'.

test = test.replace('', pd.NA)

out = (test.groupby('Id')
       .apply(lambda g: g.bfill().ffill())
       .drop_duplicates(['phone_number', 'food', 'toy']) # 'toy ' in your given dataframe
       .fillna('')
       )
print(out)

   Id phone_number    food   toy
0  01   9995552222   apple  ball
1  01   9995552222  banana  ball
2  01   9995552222  orange  ball
4  02   3332226666    boba

Python: get values from different rows and columns

不即不离 2025-02-05 16:25:08

You can use the ord function to get the ASCII value, then subtract 97 (the code for 'a'). This can all be done within a list comprehension.

>>> word = 'green'
>>> [ord(l.lower()) - 97 for l in word]
[6, 17, 4, 4, 13]

If you want to go backwards, you can take the list of numbers and use the chr function to convert back to letters, then join them with an empty string.

>>> numbers = [6, 17, 4, 4, 13]
>>> ''.join([chr(n + 97) for n in numbers])
'green'
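If you would rather avoid the magic number 97, string.ascii_lowercase gives the same mapping (a sketch equivalent to the ord/chr approach above):

```python
import string

word = 'green'
# index into 'abcdefghijklmnopqrstuvwxyz' instead of subtracting 97
numbers = [string.ascii_lowercase.index(letter) for letter in word.lower()]
print(numbers)  # [6, 17, 4, 4, 13]

# and back again
print(''.join(string.ascii_lowercase[n] for n in numbers))  # green
```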

How to convert letters to numbers?

不即不离 2025-02-05 10:48:22

Got it working. If anyone is up to commenting on what the code does, feel free to edit.

void ApplyAIPLabel(Presentation pptPresentation) {
  var customDocumentProperties = pptPresentation.CustomDocumentProperties;
  var typeDocCustomProps = customDocumentProperties.GetType();
  var propertyName = "MSIP_Label_<GUID>_Enabled";
  var propertyValue = "True";
  object[] oArgs = { propertyName, false, MsoDocProperties.msoPropertyTypeString, propertyValue };
  typeDocCustomProps.InvokeMember("Add", BindingFlags.Default | BindingFlags.InvokeMethod, null, customDocumentProperties, oArgs);
}

C# MS Office Interop: applying Azure Information Protection (AIP)

不即不离 2025-02-05 08:08:49

Have you tried removing the catch(...)? Like what you called in the first sample code, "await _this.clearMeekouFormat();" — does the function clearMeekouFormat(...) actually return the expected promise?

Excel JavaScript API: calling a class function from a command's event handler
