亢潮


亢潮 2025-02-19 20:02:45


COFF relocation types are enumerated at COFF Relocations for x64.

IMAGE_REL_AMD64_ABSOLUTE corresponds with R_X86_64_COPY (no relocation),
IMAGE_REL_AMD64_ADDR64 corresponds with R_X86_64_64 (S+A),
IMAGE_REL_AMD64_ADDR32 corresponds with R_X86_64_32 (S+A),
IMAGE_REL_AMD64_REL32 corresponds with R_X86_64_PC32 (S+A-P).
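The relocation formulas above can be illustrated with a short sketch (not from the original post; the addresses below are made up): S is the symbol address, A the addend, and P the address of the field being relocated.

```python
def apply_relocation(kind, S, A, P=0):
    """S = symbol address, A = addend, P = address of the relocated field."""
    if kind == "IMAGE_REL_AMD64_ABSOLUTE":
        return None                     # no relocation is performed
    if kind in ("IMAGE_REL_AMD64_ADDR64", "IMAGE_REL_AMD64_ADDR32"):
        return S + A                    # absolute: S + A
    if kind == "IMAGE_REL_AMD64_REL32":
        return S + A - P                # PC-relative: S + A - P
    raise ValueError(f"unhandled relocation type: {kind}")

# A PC-relative reference at P = 0x400ffc targeting a symbol at 0x401000
# with addend -4 (the usual rel32 displacement adjustment):
print(hex(apply_relocation("IMAGE_REL_AMD64_REL32", S=0x401000, A=-4, P=0x400ffc)))
```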

COFF x86_64 relocation types

亢潮 2025-02-19 08:06:58


OK, I found the answer. Phoenix does support push-down. My mistake was that I used substr at first; it should be replaced with startsWith or like.

The evidence is shown below.

 /*
    This is the buildScan() implementing Spark's PrunedFilteredScan.
    Spark SQL queries with columns or predicates specified will be pushed down
    to us here, and we can pass that on to Phoenix. According to the docs, this
    is an optimization, and the filtering/pruning will be re-evaluated again,
    but this prevents having to load the whole table into Spark first.
  */
  override def buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row] = {
    new PhoenixRDD(
      sqlContext.sparkContext,
      tableName,
      requiredColumns,
      Some(buildFilter(filters)),
      Some(zkUrl),
      new Configuration(),
      dateAsTimestamp
    ).toDataFrame(sqlContext).rdd
  }
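As an illustration of why startsWith and like can be pushed down while substr cannot: a buildFilter-style translation turns pushed-down Spark filters into a Phoenix WHERE fragment. This is a hypothetical Python sketch, not the real buildFilter (which is Scala inside phoenix-spark); the (kind, column, value) tuples stand in for Spark's Filter classes.

```python
def build_filter(filters):
    # Each filter is a (kind, column, value) stand-in for a Spark Filter.
    clauses = []
    for kind, col, val in filters:
        if kind == "StringStartsWith":   # pushable: becomes LIKE 'val%'
            clauses.append(f"{col} LIKE '{val}%'")
        elif kind == "EqualTo":
            clauses.append(f"{col} = '{val}'")
        # unsupported filters are skipped; Spark re-applies them afterwards
    return " AND ".join(clauses)

print(build_filter([("StringStartsWith", "name", "ab"), ("EqualTo", "city", "NY")]))
# name LIKE 'ab%' AND city = 'NY'
```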

With Spark-Phoenix, I don't want to fetch all of the data from HBase. Is there a way to fetch data from the table based on my conditions?

亢潮 2025-02-19 07:40:13


Solved it.
I just had to change the extension of the key file from .gpg to .asc and then it worked fine.
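For context, the .asc extension conventionally marks an ASCII-armored (plain-text) key, while .gpg usually holds a binary keyring. A quick check for whether key material is armored (a sketch, not part of the original answer):

```python
def is_ascii_armored(data: bytes) -> bool:
    # Armored keys are plain text wrapped in BEGIN/END PGP markers;
    # binary .gpg keyrings are not.
    return data.lstrip().startswith(b"-----BEGIN PGP PUBLIC KEY BLOCK-----")

print(is_ascii_armored(b"-----BEGIN PGP PUBLIC KEY BLOCK-----\n..."))  # True
```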

Trying to replace apt_key in an Ansible playbook

亢潮 2025-02-19 02:30:26


UPDATE: I wrote up something on the MySQL installer quite some time ago. It became quite long as I was checking the installer concept. I should review it again.


UPDATE: This installer is a modified MSI file where administrative installation (basically a glorified file extraction from the MSI) has been disabled along with a number of other features that MSI files are supposed to support as standard. Corporate environments need these features for large scale deployment of the product (silent installation on many computers with customized settings). You would have to ask the vendor why the features are blocked.

This modified and non-standard MSI installs its own setup launcher application which in turn can trigger individual MSI files to be installed via a custom GUI. This launcher application requires the .NET Framework version 4.5.2. Make sure you have .NET installed. It seems there are 2 versions of the MSI: one with embedded MSI files and a WEB-version which installs the launcher only.

Launcher application

Administrative Installation: You can use the command line below (technical trick) to force an administrative installation to be performed on the non-standard MSI. You create a transform which deletes entries from the LaunchCondition table and apply it via the msiexec.exe command line. Transforms can be created with Orca - the MSI SDK MSI editor tool:

msiexec /a mysql-installer-community-8.0.29.0.msi TRANSFORMS=transform.mst TARGETDIR=D:\mysql-installer-community-8.0.29.0

This will extract all embedded files from the MSI and you will see the launcher executables and a folder called "Product Cache" which contains all the embedded MSI files. This is not the case if you install the WEB-edition of the MSI - which contains only the launcher and no embedded MSI files (I presume). The individual MSI files will be downloaded from the web via the launcher it seems.

With all the setups extracted you could potentially install only a selection of them, or all of them in sequence. Note that they might need to be installed in a specific order - I don't know.
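If you do install the extracted MSIs yourself, the command lines could be built along these lines. This is a sketch only: the cache path and MSI file names below are made up, while msiexec's /i (install), /qn (fully silent) and /l*v (verbose log) switches are standard.

```python
def silent_install_cmds(cache_dir, msi_names):
    # /i = install, /qn = no UI (fully silent), /l*v = verbose log per package
    return [f'msiexec /i "{cache_dir}\\{name}" /qn /l*v "{name}.log"'
            for name in msi_names]

cmds = silent_install_cmds(r"D:\mysql-installer-community-8.0.29.0\Product Cache",
                           ["mysql_server.msi", "mysql_workbench.msi"])
print(cmds[0])
```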


Deployment Debugging

Below are some general debugging suggestions for failing MSI installers:

I would run some basic checks on the obvious things first (here is the original deployment checklist): 1) verify your installation media (re-download), 2) check for missing runtimes, 3) run setup with admin rights, 4) reboot before trying again, 5) check disk space, 6) check for malware, 7) try on a clean virtual machine (or another physical computer), 8) install with a fresh local admin account (can work if there are user profile errors), 9) disable anti-virus or malware scanners before launching setup (scan the setup file first via virustotal.com), 10) check for corporate policies preventing interactive installation, etc.


Essential Approach

The above is a general-purpose checklist. What usually works, though, is to create and inspect an MSI log file.

MSI Logging: I would enable MSI global logging policy by modifying this registry key (or going via policy):
[HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\Installer]
"Logging"="voicewarmup"
"Debug"=dword:00000007

This will generate log files named MSI*.log where * is a random number in the TEMP directory for each MSI based setup.

Sort the TEMP folder by time / date and look for the recent log entries on top. Open any MSI log file(s) you can find and have a look.
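Finding the newest MSI*.log files can be scripted as well; a small sketch, assuming the logs live in the current user's TEMP directory as described above:

```python
import glob
import os
import tempfile

# Newest MSI*.log files first, matching the "sort TEMP by date" step.
logs = sorted(glob.glob(os.path.join(tempfile.gettempdir(), "MSI*.log")),
              key=os.path.getmtime, reverse=True)
for path in logs[:5]:
    print(path)
```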

MSI Log Inspection: The first thing to try is to search the log for "value 3". Further tips for understanding MSI log files can be found here - look in section "Interpreting MSI Log Files":
Enable installation logs for MSI installer without any command line arguments.



MySQL installer starts to install, then closes

亢潮 2025-02-18 20:54:24


Generally, when you write an OR, a UNION ALL of the same queries with either of the conditions will have better performance than the OR.

SELECT *
  FROM table
 WHERE product_return_date IS NULL
 UNION ALL
SELECT *
  FROM table
 WHERE product_return_date > SYSDATE

Keep in mind, if you use NVL and there is an index on the product_return_date, it will prevent the use of the index.
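The equivalence of the two forms can be sanity-checked with a quick script. This sketch uses SQLite as a stand-in for Oracle (CURRENT_TIMESTAMP replaces SYSDATE, and the table and rows are made up), so it says nothing about Oracle's index usage, only that both queries return the same rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, product_return_date TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, None), (2, "2999-01-01"), (3, "2000-01-01")])

or_rows = con.execute(
    "SELECT id FROM t WHERE product_return_date IS NULL "
    "OR product_return_date > CURRENT_TIMESTAMP").fetchall()
union_rows = con.execute(
    "SELECT id FROM t WHERE product_return_date IS NULL "
    "UNION ALL "
    "SELECT id FROM t WHERE product_return_date > CURRENT_TIMESTAMP").fetchall()

print(sorted(or_rows) == sorted(union_rows))  # True
```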

Improving SQL: an OR used in the WHERE clause

亢潮 2025-02-18 19:32:20


Using array.reduce and string.split to achieve your desired result:

let permissions = ["newsfeeds-alerts-view","settings-group_details-view","settings-group_details-edit","settings-privileges-view","settings-privileges-edit","settings-my_groups-create","settings-my_groups-view","settings-my_groups-edit","settings-my_groups-delete","settings-users-view","settings-users-edit","settings-users-delete","notifications-email-create","notifications-jira-create","notifications-jira-view","notifications-non_itr_ticket-create","notifications-non_itr_ticket-update","workspace_dashboard-worksapce-create","workspace_dashboard-worksapce-view","dashboard-geographic-maps-view","dashboard-geographic-report-itr-view","configurations-create_alerts-create","configurations-notifications-jira-create",];


const output = permissions.reduce((r, s) => {
    const path = s.split("-");
    if (path.length > 1) {
        const name = path.pop();
        const last = path.pop();
        let destination = r;
        for (let key of path) {
            destination[key] = destination[key] || {};
            destination = destination[key];
        }
        destination[last] = destination[last] || [];
        destination[last].push({ name });
    }
    return r;
}, {});
console.log(output);

Converting a permissions array to JSON

亢潮 2025-02-18 16:49:05


Change

if ($woocommerce->cart->get_cart_total() != 0 ) {
    return;
}

To

if ($woocommerce->cart->get_cart_total() != 0 ) {
    return $fields;
}

If the cart total = $0

亢潮 2025-02-18 14:39:31


After a bit more searching and trial-and-error, I found that setting GIT_TRACE_CURL=1 (as a CI/CD variable) causes the runner to display all of the response headers. The correlation ID is in the X-Request-Id field:

< Cache-Control: no-cache
< Content-Type: text/plain; charset=utf-8
< Referrer-Policy: strict-origin-when-cross-origin
< Vary: Accept
< WWW-Authenticate: Basic realm="GitLab"
< X-Accel-Buffering: no
< X-Content-Type-Options: nosniff
< X-Download-Options: noopen
< X-Frame-Options: SAMEORIGIN
< X-Permitted-Cross-Domain-Policies: none
< X-Request-Id: 01G6RYV1S0CP53PM5YFG00YGB4
< X-Runtime: 0.089969
< X-Xss-Protection: 1; mode=block
< Date: Thu, 30 Jun 2022 00:05:42 GMT
< Content-Length: 26
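Extracting the correlation ID from the trace output can be scripted; a small sketch (the trace text below is abbreviated from the header dump above):

```python
import re

# Abbreviated GIT_TRACE_CURL output containing the header of interest.
trace = """< X-Frame-Options: SAMEORIGIN
< X-Request-Id: 01G6RYV1S0CP53PM5YFG00YGB4
< X-Runtime: 0.089969
"""

m = re.search(r"^< X-Request-Id: (\S+)", trace, re.MULTILINE)
print(m.group(1))  # 01G6RYV1S0CP53PM5YFG00YGB4
```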

How do you get the correlation ID of a GitLab CI job?

亢潮 2025-02-18 13:16:27


I think if you work with Windows, you should use a different method of Date(). I write code like this:

filename: (req, file, cb) => { cb(null, new Date().toDateString() + file.originalname) }
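The underlying issue is that Windows forbids characters such as : in file names, and many default time formats contain colons; toDateString() happens to avoid them. The same idea in Python, as a sketch (the format string is an assumption, not from the original answer):

```python
from datetime import datetime

def safe_name(original):
    # %H-%M-%S instead of %H:%M:%S: colons are illegal in Windows file names.
    return datetime.now().strftime("%Y-%m-%d_%H-%M-%S_") + original

name = safe_name("photo.png")
print(":" in name)  # False
```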

ENOENT: no such file or directory

亢潮 2025-02-18 07:38:33


Adapt this to your code and you should get what you need.

Dictionary<string, string> d = new Dictionary<string, string>();
// add the item if it does not exist yet
if (!d.ContainsKey("1001"))
{
    d.Add("1001", "XPTO");
}
// get the number of items in the dictionary
var x = d.Count;
if (x > 0)
{
    //do stuff...
}
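For comparison, the same add-if-absent-then-count pattern in Python (a sketch, not part of the original answer; dict.setdefault does the existence check in one step):

```python
d = {}
d.setdefault("1001", "XPTO")
d.setdefault("1001", "other")   # ignored: key "1001" already present
if len(d) > 0:
    print(len(d), d["1001"])    # 1 XPTO
```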

How to compare two ComboBox items and find whether at least one matches in C#

亢潮 2025-02-18 04:03:40

I ended up asking on the Plotly community forum. See solution provided here: https://community.plotly.com/t/how-to-use-other-peoples-react-components-in-my-dash-app/65627/2

How to use other people's React components in my Dash app?

亢潮 2025-02-17 20:48:51
var multi:string[][] = [["String", "String","String"],["String","String","String"]]  
console.log(multi[0][0]) 

If value

亢潮 2025-02-17 19:17:38


You can traverse the data dictionary and append values to a new list:

data = {'token_1': [['cat', 'run','today'],['dog', 'eat', 'meat']],
        'token_2': [[ 'in', 'the' , 'morning','cat', 'run', 'today',
                      'very', 'quick'],['dog', 'eat', 'meat', 'chicken', 'from', 'bowl']]}

l = []
for i in range(len(data["token_1"])):
    l.append([])
    for j in range(len(data["token_1"][i])):
        # list.index() raises ValueError for missing items (it never
        # returns -1), so guard with a membership test first
        if data["token_1"][i][j] in data["token_2"][i]:
            l[i].append(data["token_2"][i].index(data["token_1"][i][j]))
print(l)

Note that the other solutions look much clearer and more readable; this is only an alternative to a list comprehension.

Output:

[[3, 4, 5], [0, 1, 2]]
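The list comprehension the note alludes to might look like this (a sketch over the same data dict):

```python
data = {'token_1': [['cat', 'run', 'today'], ['dog', 'eat', 'meat']],
        'token_2': [['in', 'the', 'morning', 'cat', 'run', 'today',
                     'very', 'quick'], ['dog', 'eat', 'meat', 'chicken', 'from', 'bowl']]}

# For each pair of token lists, collect the index in token_2 of every
# token_1 word that actually occurs there.
result = [[t2.index(word) for word in t1 if word in t2]
          for t1, t2 in zip(data["token_1"], data["token_2"])]
print(result)  # [[3, 4, 5], [0, 1, 2]]
```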

Find words in an array and get the indices in a pandas DataFrame

亢潮 2025-02-17 16:55:32


Make a dummy file object that ignores writes, and supports the context manager interface:

class NoFile:
    def __enter__(self): return self
    # Propagate any exceptions that were raised, explicitly.
    def __exit__(self, exc_type, exc_value, exc_tb): return False
    # Ignore the .write method when it is called.
    def write(self, data): pass
    # We can extend this with other dummy behaviours, e.g.
    # returning an empty string if there is an attempt to read.

Make a helper function that creates one of these instead of a normal file when the filename is None:

def my_open(filename, *args, **kwargs):
    return NoFile() if filename is None else open(filename, *args, **kwargs)

Use with blocks to manage the file lifetimes, as you should do anyway - but now use my_open instead of open:

def myFunc(output1=None,output2=None):
    X, Y = 0, 0
    with my_open(output1, 'w') as f1, my_open(output2, 'w') as f2:
        for i in range(1000):
            X, Y = someCalculation(X, Y) #calculations happen here
            f1.write(X)
            f2.write(Y)
    return X, Y
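A possible alternative to the NoFile class (an assumption on my part, not from the original answer): open os.devnull when no filename is given, so writes go to a real file handle that the OS discards. This trades the zero-cost dummy object for slightly simpler code:

```python
import os

def my_open_devnull(filename, *args, **kwargs):
    # os.devnull ("/dev/null" or "nul") discards anything written to it,
    # so the context-manager and write() behaviour come for free.
    return open(os.devnull if filename is None else filename, *args, **kwargs)

with my_open_devnull(None, "w") as f:
    f.write("discarded")
```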

Optionally writing output to a file inside a loop

亢潮 2025-02-17 11:15:40


I did spend some time looking into this problem. It turns out that executing the command: boost::fibers::use_scheduling_algorithm< priority_scheduler >() creates a new priority_scheduler object with its own fiber queue. And this scheduler is associated with a context that is specific to the thread it is running in. So, in my circumstance, when I created a new fiber it ended up in the queue specific to the calling thread (th2, which wasn't running fibers) instead of the thread that was running all my fibers, th1.

So, I abandoned my idea of creating a fiber to run in th1 via a call from th2. I am now using a queue that holds fiber launch requests from external threads. The fiber thread (th1) checks this queue when it executes the scheduler's pick_next() function and, if requests exist, creates the fibers and adds them to th1's scheduler queue. It works fine, though I have an intermediate queue which I would prefer not to have (for aesthetic reasons only).
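The launch-request queue described above can be sketched in a language-neutral way. This is Python, not Boost code: ordinary threads stand in for th1/th2, and running a callable stands in for creating a fiber on th1's scheduler.

```python
import queue
import threading

launch_requests = queue.Queue()
results = []

def fiber_thread():
    # Stand-in for th1's pick_next(): drain pending launch requests and
    # "create fibers" (here: just run the callables) on th1's own side.
    while True:
        fn = launch_requests.get()
        if fn is None:          # sentinel: stop the thread
            break
        results.append(fn())

t = threading.Thread(target=fiber_thread)
t.start()
launch_requests.put(lambda: "fiber ran in th1")  # enqueued "from th2"
launch_requests.put(None)
t.join()
print(results)
```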

Launching a Boost fiber in thread 1 from a call in thread 2
