萌面超妹


萌面超妹 2025-02-06 04:19:44

You have to print your "target" the same way you scan your "source": print all the elements of the array one by one.

Reversing the contents of an array

萌面超妹 2025-02-06 02:39:33

The compiler is hoisting the inline asm out of your repeat loop, and thus out of your timed region.

If your goal is performance, https://gcc.gnu.org/wiki/DontUseInlineAsm. The useful thing to spend your time learning first is SIMD intrinsics (and how they compile to asm) like _mm256_add_epi64 to add 4x uint64_t with a single AVX2 instruction. See https://stackoverflow.com/tags/sse/info (Compilers can auto-vectorize decently for a simple sum like this, which you could see the benefit from if you used a smaller array and put a repeat loop inside the timed region to get some cache hits.)

If you want to play around with asm to test what's actually fast on various CPUs, you can do that in a stand-alone static executable, or a function you call from C++. https://stackoverflow.com/tags/x86/info has some good performance links.

Re: benchmarking at -O0: yes, the compiler makes slow asm at the default -O0, which aims for consistent debugging and doesn't try to optimize at all. It's not much of a challenge to beat it when it has its hands tied behind its back.


Why your asm can get hoisted out of the timed regions

Without being asm volatile, your asm statement is a pure function of the inputs you've told the compiler about, which are a pointer, a length, and the initial value of sum=0. It does not include the pointed-to memory because you didn't use a dummy "m" input for that. (How can I indicate that the memory *pointed* to by an inline ASM argument may be used?)
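A dummy "m" input of the kind described could look like the sketch below (assumptions: x86-64 GCC/Clang, AT&T syntax, and a loop body reconstructed for illustration since the question's exact asm isn't shown here). The cast to a pointer-to-unbounded-array tells the compiler the whole buffer is read, without forcing a full "memory" clobber:

```cpp
#include <cstddef>
#include <cstdint>

// Sketch: a sum loop whose asm statement declares the pointed-to array as an
// input via a dummy "m" operand, so the optimizer can't hoist it past stores
// to the array or out of a repeat loop for lack of visible dependencies.
uint64_t sum_array(const uint64_t *numbers, size_t length) {
    uint64_t sum = 0;
    if (length == 0) return sum;   // the asm loop below assumes length != 0
    size_t idx = 0;
    asm("1:\n\t"
        "addq (%[num],%[idx],8), %[sum]\n\t"
        "incq %[idx]\n\t"
        "cmpq %[len], %[idx]\n\t"
        "jne 1b"
        : [sum] "+&r"(sum), [idx] "+&r"(idx)
        : [num] "r"(numbers), [len] "r"(length),
          "m"(*(const uint64_t (*)[]) numbers)  // dummy input: the whole array
        : "cc");
    return sum;
}
```

With this, the statement is still a pure function of its inputs (no `volatile`, no `"memory"` clobber), but the array contents are now among those inputs, so the compiler must keep it ordered with respect to code that writes the array.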

Without a "memory" clobber, your asm statement isn't ordered wrt. function calls, so GCC is hoisting the asm statement out of the loop. See How does Google's `DoNotOptimize()` function enforce statement ordering for more details about that effect of the "memory" clobber.

Have a look at the compiler output on https://godbolt.org/z/KeEMfoMvo and see how it inlined into main. -O2 and higher enables -finline-functions, while -O1 only enables -finline-functions-called-once and this isn't static or inline so it has to emit a stand-alone definition in case of calls from other compilation units.

75ns is just the timing overhead of std::chrono functions around a nearly-empty timed region. It is actually running, just not inside the timed regions. You can see this if you single-step the asm of your whole program, or for example set a breakpoint on the asm statement. When doing asm-level debugging of the executable, you could help yourself find it by putting a funky instruction like mov $0xdeadbeef, %eax before xor %eax,%eax, something you can search for in the debugger's disassembly output (like GDB's layout asm or layout reg; see asm debugging tips at the bottom of https://stackoverflow.com/tags/x86/info). And yes, you do often want to look at what the compiler did when debugging inline asm, how it filled in your constraints, because stepping on its toes is a very real possibility.

Note that a "memory" clobber without asm volatile would still let GCC do Common Subexpression Elimination (CSE) between two invocations of the asm statement, if there was no function call in between. Like if you put a repeat loop inside a timed region to test performance on an array small enough to fit in some level of cache.

Sanity-checking your benchmark

Is this a normal reading

It's wild that you even have to ask that. 99999999 8-byte integers in 75ns would be a memory bandwidth of 99999999 * 8 B / 75 ns = 10666666 GB/s, while fast dual-channel DDR4 might hit 32 GB/s. (Or cache bandwidth if it was that large, but it's not, so your code bottlenecks on memory).

Or a 4GHz CPU would have had to run at 99999999 / (75*4) = 333333.33 add instructions per clock cycle, but the pipeline is only 4 to 6 uops wide on modern CPUs, with taken-branch throughputs of at best 1 for a loop branch. (https://uops.info/ and https://agner.org/optimize/)

Even with AVX-512, that's 2/clock 8x uint64_t additions per core, but compilers don't rewrite your inline asm; that would defeat its purpose compared to using plain C++ or intrinsics.

This is pretty obviously just std::chrono timing overhead from a near-empty timed region.


Asm code-review: correctness

As mentioned above, How can I indicate that the memory *pointed* to by an inline ASM argument may be used?

You're also missing an & early-clobber declaration: "+r"(sum) should be "+&r"(sum). Without it, the compiler could in theory pick the same register for sum as for one of the inputs. But since sum is also an input, it could only do that if numbers or length were also 0.

It's kind of a toss-up whether it's better to xor-zero inside the asm for an "=&r" output, or better to use "+&r" and leave that zeroing to the compiler. For your loop counter, it makes sense because the compiler doesn't need to know about that at all. But by manually picking RAX for it (with a clobber), you're preventing the compiler from choosing to have your code produce sum in RAX, like it would want for a non-inline function. A dummy [idx] "=&r" (dummy) output operand will get the compiler to pick a register for you, of the appropriate width, e.g. intptr_t.


Asm code review: performance

As David Wohlferd said: xor %eax, %eax to zero RAX. Implicit zero-extension saves a REX prefix. (1 byte of code-size in the machine code. Smaller machine-code is generally better.)

It doesn't seem worth hand-writing asm if you're not going to do anything smarter than what GCC would on its own without -ftree-vectorize or with -mgeneral-regs-only or -mno-sse2 (even though it's baseline for x86-64, kernel code generally needs to avoid SIMD registers). But I guess it works as a learning exercise in how inline asm constraints work, and a starting point for measuring. And to get a benchmark working so you can then test better loops.

Typical x86-64 CPUs can do 2 loads per clock cycle (Intel since Sandybridge, AMD since K8), or 3/clock on Alder Lake. On modern CPUs with AVX/AVX2, each load can be 32 bytes wide (or 64 bytes with AVX-512), best case on L1d hits. Or more like 1/clock with only L2 hits on recent Intel, which is a reasonable cache-blocking target.

But your loop can at best run 1x 8-byte load per clock cycle, because loop branches can run 1/clock, and add mem, %[sum] has a 1 cycle loop-carried dependency through sum.

That might max out DRAM bandwidth (with the help of HW prefetchers), e.g. 8 B / cycle * 4GHz = 32GB/s, which modern desktop/laptop Intel CPUs can manage for a single core (but not big Xeons). But with fast enough DRAM and/or a slower CPU relative to it, even DRAM can avoid being a bottleneck. But aiming for DRAM bandwidth is quite a low bar compared to L3 or L2 cache bandwidth.

So even if you want to keep using scalar code without movdqu / paddq (or better get to an alignment boundary for memory-source paddq, if you want to spend some code-size to optimize this loop), you could still unroll with two register accumulators for sum which you add at the end. This exposes some instruction-level parallelism, allowing two memory-source loads per clock cycle.
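The two-accumulator unroll is shown below in plain C++ for clarity (the same structure applies inside an asm loop); the function name and tail handling are illustrative, not from the original:

```cpp
#include <cstddef>
#include <cstdint>

// Sketch: two independent accumulators break the single loop-carried
// dependency chain through sum, letting two memory-source adds issue
// per clock cycle instead of one.
uint64_t sum_two_acc(const uint64_t *numbers, size_t length) {
    uint64_t s0 = 0, s1 = 0;
    size_t i = 0;
    for (; i + 1 < length; i += 2) {
        s0 += numbers[i];      // dependency chain 1
        s1 += numbers[i + 1];  // dependency chain 2, independent of chain 1
    }
    if (i < length)            // odd-length tail
        s0 += numbers[i];
    return s0 + s1;            // combine the chains once, at the end
}
```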


You can also avoid the cmp, which can reduce loop overhead. Fewer uops lets out-of-order exec see farther.

Get a pointer to the end of the array and index from -length up towards zero. Like (arr+len)[idx] with for(idx=-len ; idx != 0 ; idx++). Looping backwards through the array is on some CPUs a little worse for some of the HW prefetchers, so generally not recommended for loops that are often memory bound.

See also Micro fusion and addressing modes - an indexed addressing mode can only stay micro-fused in the back-end on Intel Haswell and later, and only for instructions like add that RMW their destination register.

So your best bet would be a loop with one pointer increment and 2 to 4 add instructions using it, and a cmp/jne at the bottom.
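One way such a loop might look is sketched below (assumptions: x86-64 GCC, AT&T syntax, and a single add per iteration for brevity rather than the 2 to 4 suggested above). The add uses a plain one-register addressing mode, so it can stay micro-fused, with the cmp/jne at the bottom:

```cpp
#include <cstddef>
#include <cstdint>

// Sketch: pointer-increment loop, non-indexed addressing mode, end-pointer
// comparison at the bottom. A dummy "m" input covers the array contents.
uint64_t sum_ptr_loop(const uint64_t *numbers, size_t length) {
    uint64_t sum = 0;
    if (length == 0) return sum;   // the asm loop assumes at least one element
    const uint64_t *end = numbers + length;
    asm("1:\n\t"
        "addq (%[p]), %[sum]\n\t"   // memory-source add, stays micro-fused
        "addq $8, %[p]\n\t"         // one pointer increment per iteration
        "cmpq %[end], %[p]\n\t"
        "jne 1b"
        : [sum] "+&r"(sum), [p] "+&r"(numbers)
        : [end] "r"(end),
          "m"(*(const uint64_t (*)[]) numbers)  // dummy input: the array
        : "cc");
    return sum;
}
```

Unrolling this (several `addq (%[p])` / `addq 8(%[p])` ... with one larger pointer bump) amortizes the loop overhead across more real work.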

Inline asm array-sum benchmark times near zero for a large array with optimization enabled, even though the result is used

萌面超妹 2025-02-05 20:42:48

To be able to see results for files with extensions that are not directly supported by doxygen, but which contain code in a language doxygen does support, a number of settings have to be set (in this case we have Flutter dart files that are actually Java / Java-like files):

FILE_PATTERNS += *.dart
EXTENSION_MAPPING = dart=java

When files are in different directories or in subdirectories, it is good to look at the settings:

INPUT = 
RECURSIVE=YES

as well.

In case a language is not supported directly but can be transformed into a language that is supported by doxygen, it is good to look at the doxygen filter possibilities as well (settings like INPUT_FILTER etc.).

Doxygen for Flutter

萌面超妹 2025-02-05 20:23:57


Do not forget to add the code below to your babel.config.js:

module.exports = {
  ...
  plugins: [
      ...
      'react-native-reanimated/plugin',
  ],

};

I am trying my code, but this error always shows up

萌面超妹 2025-02-05 13:56:46


I have the same situation as you: the display is OK, but the error appears in Logcat.
That's my solution:
(1) Initialize the RecyclerView & bind the adapter in onCreate()

RecyclerView mRecycler = (RecyclerView) this.findViewById(R.id.yourid);
mRecycler.setAdapter(adapter);

(2) call notifyDataSetChanged when you get the data

adapter.notifyDataSetChanged();

In RecyclerView's source code, there is a runnable that checks the state of the data.

public RecyclerView(Context context, @Nullable AttributeSet attrs, int defStyle) {
    super(context, attrs, defStyle);
    this.mObserver = new RecyclerView.RecyclerViewDataObserver(null);
    this.mRecycler = new RecyclerView.Recycler();
    this.mUpdateChildViewsRunnable = new Runnable() {
        public void run() {
            if(RecyclerView.this.mFirstLayoutComplete) {
                if(RecyclerView.this.mDataSetHasChangedAfterLayout) {
                    TraceCompat.beginSection("RV FullInvalidate");
                    RecyclerView.this.dispatchLayout();
                    TraceCompat.endSection();
                } else if(RecyclerView.this.mAdapterHelper.hasPendingUpdates()) {
                    TraceCompat.beginSection("RV PartialInvalidate");
                    RecyclerView.this.eatRequestLayout();
                    RecyclerView.this.mAdapterHelper.preProcess();
                    if(!RecyclerView.this.mLayoutRequestEaten) {
                        RecyclerView.this.rebindUpdatedViewHolders();
                    }

                    RecyclerView.this.resumeRequestLayout(true);
                    TraceCompat.endSection();
                }

            }
        }
    };

In dispatchLayout(), we can see where the error comes from:

void dispatchLayout() {
    if(this.mAdapter == null) {
        Log.e("RecyclerView", "No adapter attached; skipping layout");
    } else if(this.mLayout == null) {
        Log.e("RecyclerView", "No layout manager attached; skipping layout");
    } else {

RecyclerView: No adapter attached; skipping layout

萌面超妹 2025-02-05 13:44:51

  • You use std::move whenever you have a named object that you no longer use. Just forget all you think you know about std::move and learn one simple rule: std::move means: I no longer need this (named object).
    So func1(std::move(func2(...))) makes no sense. The returned object has no name, there is no way for you to use it again. The compiler knows that and will already use move semantics without std::move. Saying "I no longer need this" is pointless because after the func1 call the temporary object no longer exists.

  • make_shared allocates a new object to point to and constructs it in place. But then when you emplace_back, what you are emplacing is the shared_ptr, not the object it points to. So what happens is that the shared_ptr is constructed in place inside the vector, with a pointer to the object it allocates and constructs.
    The shared_ptr in the for loop doesn't exist at all. That's the point of emplace_back: it doesn't create a temporary object but constructs the object in-place inside the vector directly.

  • push_back normally takes an object and copies it into the vector. So you would create a temporary object, copy it, and destroy the temporary. But the STL isn't stupid and covers this case too. If you push_back an object that can be moved, then it is forwarded to emplace_back. So again the pair will be constructed directly in-place inside the vector.
    That leaves the vector_declared_outside_for_loop. Since it is an existing object, it can't be constructed in-place. Since it is a named object, it cannot be moved. So the objects will be copied.

    For a shared_ptr that means the internal pointer will be copied and the reference count for the object will be incremented atomically. So at the end of the loop the shared_ptr will have a reference count of .size() + 1. Then at the end of the scope the original shared_ptr goes out of scope reducing the reference count by 1. The object used in the vector remains.
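The reference-count behaviour described above can be sketched as follows (the function name and counts-as-return-value shape are illustrative, not from the original):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Sketch: each copy of a named shared_ptr pushed into a vector copies the
// internal pointer and atomically increments the control block's refcount.
long copies_use_count(std::size_t n) {
    auto sp = std::make_shared<int>(42);
    std::vector<std::shared_ptr<int>> v;
    for (std::size_t i = 0; i < n; ++i)
        v.push_back(sp);       // copy, not move: sp is named and still used
    return sp.use_count();     // n copies in v, plus sp itself: n + 1
}
```

When `sp` later goes out of scope, the count drops back by one and the copies in the vector keep the object alive.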

Constructing a shared_ptr in a for loop with move assignment

萌面超妹 2025-02-05 07:45:12


Try/except/else is the most Pythonic way of doing this. Note that the code below only catches the error when a value entered isn't a base-10 integer (which is probably what you want, but worth noting); the else block runs the multiplication only when no exception was raised.

try:
    x = int(input("Please enter a number: "))
    y = int(input("Please enter another number: "))
except ValueError:
    print("Entered values must be a base 10 integer")
else:
    print(x * y)

How to print an error saying an integer must be entered if either input value is not a number, and multiply x by y when no error is caught

萌面超妹 2025-02-04 20:59:31


Another option is to replace your ignore_list iteration with any() and a generator expression.

For example, using a set comprehension:

action_tickets_list = {ticket['Title'] for ticket in ticket_list if not any(s in ticket['Title'] for s in ignore_list)}

It's similar to

action_tickets_list = set()
for ticket in ticket_list:
    if not any(s in ticket['Title'] for s in ignore_list):
        action_tickets_list.add(ticket['Title'])

If you have a large ignore_list, it's probably worth compiling ignore_list into a single regular expression. That way each title is scanned only once.

import re
ignore = re.compile('|'.join(ignore_list))
action_tickets_list = set()
for ticket in ticket_list:
    if not ignore.search(ticket['Title']):
        action_tickets_list.add(ticket['Title'])

Do strings from a list exist in a list of dict keys?

萌面超妹 2025-02-04 10:58:13


You have to play with the aspect ratio:

fig.update_layout({"scene": {"aspectratio": {"x": 2, "y": 2, "z": 0.75}}})

Here are a couple of pictures. The first without setting aspect ratio, the second with the code above:

(screenshots: the plot with the default aspect ratio, and the plot with the code above applied)

How do I scale the axes of a plotly plot?

萌面超妹 2025-02-04 04:24:16


Just press Shift + F10 again. This is the fastest way, less time-consuming than Clean -> Rebuild.

Execution failed for task ':app:mergeDebugResources': resource file not found in the source set

萌面超妹 2025-02-04 03:46:48


Strict mode should not be used in production. Strict mode runs a synchronous deep watcher on the state tree to detect inappropriate mutations, and this can slow down the application. To avoid changing strict to false each time you want to create a production bundle, you should use a build tool that makes the strict value false when creating the production bundle.

You can enable that mode like this:

const store = new Vuex.Store({
  // ...
  strict: true
});

Vuex: why disable strict mode in production?

萌面超妹 2025-02-04 01:32:58


As for positioning Tooltip, it does not go any further than setting preferBelow to true or false and adding offset or padding, which will make it appear either on bottom or top of the widget.

If you want more customization, you have to go with showing an OverlayEntry on hover over your widget and then hiding it on some condition or after a duration. Much more trouble than simply using a Tooltip, but there the alignment can be adjusted to your needs.

Some packages to look into, that simplify working with overlays:

https://pub.dev/packages/modals

https://pub.dev/packages/flutter_portal

Positioning a Tooltip in Flutter

萌面超妹 2025-02-03 17:49:57


The accepted answer with the overload below does indeed not trigger -Wtype-limits. But it does trigger unused-argument warnings (on the is_signed parameter). To avoid these, the second argument should be left unnamed, like so:

template <typename T> inline constexpr
  int signum(T x, std::false_type) {
  return T(0) < x;
}

template <typename T> inline constexpr
  int signum(T x, std::true_type) {
  return (T(0) < x) - (x < T(0));
}

template <typename T> inline constexpr
  int signum(T x) {
  return signum(x, std::is_signed<T>());
}

For C++11 and higher an alternative could be:

template <typename T>
typename std::enable_if<std::is_unsigned<T>::value, int>::type
inline constexpr signum(T const x) {
    return T(0) < x;  
}

template <typename T>
typename std::enable_if<std::is_signed<T>::value, int>::type
inline constexpr signum(T const x) {
    return (T(0) < x) - (x < T(0));  
}

For me it does not trigger any warnings on GCC 5.3.1.

Is there a standard sign function (signum, sgn) in C/C++?

萌面超妹 2025-02-03 10:08:19


Three steps for when you have an older release definition you need to delete that was previously deployed.

  1. Edit the definition, and delete all the environments.

  2. Create an environment with a default name that is different from the original, and save.

  3. Delete the environment as it will not see it as a valid deployment.

How to delete an Azure DevOps release that shows as deployed - error VS402946

萌面超妹 2025-02-03 03:41:20


Have you checked out pymongoarrow? The latest release has write support, so you can import a CSV file into MongoDB. Here are the release notes and documentation. You can also use mongoimport to import a CSV file (documentation is here), but I can't see any way to exclude fields the way you can with pymongoarrow.

Insert specific columns of a CSV file into a MongoDB collection using a Python script
