Features to add after the MVP
These are features that make sense in the context of the high-level goals of WebAssembly but weren’t part of the initial Minimum Viable Product release.
Note: we are in the process of migrating all post-MVP features to tracking issues.
Tracking Issues
Feature | Tracking issue | Status |
---|---|---|
Specification | 1077 | in progress |
Threads | 1073 | in progress |
Fixed-width SIMD | 1075 | in progress |
Exception handling | 1078 | in progress |
Garbage collection | 1079 | in progress |
ECMAScript module integration | 1087 | not started |
Legacy Future Features
Note: these will soon move to tracking issues.
:star: = Essential features we want to prioritize adding shortly after the MVP.
On Deck for Immediate Design
Great tooling support
:star:
This is covered in the tooling section.
Feature Testing
:star:
Post-MVP, some form of feature-testing will be required. We don’t yet have the experience writing polyfills to know whether `has_feature` is the right primitive building block, so we’re not defining it (or something else) until we gain this experience. In the interim, it’s possible to do a crude feature test (as people do in JavaScript) by just `eval`-ing WebAssembly code and catching validation errors.
See Feature test for a more detailed sketch.
Proposals we might consider in the future
Finer-grained control over memory
Provide access to safe OS-provided functionality including:
- `map_file(addr, length, Blob, file-offset)`: semantically, this operator copies the specified range from `Blob` into the range `[addr, addr+length)` (where `addr+length <= memory_size`), but implementations are encouraged to `mmap(addr, length, MAP_FIXED | MAP_PRIVATE, fd)`.
- `discard(addr, length)`: semantically, this operator zeroes the given range, but the implementation is encouraged to drop the zeroed physical pages from the process’s working set (e.g., by calling `madvise(MADV_DONTNEED)` on POSIX).
- `shmem_create(length)`: create a memory object that can be simultaneously shared between multiple linear memories.
- `map_shmem(addr, length, shmem, shmem-offset)`: like `map_file` except `MAP_SHARED`, which isn’t otherwise valid on read-only Blobs.
- `mprotect(addr, length, prot-flags)`: change protection on the range `[addr, addr+length)` (where `addr+length <= memory_size`).
- `decommit(addr, length)`: equivalent to `mprotect(addr, length, PROT_NONE)` followed by `discard(addr, length)`, and potentially more efficient than performing these operators in sequence.
The `addr` and `length` parameters above would be required to be multiples of `page_size`.

The `mprotect` operator would require hardware memory protection to execute efficiently and thus may be added as an “optional” feature (requiring a feature test to use). To support efficient execution even when no hardware memory protection is available, a restricted form of `mprotect` could be added which is declared statically and only protects low memory (providing the expected fault-on-low-memory behavior of native C/C++ apps).
The above list of functionality mostly covers the set of functionality provided by the `mmap` OS primitive. One significant exception is that `mmap` can allocate noncontiguous virtual address ranges. See the FAQ for rationale.
Large page support
Some platforms offer support for memory pages as large as 16GiB, which can improve the efficiency of memory management in some situations. WebAssembly may offer programs the option to specify a larger page size than the default.
More expressive control flow
Some types of control flow (especially irreducible and indirect) cannot be expressed with maximum efficiency in WebAssembly without patterned output by the relooper and jump-threading optimizations in the engine. Target uses for more expressive control flow are:
- Language interpreters, which often use computed `goto`.
- Functional language support, where guaranteed tail call optimization is expected for correctness and performance.
Options under consideration:
- No action; `while` and `switch` combined with jump-threading are enough.
- Just add `goto` (direct and indirect).
- Add new control-flow primitives that address common patterns.
- Add signature-restricted Proper Tail Calls.
- Add proper tail calls, expanding upon signature-restricted proper tail calls and making it easier to support other languages, especially functional programming languages.
GC/DOM Integration
See GC.md.
Linear memory bigger than 4 GiB
The WebAssembly MVP will support the wasm32 mode of WebAssembly, with linear memory sizes up to 4 GiB using 32-bit linear memory indices. To support larger sizes, the wasm64 mode of WebAssembly will be added in the future, supporting much greater linear memory sizes using 64-bit linear memory indices. wasm32 and wasm64 are both just modes of WebAssembly, to be selected by a flag in a module header, and don’t imply any semantics differences outside of how linear memory is handled. Platforms will also have APIs for querying which of wasm32 and wasm64 are supported.
Of course, the ability to actually allocate this much memory will always be subject to dynamic resource availability.
It is likely that wasm64 will initially support only 64-bit linear memory indices, and wasm32 will leave 64-bit linear memory indices unsupported, so that implementations don’t have to support multiple index sizes in the same instance. However, operators with 32-bit indices and operators with 64-bit indices will be given separate names to leave open the possibility of supporting both in the same instance in the future.
Source maps integration
- Add a new source maps module section type.
- Either embed the source maps directly or just a URL from which source maps can be downloaded.
- Text source maps become intractably large for even moderate-sized compiled codes, so probably need to define new binary format for source maps.
- Gestate ideas and start discussions at the Source Map RFC repository
Coroutines
Coroutines will eventually be part of C++ and are already popular in other programming languages that WebAssembly will support.
Signature-restricted Proper Tail Calls
See the asm.js RFC for a full description of signature-restricted Proper Tail Calls (PTC).
Useful properties of signature-restricted PTCs:
- In most cases, can be compiled to a single jump.
- Can express indirect `goto` via function-pointer calls.
- Can be used as a compile target for languages with unrestricted PTCs; the code generator can use a stack in the linear memory to effectively implement a custom call ABI on top of signature-restricted PTCs.
- An engine that wishes to perform aggressive optimization can fuse a graph of PTCs into a single function.
- To reduce compile time, a code generator can use PTCs to break up ultra-large functions into smaller functions at low overhead.
- A compiler can exert some amount of control over register allocation via the ordering of arguments in the PTC signature.
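As a concrete illustration of what a code generator gains, consider a pair of mutually tail-recursive functions: with proper tail calls, every call site in this C sketch becomes a single jump and the pair runs in constant stack space. (C compilers only guarantee that at higher optimization levels, so the sketch keeps inputs small enough to run either way.)

```c
/* Mutually tail-recursive state machine: each return is a tail call,
   so with PTC support the whole pair compiles to a loop of jumps. */
static int is_odd(unsigned n);

static int is_even(unsigned n) {
    if (n == 0) return 1;
    return is_odd(n - 1);   /* tail call: compilable to a single jump */
}

static int is_odd(unsigned n) {
    if (n == 0) return 0;
    return is_even(n - 1);  /* tail call */
}
```

This is also the shape an engine can fuse: the two functions form a graph of PTCs that collapses into one function with internal branches.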
General-purpose Proper Tail Calls
General-purpose Proper Tail Calls would have no signature restrictions, and therefore be more broadly usable than Signature-restricted Proper Tail Calls, though there would be some different performance characteristics.
Asynchronous Signals
TODO
“Long SIMD”
The initial SIMD API will be a “short SIMD” API, centered around fixed-width 128-bit types and explicit SIMD operators. This is quite portable and useful, but it won’t be able to deliver the full performance capabilities of some of today’s popular hardware. There is a proposal in the SIMD.js repository for a “long SIMD” model which generalizes to wider hardware vector lengths, making more natural use of advanced features like vector lane predication, gather/scatter, and so on. Interesting questions to ask of such a model will include:
- How will this model map onto popular modern SIMD hardware architectures?
- What is this model’s relationship to other hardware parallelism features, such as GPUs and threads with shared memory?
- How will this model be used from higher-level programming languages? For example, the C++ committee is considering a wide variety of possible approaches; which of them might be supported by the model?
- What is the relationship to the “short SIMD” API? “None” may be an acceptable answer, but it’s something to think about.
- What nondeterminism does this model introduce into the overall platform?
What happens when code uses long SIMD on a hardware platform which doesn’t support it? Reasonable options may include emulating it without the benefit of hardware acceleration, or indicating a lack of support through feature tests.
Platform-independent Just-in-Time (JIT) compilation
WebAssembly is a new virtual ISA, and as such applications won’t be able to simply reuse their existing JIT-compiler backends. Applications will instead have to interface with WebAssembly’s instructions as if they were a new ISA.
Applications expect a wide variety of JIT-compilation capabilities. WebAssembly should support:
- Producing a dynamic library and loading it into the current WebAssembly module.
- Lighter-weight mechanisms, such as the ability to add a function to an existing module.
- Explicitly patchable constructs within functions, to allow for very fine-grained JIT-compilation. This includes:
  - Code patching for polymorphic inline caching;
  - Call patching to chain JIT-compiled functions together;
  - Temporary halt-insertion within functions, to trap if a function starts executing while a JIT-compiler’s runtime is performing operations dangerous to that function.
- Access to profile feedback for JIT-compiled code.
- Code unloading capabilities, especially in the context of code garbage collection and defragmentation.
WebAssembly’s JIT interface would likely be fairly low-level. However, there are use cases for higher-level functionality and optimization too. One avenue for addressing these use cases is a JIT and Optimization library.
Multiprocess support
- `vfork`.
- Inter-process communication.
- Inter-process `mmap`.
Trapping or non-trapping strategies.
Presently, when an instruction traps, the program is immediately terminated. This suits C/C++ code, where trapping conditions indicate Undefined Behavior at the source level, and it’s also nice for handwritten code, where trapping conditions typically indicate an instruction being asked to perform outside its supported range. However, the current facilities do not cover some interesting use cases:
- Not all likely-bug conditions are covered. For example, it would be very nice to have a signed-integer add which traps on overflow. Such a construct would add too much overhead on today’s popular hardware architectures to be used in general; however, it may still be useful in some contexts.
- Some higher-level languages define their own semantics for conditions like division by zero and so on. It’s possible for compilers to add explicit checks and handle such cases manually, though more direct support from the platform could have advantages:
- Non-trapping versions of some operators, such as an integer division instruction that returns zero instead of trapping on division by zero, could potentially run faster on some platforms.
- The ability to recover gracefully from traps in some way could make many things possible. Possibly this could involve throwing or possibly by resuming execution at the trapping instruction with the execution state altered, if there can be a reasonable way to specify how that should work.
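As a sketch of the non-trapping variant discussed above, here is the semantics a compiler must currently emit as explicit checks: division by zero returns zero, and the `INT_MIN / -1` overflow case (which also traps in the MVP) is given a defined result here by wrapping, which is one possible choice. The function name is illustrative:

```c
#include <limits.h>

/* Non-trapping signed division: the two cases that trap in MVP
   WebAssembly instead produce defined results. */
static int i32_div_s_nontrapping(int a, int b) {
    if (b == 0) return 0;                         /* would trap */
    if (a == INT_MIN && b == -1) return INT_MIN;  /* would also trap */
    return a / b;
}
```

A dedicated operator could run faster than this on platforms whose hardware division already produces a usable result in these cases, since the two branches could be dropped.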
Additional integer operators
- The following operators can be built from other operators already present; however, in doing so they read at least one non-constant input multiple times, breaking single-use expression tree formation:
  - `i32.min_s`: signed minimum
  - `i32.max_s`: signed maximum
  - `i32.min_u`: unsigned minimum
  - `i32.max_u`: unsigned maximum
  - `i32.sext`: sign-agnostic sign extension; `sext(x, y)` is `shr_s(shl(x, y), y)`
  - `i32.abs_s`: signed absolute value (traps on `INT32_MIN`)
  - `i32.bswap`: sign-agnostic reverse bytes (endian conversion)
  - `i32.bswap16`: sign-agnostic; `bswap16(x)` is `((x>>8)&255)|((x&255)<<8)`
- The following operators are just potentially interesting:
  - `i32.clrs`: sign-agnostic count leading redundant sign bits (defined for all values, including 0)
  - `i32.floor_div_s`: signed division (result is floored)
- The following 64-bit-only operators are potentially interesting as well:
  - `i64.mor`: sign-agnostic 8x8 bit-matrix multiply with or
  - `i64.mxor`: sign-agnostic 8x8 bit-matrix multiply with xor
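The identities above can be checked against plain C reference implementations (a sketch; note that GCC and Clang already expose `__builtin_clrsb`, which matches the proposed `i32.clrs`):

```c
#include <stdint.h>

/* i32.sext: sign-agnostic sign extension; sext(x, y) == shr_s(shl(x, y), y).
   The left shift is done unsigned to avoid C undefined behavior; the
   right shift relies on arithmetic shift of negative values, which is
   implementation-defined in C but universal in practice. */
static int32_t i32_sext(int32_t x, int32_t y) {
    return (int32_t)((uint32_t)x << y) >> y;
}

/* i32.bswap16: ((x>>8)&255) | ((x&255)<<8) */
static uint32_t i32_bswap16(uint32_t x) {
    return ((x >> 8) & 255) | ((x & 255) << 8);
}

/* i32.bswap: full 32-bit byte reversal (endian conversion) */
static uint32_t i32_bswap(uint32_t x) {
    return (x >> 24) | ((x >> 8) & 0xFF00u) |
           ((x << 8) & 0xFF0000u) | (x << 24);
}
```

Each of these reads its input more than once, which is exactly the single-use expression-tree problem the dedicated operators would avoid.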
Additional floating point operators
- `f32.minnum`: minimum; if exactly one operand is NaN, returns the other operand
- `f32.maxnum`: maximum; if exactly one operand is NaN, returns the other operand
- `f32.fma`: fused multiply-add (results always conforming to IEEE 754-2008)
- `f64.minnum`: minimum; if exactly one operand is NaN, returns the other operand
- `f64.maxnum`: maximum; if exactly one operand is NaN, returns the other operand
- `f64.fma`: fused multiply-add (results always conforming to IEEE 754-2008)

The `minnum` and `maxnum` operators would treat `-0.0` as being effectively less than `0.0`. Also, it’s advisable to follow the IEEE 754-2018 draft, which has removed IEEE 754-2008’s `minNum` and `maxNum` (which return qNaN when either operand is sNaN) and replaced them with `minimumNumber` and `maximumNumber`, which prefer to return a number even when one operand is sNaN.

Note that some operators, like `fma`, may not be available or may not perform well on all platforms. These should be guarded by feature tests so that, if available, they behave consistently.
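The `minnum` semantics above differ from plain C in two ways: `<` has no NaN preference, and while C’s `fmin` already prefers the non-NaN operand, it may return either zero for `fmin(-0.0, +0.0)`. A reference sketch for `f64.minnum`, including the `-0.0 < 0.0` ordering:

```c
#include <math.h>

/* f64.minnum: if exactly one operand is NaN, return the other; treat
   -0.0 as effectively less than +0.0. */
static double f64_minnum(double a, double b) {
    if (isnan(a)) return b;
    if (isnan(b)) return a;
    if (a == 0.0 && b == 0.0)       /* -0.0 == +0.0 compares equal */
        return signbit(a) ? a : b;  /* prefer the negative zero */
    return a < b ? a : b;
}
```

The `maxnum` variant is symmetric, preferring `+0.0` over `-0.0`.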
Floating point approximation operators
- `f32.reciprocal_approximation`: reciprocal approximation
- `f64.reciprocal_approximation`: reciprocal approximation
- `f32.reciprocal_sqrt_approximation`: reciprocal sqrt approximation
- `f64.reciprocal_sqrt_approximation`: reciprocal sqrt approximation

These operators would not be required to be fully precise, but the specifics would need clarification.
16-bit and 128-bit floating point support
For 16-bit floating point support, it may make sense to split the feature into two parts: support for just converting between 16-bit and 32-bit or 64-bit formats possibly folded into load and store operators, and full support for actual 16-bit arithmetic.
128-bit is an interesting question because hardware support for it is very rare, so it’s usually going to be implemented with software emulation anyway, so there’s nothing preventing WebAssembly applications from linking to an appropriate emulation library and getting similarly performant results. Emulation libraries would have more flexibility to offer approximation techniques such as double-double arithmetic. If we standardize 128-bit floating point in WebAssembly, it will probably be standard IEEE 754-2008 quadruple precision.
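Double-double arithmetic, mentioned above, represents a value as an unevaluated sum of two doubles; its basic building block is Knuth’s error-free TwoSum transformation, sketched here:

```c
/* TwoSum: computes s = fl(a + b) and the exact rounding error e, so
   that a + b == s + e holds exactly in real arithmetic. Double-double
   libraries build their extended-precision operations on this primitive. */
static void two_sum(double a, double b, double *s, double *e) {
    double sum = a + b;
    double bv  = sum - a;       /* the part of b absorbed into sum */
    double av  = sum - bv;      /* the part of a absorbed into sum */
    *s = sum;
    *e = (a - av) + (b - bv);   /* what rounding threw away */
}
```

For example, `1e16 + 1.0` rounds to `1e16` in double precision, and TwoSum recovers the discarded `1.0` exactly, which is the flexibility an emulation library has that a fixed IEEE 754 quad format would not.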
Full IEEE 754-2008 conformance
WebAssembly floating point conforms to IEEE 754-2008 in most respects, but there are a few areas that are not yet covered.
To support exceptions and alternate rounding modes, one option is to define an alternate form for each of `add`, `sub`, `mul`, `div`, `sqrt`, and `fma`. These alternate forms would have extra operands for rounding mode, masked traps, and old flags, and an extra result for a new flags value. These operators would be fairly verbose, but it’s expected that their use cases will be specialized. This approach has the advantage of exposing no global (even if only per-thread) control and status registers to applications, and of avoiding giving the common operators the possibility of having side effects.
Debugging techniques are also important, but they don’t necessarily need to be in the spec itself. Implementations are welcome (and encouraged) to support non-standard execution modes, enabled only from developer tools, such as modes with alternate rounding, or evaluation of floating point operators at greater precision, to support techniques for detecting numerical instability, or modes using alternate NaN bitpattern rules, to carry diagnostic information and help developers track down the sources of NaNs.
To help developers find the sources of floating point exceptions, implementations may wish to provide a mode where NaN values are produced with payloads containing identifiers helping programmers locate where the NaNs first appeared. Another option would be to offer another non-standard execution mode, enabled only from developer tools, that would enable traps on selected floating point exceptions, however care should be taken, since not all floating point exceptions indicate bugs.
Flushing Subnormal Values to Zero
Many popular CPUs have significant stalls when processing subnormal values, and support modes where subnormal values are flushed to zero which avoid these stalls. And, ARMv7 NEON has no support for subnormal values and always flushes them. A mode where floating point computations have subnormals flushed to zero in WebAssembly would address these two issues.
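For reference, a subnormal is any nonzero value smaller in magnitude than the smallest normal value; in C this is observable with `fpclassify`, and it is exactly these values that a flush-to-zero mode would replace with `0.0`:

```c
#include <float.h>
#include <math.h>

/* DBL_MIN is the smallest *normal* double; halving it yields a
   subnormal -- the kind of operand that stalls many CPUs and that
   flush-to-zero modes replace with 0.0. */
static int is_subnormal(double x) {
    return fpclassify(x) == FP_SUBNORMAL;
}
```

A program run under a flush-to-zero mode would simply never observe a value for which `is_subnormal` returns true.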
Integer Overflow Detection
There are two different use cases here, one where the application wishes to handle overflow locally, and one where it doesn’t.
When the application is prepared to handle overflow locally, it would be useful to have arithmetic operators which can indicate when overflow occurred. An example of this is the checked arithmetic builtins available in compilers such as clang and GCC. If WebAssembly is made to support nodes with multiple return values, that could be used instead of passing a pointer.
There are also several use cases where an application does not wish to handle overflow locally. One family of examples includes implementing optimized bignum arithmetic, or optimizing JavaScript Numbers to use int32 operators. Another family includes compiling code that doesn’t expect overflow to occur, but which wishes to have overflow detected and reported if it does happen. These use cases would ideally like to have overflow trap, and to handle that trap specially. Following the rule that explicitly signed and unsigned operators trap whenever the result value cannot be represented in the result type, it would be possible to add explicitly signed and unsigned versions of integer `add`, `sub`, and `mul` which would trap on overflow. The main reason we haven’t added these already is that they’re not efficient for general-purpose use on several of today’s popular hardware architectures.
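The two styles map directly onto the checked arithmetic builtins in Clang and GCC: a flag-returning form for local handling, and a trap-style form built on top of it. A sketch (the names `checked_add_i32` and `trapping_add_i32` are illustrative):

```c
#include <stdint.h>
#include <stdlib.h>

/* Local handling: report overflow to the caller via the return flag. */
static int checked_add_i32(int32_t a, int32_t b, int32_t *out) {
    return __builtin_add_overflow(a, b, out);  /* nonzero on overflow */
}

/* Non-local handling: like a hypothetical signed i32 add that traps
   whenever the result is unrepresentable. */
static int32_t trapping_add_i32(int32_t a, int32_t b) {
    int32_t r;
    if (__builtin_add_overflow(a, b, &r))
        abort();  /* the "trap" */
    return r;
}
```

With multiple return values, the flag-returning form could come back as a `(result, overflowed)` pair instead of writing through a pointer.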
Better feature testing support
The MVP feature testing situation could be improved by allowing unknown/unsupported instructions to decode and validate. The runtime semantics of these unknown instructions could either be to trap or call a same-signature module-defined polyfill function. This feature could provide a lighter-weight alternative to load-time polyfilling (approach 2 in FeatureTest.md), especially if the specific layer were to be standardized and performed natively such that no user-space translation pass was otherwise necessary.
Array globals
If globals are allowed array types, significant portions of memory could be moved out of linear memory which could reduce fragmentation issues. Languages like Fortran which limit aliasing would be one use case. C/C++ compilers could also determine that some global variables never have their address taken.
Multiple Return
The stack based nature of WebAssembly lends itself to the possibility of supporting multiple return values from blocks / functions.
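Today a C producer has to return a small struct (or use an out-pointer) to get two values back from a function; multi-value support would let both results come back on the WebAssembly value stack directly. The struct-returning workaround looks like this (names illustrative):

```c
/* Stand-in for a two-value return: quotient and remainder together. */
struct divmod_result { int quot; int rem; };

static struct divmod_result divmod(int a, int b) {
    struct divmod_result r = { a / b, a % b };
    return r;
}
```

With multi-value blocks/functions, a code generator could return the pair without materializing the struct in linear memory or registers dictated by a struct-return ABI.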
Multiple Tables and Memories
The MVP limits modules to at most one memory and at most one table (the default ones) and there are only operators for accessing the default table and memory.
After the MVP, and after GC reference types have been added, the default limitation can be relaxed so that any number of tables and memories could be imported or internally defined, and memories/tables could be passed around as parameters, return values, and locals. New variants of `load`, `store`, and `call_indirect` would then be added which take an additional memory/table reference operand.
To access an imported or internally-defined non-default table or memory, a new `address_of` operator could be added which, given an index immediate, would return a first-class reference. Beyond tables and memories, this could also be used for function definitions to get a reference to a function (which, since opaque, could be implemented as a raw function pointer).
More Table Operators and Types
In the MVP, WebAssembly has limited functionality for operating on tables, and the host environment can do much more (e.g., see JavaScript’s `WebAssembly.Table` API). It would be useful to be able to do everything from within WebAssembly so that, e.g., it would be possible to write a WebAssembly dynamic loader in WebAssembly. As a prerequisite, WebAssembly would need first-class support for GC references on the stack and in locals. Given that, the following could be added:
- `get_table` / `set_table`: get or set the table element at a given dynamic index; the got/set value would have a GC reference type
- `grow_table`: grow the current table (up to the optional maximum), similar to `grow_memory`
- `current_table_length`: like `current_memory`
Additionally, in the MVP, the only allowed element type of tables is a generic “anyfunc” type which simply means the element can be called but there is no static signature validation check. This could be improved by allowing:
- functions with a particular signature, allowing wasm generators to use multiple homogeneously-typed function tables (instead of a single heterogeneous function table) which eliminates the implied dynamic signature check of a call to a heterogeneous table;
- any other specific GC reference type, effectively allowing WebAssembly code to implement a variety of rooting API schemes.
Memset and Memcpy Operators
Copying and clearing large memory regions is very common, and making these operations fast is architecture dependent. Although this can be done in the MVP via `i32.load` and `i32.store`, this requires more bytes of code and forces VMs to recognize the loops as well. The following operators can be added to improve performance:
- `move_memory`: copy data from a source memory region to a destination region; these regions may overlap: the copy is performed as if the source region were first copied to a temporary buffer, then the temporary buffer copied to the destination region
- `set_memory`: set all bytes in a memory region to a given byte
We expect that WebAssembly producers will use these operations when the region size is known to be large, and will use loads/stores otherwise.
TODO: determine how these operations interact w/ shared memory.
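`move_memory` and `set_memory` correspond closely to C’s `memmove` and `memset`; in particular, the as-if-via-a-temporary-buffer rule for overlapping regions is exactly what `memmove` guarantees (and plain `memcpy` does not):

```c
#include <string.h>

/* Overlapping copy then fill, mirroring move_memory / set_memory. */
static void overlap_demo(char buf[7]) {  /* expects "abcdef" */
    memmove(buf + 2, buf, 4);  /* "abcdef" -> "ababcd": overlap is safe */
    memset(buf, 'x', 2);       /* set_memory analogue:  -> "xxabcd" */
}
```

A producer compiling C would lower `memmove`/`memset` calls with large or unknown sizes straight to these operators, and keep open-coded loads/stores for small fixed sizes.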