Zig is a general-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.
Backed by the Zig Software Foundation, the project is financially sustainable. These core team members are paid for their time:
Please consider a recurring donation to the ZSF to help us pay more contributors!
This release features 8 months of work: changes from 269 different contributors, spread among 4457 commits. It is the début of Package Management.
- `parse`: replaced by `parseFromSlice` (or other `parseFrom*`)
- `Parser.parse`: replaced by `parseFromSlice` into `Value`
- `writeStream`: API simplification
- `StringifyOptions`: overhauled
- `TokenStream`: replaced by `Scanner`
- `StreamingParser`: replaced by `Reader`
- `union` types

A green check mark (✅) indicates the target meets all the requirements for the support tier. The other icons indicate what is preventing the target from reaching the support tier. In other words, the icons are to-do items. If you find any wrong data here, please submit a pull request!
 | freestanding | Linux 3.16+ | macOS 11+ | Windows 10+ | WASI |
---|---|---|---|---|---|
x86_64 | ✅ | ✅ | ✅ | ✅ | N/A |
x86 | ✅ | #1929 🐛 | 💀 | #537 🐛 | N/A |
aarch64 | ✅ | #2443 🐛 | ✅ | #16665 🐛 | N/A |
arm | ✅ | #3174 🐛 | 💀 | 🐛📦🧪 | N/A |
mips | ✅ | #3345 🐛📦 | N/A | N/A | N/A |
riscv64 | ✅ | #4456 🐛 | N/A | N/A | N/A |
sparc64 | ✅ | #4931 🐛📦🧪 | N/A | N/A | N/A |
powerpc64 | ✅ | 🐛 | N/A | N/A | N/A |
powerpc | ✅ | 🐛 | N/A | N/A | N/A |
wasm32 | ✅ | N/A | N/A | N/A | ✅ |
 | freestanding | Linux 3.16+ | macOS 11+ | Windows 10+ | FreeBSD 12.0+ | NetBSD 8.0+ | DragonFlyBSD 5.8+ | OpenBSD 7.3+ | UEFI |
---|---|---|---|---|---|---|---|---|---|
x86_64 | Tier 1 | Tier 1 | Tier 1 | Tier 1 | ✅ | ✅ | ✅ | ✅ | ✅ |
x86 | Tier 1 | ✅ | 💀 | ✅ | 🔍 | 🔍 | N/A | 🔍 | ✅ |
aarch64 | Tier 1 | ✅ | Tier 1 | ✅ | 🔍 | 🔍 | N/A | 🔍 | 🔍 |
arm | Tier 1 | ✅ | 💀 | 🔍 | 🔍 | 🔍 | N/A | 🔍 | 🔍 |
mips64 | ✅ | ✅ | N/A | N/A | 🔍 | 🔍 | N/A | 🔍 | N/A |
mips | Tier 1 | ✅ | N/A | N/A | 🔍 | 🔍 | N/A | 🔍 | N/A |
powerpc64 | Tier 1 | ✅ | 💀 | N/A | 🔍 | 🔍 | N/A | 🔍 | N/A |
powerpc | Tier 1 | ✅ | 💀 | N/A | 🔍 | 🔍 | N/A | 🔍 | N/A |
riscv64 | Tier 1 | ✅ | N/A | N/A | 🔍 | 🔍 | N/A | 🔍 | 🔍 |
sparc64 | Tier 1 | ✅ | N/A | N/A | 🔍 | 🔍 | N/A | 🔍 | N/A |
`zig targets` is guaranteed to include this target.

 | freestanding | Linux 3.16+ | Windows 10+ | FreeBSD 12.0+ | NetBSD 8.0+ | UEFI |
---|---|---|---|---|---|---|
x86_64 | Tier 1 | Tier 1 | Tier 1 | Tier 2 | Tier 2 | Tier 2 |
x86 | Tier 1 | Tier 2 | Tier 2 | ✅ | ✅ | Tier 2 |
aarch64 | Tier 1 | Tier 2 | Tier 2 | ✅ | ✅ | ✅ |
arm | Tier 1 | Tier 2 | ✅ | ✅ | ✅ | ✅ |
mips64 | Tier 2 | Tier 2 | N/A | ✅ | ✅ | N/A |
mips | Tier 1 | Tier 2 | N/A | ✅ | ✅ | N/A |
riscv64 | Tier 1 | Tier 2 | N/A | ✅ | ✅ | ✅ |
powerpc32 | Tier 2 | Tier 2 | N/A | ✅ | ✅ | N/A |
powerpc64 | Tier 2 | Tier 2 | N/A | ✅ | ✅ | N/A |
bpf | ✅ | ✅ | N/A | ✅ | ✅ | N/A |
hexagon | ✅ | ✅ | N/A | ✅ | ✅ | N/A |
amdgcn | ✅ | ✅ | N/A | ✅ | ✅ | N/A |
sparc | ✅ | ✅ | N/A | ✅ | ✅ | N/A |
s390x | ✅ | ✅ | N/A | ✅ | ✅ | N/A |
lanai | ✅ | ✅ | N/A | ✅ | ✅ | N/A |
csky | ✅ | ✅ | N/A | ✅ | ✅ | N/A |
 | freestanding | emscripten |
---|---|---|
wasm32 | Tier 1 | ✅ |
`zig targets` will display the target if it is available. Such a target may only support `-femit-asm` and cannot emit object files, in which case `-fno-emit-bin` is enabled by default and cannot be overridden.

Tier 4 targets:
The baseline value used for "i386" was a pentium4 CPU model, which is actually i686. It is also possible to target a more bare bones CPU than pentium4. Therefore it is more correct to use "x86" rather than "i386" for this CPU architecture. This architecture has been renamed in the CLI and Standard Library APIs (#4663).
Removed the always-single-threaded limitation for libc++ (#6573).
Fixed Bootstrapping on this host.
Development builds of Zig are now available on the download page with every successful CI run.
Luuk de Gram writes:
Starting from this release, Zig no longer unconditionally passes `--allow-undefined` to the linker. By removing this flag, the user will now be faced with an error during the linking stage rather than a panic at runtime for undefined functions. If your project requires such behavior, the flag `import-symbols` can be used, which will allow undefined symbols during linking.
For this change we also had to update the strategy of exporting symbols to the host: we no longer unconditionally export all symbols. Previously this would result in unwanted symbols existing in the final binary. By default we now export a symbol only to the linker, meaning it will only be visible to other object files so it can be resolved correctly. If you wish to export a symbol to the host environment, the flag `--export=[name]` can be used. Alternatively, the flag `-rdynamic` can be used to export all visible symbols to the host environment. By setting the `visibility` field to `.hidden` on `std.builtin.ExportOptions`, a symbol will remain visible only to the linker and not be exported to the host. With this breaking change, the linker behaves the same whether a user is using zig cc or Clang directly.
The Standard Library gained `std.heap.wasm_allocator`, a simple, fast, and small WebAssembly-only allocator (#13513). It is able to achieve all three of these things simultaneously thanks to the new Memory Allocation API of Zig 0.11.0, which allows shrink to fail.

Compiled with `-target wasm32-freestanding -OReleaseSmall`, this example produces a 1 KB wasm object file. It is used as the memory allocator when Bootstrapping Zig.
A couple of enhancements to zig cc and compiler-rt.
AVR remains an experimental target, however, in this release cycle, the Compiler implements AVR address spaces, and places functions on AVR in the flash address space.
Robin "Snektron" Voetter writes:
Three new built-in functions have been added to aid with writing GPGPU kernels in Zig: `@workGroupId`, `@workItemId`, and `@workGroupSize`. These are used, respectively, to query the index of the current thread's work group in a kernel invocation, the thread's index within the current work group, and the size of a work group in threads. For now, these are only wired up to work when compiling Zig to AMD GCN machine code via LLVM, which can be used with ROCm. In the future they will be added to the LLVM-based NVPTX and self-hosted SPIR-V backends as well.
For example, the following Zig GPU kernel performs a simple reduction on its inputs:
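The kernel example itself is not preserved in this extract. As a stand-in, here is a minimal hypothetical sketch of how the new builtins compose into a global invocation index (the function, its signature, and the `callconv(.Kernel)` usage are illustrative assumptions, not the reduction example referenced above):

```zig
// Hypothetical element-wise kernel: each thread doubles one element.
// Each builtin takes a dimension index (0 here).
export fn double(input: [*]const f32, output: [*]f32, len: u32) callconv(.Kernel) void {
    const global_id = @workGroupId(0) * @workGroupSize(0) + @workItemId(0);
    if (global_id < len) output[global_id] = input[global_id] * 2.0;
}
```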
This kernel can be compiled to a HIP module for use with ROCm using Zig and clang-offload-bundler:
The resulting module can be loaded directly using `hipModuleLoadData` and executed using `hipModuleLaunchKernel`.
The officially supported minimum version of Windows is now 10, because Microsoft dropped LTS support for 8.1 in January. Patches to Zig supporting older versions of Windows are still accepted into the codebase, but they are not regularly tested, not part of the Bug Stability Program, and not covered by the Tier System.
- `spawnWindows`: Improve worst-case performance considerably + tests (#13993)
- `ChildProcess.spawnWindows`: `PATH` search fixes + optimizations (#13983)

Zig now recognizes the `.res` extension and links it as if it were an object file (#6488). `.res` files are compiled Windows resource files that get linked into executables/libraries. The linker knows what to do with them, but previously you had to trick Zig into thinking it was an object file (by renaming it to have the `.obj` extension, for example).

Now, the following works:

zig build-exe main.zig resource.res

or, in build.zig:

exe.addObjectFile("resource.res");
Windows: Support UNC, rooted, drive relative, and namespaced/device paths
In this release, 64-bit ARM (aarch64) Windows becomes a Tier 2 Support target. Zip files are available on the download page and this target is tested with every commit to source control. However, there are known bugs preventing this target from reaching Tier 1 Support.
- `EL1` system ID registers: the values are mapped as `CP 40xx` registry keys, which we now pull and parse for CPU feature information.
- Added `system/arm.zig` containing the CPU model table. Now we can reuse the table between CPU model parsers on Linux and Windows.

Catalina (version 10.15) is unsupported by Apple as of November 30, 2022. Likewise, Zig 0.11.0 drops support for this version.
- `ptrace` syscall with errno handling
- Fixed `readMachODebugInfo` panicking during panic. This fixes a class of bugs on macOS where a segfault happening in a loaded dylib with no debug info would cause a panic in the panic handler, instead of simply noting that the dylib has no valid debug info via `error.MissingDebugInfo`. An example could be code linking some system dylib and causing some routine to segfault on, say, an invalid pointer value, which should normally cause Zig to print an incomplete stack trace anchored at the currently loaded image and backtrace all the way back to the Zig binary with valid debug info. Previously, in a situation like this we would trigger a panic within a panic.
- `libSystem.tbd` updated to macOS 13
- `std.SemanticVersion`
- `copy_file_range`
Luuk de Gram writes:
In this release cycle the Standard Library gained experimental support for WASI threads. This means it is now possible to create a multi-threaded application when targeting WASI without having to change your codebase. Keep in mind that the feature is still in proposal phase 1 of the WASI specification, so support within the standard library is still experimental and bugs are to be expected. As the feature is still experimental, we still default to single-threaded builds when targeting WebAssembly. To opt out of this default, one can pass `-fno-single-threaded` in combination with the `--shared-memory` flag. This also requires the CPU features `atomics` and `bulk-memory` to be enabled.
The same flags will also work for freestanding WebAssembly modules, allowing a user to build a multi-threaded WebAssembly module for other runtimes. Be aware that the threads in the standard library are only available for WASI. For freestanding, the user must implement their own such as Web Workers when building a WebAssembly module for the browser.
Zig now gives +x to the `.wasm` file if it is an executable and the OS is WASI. Some systems may be configured to execute such binaries directly. Even if that is not the case, it means we will get "exec format error" when trying to run it rather than "access denied", and can then react to that in the same way as trying to run an ELF file from a foreign CPU architecture.
This is part of the strategy for Foreign Target Execution and Testing.
Jacob G-W writes:
During this release cycle, the Plan 9 Linker backend has been updated so that it can link most code from the x86 Backend. The Standard Library has also been improved, broadening its support for Plan 9's features:
- A `page_allocator` implementation for Plan 9, employing the `SbrkAllocator` available in the standard library. This addition now permits Memory Allocation on Plan 9.
- `std.fs` works. This is a crucial improvement, as Plan 9 heavily utilizes filesystem interfaces for system interactions.
- `std.os.plan9.errstr` has been implemented, enabling users to read error messages from system calls that return `-1`. However, as error messages in Plan 9 are string-based, additional efforts will be needed to make these errors interface with Zig errors.
Minor changes and upkeep to the language reference. Nothing major with this release.
This feature is still experimental.
Loris Cro writes:
Thank you to all contributors who helped with Autodoc in this release cycle.
In particular welcome to two new Autodoc contributors:
And renewed thanks to long term Autodoc contributor Krzysztof Wolicki.
When searching, right below the search box, you will now see a new expandable help section that explains how to use the new search system more effectively (#15475). Among the points covered (some surrounding text was lost in this extract; gaps are marked with …):

- `ArrayListUnmanaged`: `array list unmanaged`, `stun ray managed`
- `HashMapUnmanaged` also produces `MapUnmanaged` and `Unmanaged` (same with snake_case and camelCase names).
- … (`AutoHashMap`).
- … (`std.json.parse`), in which case the order of the terms will determine whether the match goes above or below the "other results" line.
- … `std.fs`, while still showing (but below the line) matches from `std.Build`.
- … `std.fs`, while "windows fs" will prioritize "fs"-related results in `std.windows`.

Added missing support for the following language features:
- doc comments (`//!`)
- `usingnamespace`
- `_` function parameter names

Doctests are now supported!
Doctests are tests that are meant to be part of the documentation. You can create a doc test by giving it the name of a decl like so:
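The example itself is not included in this extract; below is a minimal hypothetical sketch (the `double` decl and its values are invented for illustration):

```zig
const std = @import("std");

/// Doubles a number.
pub fn double(x: i32) i32 {
    return x * 2;
}

// A doctest: the test is named after the `double` decl (not a string),
// so Autodoc renders it alongside that decl's documentation.
test double {
    try std.testing.expectEqual(@as(i32, 4), double(2));
}
```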
Check out `std.json` for some more examples.
Pressing `/` will focus the search bar on all browsers except Firefox. You can use the new Autodoc preferences (`p`) menu to enable it also on Firefox.

- Changed the `bits` field of `builtin.Type.Int` and `builtin.Type.Float` to a `u16` instead of `comptime_int`.
- Replaced `builtin.Version` with `SemanticVersion`.
.The Peer Type Resolution algorithm has been improved to resolve more types and in a more consistent manner. Below are a few examples which did not resolve correctly in 0.10 but do now.
Peer Types | Resolved Type |
---|---|
`[:s]const T`, `[]T` | `[]const T` |
`E!*T`, `?*T` | `E!?*T` |
`[*c]T`, `@TypeOf(null)` | `[*c]T` |
`?u32`, `u8` | `?u32` |
`[2]u32`, `struct { u32, u32 }` | `[2]u32` |
`*const @TypeOf(.{})`, `[]const u8` | `[]const u8` |
This release cycle introduces multi-object `for` loops into the Zig language. This is a new construct providing a way to cleanly iterate over multiple sequences of the same length.

Consider the case of mapping a function over an array. Previously, your code may have looked like this:
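The original code block is missing from this extract; a representative sketch of the old pattern (`input`, `output`, and `f` are illustrative names):

```zig
// Pre-0.11 syntax: the second capture is an implicit index.
for (input) |x, i| {
    output[i] = f(x);
}
```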
This code has an unfortunate property: in the loop, we had to make the arbitrary choice to iterate over `input`, using the captured index `i` to access the corresponding element in `output`. With multi-object `for` loops, this code becomes much cleaner:
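The code block is likewise missing here; a sketch of the multi-object form (same illustrative names):

```zig
// Iterate both slices in lockstep; the output element is captured by
// reference so it can be written through. Lengths must match.
for (input, output) |x, *y| {
    y.* = f(x);
}
```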
In this new code, we use the new for loop syntax to iterate over both slices simultaneously, capturing the output element by reference so we can write to it. The index capture is no longer necessary in this case. Note that this is not limited to two operands: arbitrarily many slices can be passed to the loop provided each has a corresponding capture. The language asserts that all passed slices have the same length: if they do not, this is safety-checked Undefined Behavior.
Previously, index captures were implicitly provided if you added a second identifier to the loop's captures. With the new multi-object loops, this has changed. As well as standard expressions, the operand passed to a for loop can also be a range. These take the form `a..` or `a..b`, with the latter form being exclusive on the upper bound. If an upper bound is provided, `b - a` must match the length of any given slices (or other bounded ranges). If no upper bound is provided, the loop is bounded based on other range or slice operands. All for loops must be bounded (i.e. you cannot iterate over only an unbounded range). The old behavior is equivalent to adding a trailing `0..` operand to the loop.
The lower and upper bounds of ranges are of type `usize`, as is the capture, since this feature is primarily intended for iterating over data in memory. Note that it is valid to loop over only a range:
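A minimal sketch of a range-only loop (values illustrative):

```zig
var sum: usize = 0;
// Iterates i = 0, 1, 2, 3, 4; bounds and capture are of type usize.
for (0..5) |i| {
    sum += i;
}
// sum is now 10
```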
To migrate old code, `zig fmt` automatically adds `0..` operands to loops with an index capture and no corresponding operand.
The behavior of pointer captures in for loops has changed slightly. Previously, the following code was valid, but now it emits a compile error:
This code previously worked because the language implicitly took a reference to `arr`. This no longer happens: if you use a pointer capture, the corresponding iterable must be a pointer or slice. In this case, the fix, as suggested by the error note, is simply to take a reference to the array.
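A sketch of the change and its fix (the array is illustrative):

```zig
var arr = [_]u32{ 1, 2, 3 };
// 0.10 accepted `for (arr) |*x|`; 0.11 rejects it because `arr` is an
// array value, not a pointer or slice. Take a reference instead:
for (&arr) |*x| {
    x.* += 1;
}
```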
0.11.0 changes the usage of the builtins `@memcpy` and `@memset` to make them more useful.

`@memcpy` now takes two parameters. The first is the destination, and the second is the source. The builtin copies values from the source address to the destination address. Both parameters may be a slice or many-pointer of any element type; the destination parameter must be mutable in either case. At least one of the parameters must be a slice; if both parameters are slices, then the two slices must be of equal length.
The source and destination memory must not overlap (overlap is considered safety-checked Undefined Behavior). This is one of the key motivators for using this builtin over the standard library.
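A short sketch of the new signatures in use (buffer contents illustrative):

```zig
var buf: [4]u8 = undefined;
// @memset: destination first, then the value stored into every element.
@memset(&buf, 0);
const src = [_]u8{ 1, 2, 3, 4 };
// @memcpy: destination first, source second; equal lengths required,
// and the regions must not overlap.
@memcpy(&buf, &src);
```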
Since this builtin now encompasses the most common use case of `std.mem.copy`, that function has been renamed to `std.mem.copyForwards`. Like `copyBackwards`, the only use case for that function is when the source and destination slices overlap, meaning elements must be copied in a particular order. When migrating code, it is safe to replace all uses of `copy` with `copyForwards`, but potentially more optimal and clearer to instead use `@memcpy`, provided the slices are guaranteed not to overlap.
`@memset` has also changed signature. It takes two parameters: the first is a mutable slice of any element type, and the second is a value which is coerced to that element type. All values referenced by the destination slice are set to the provided value.

This builtin now precisely encompasses the former use cases of `std.mem.set`. Therefore, this standard library function has been removed in favor of the builtin.
The builtins `@min` and `@max` have undergone two key changes. The first is that they now take arbitrarily many arguments, finding the minimum/maximum value across all arguments: for instance, `@min(2, 1, 3) == 1`. The second change relates to the type returned by these operations. Previously, Peer Type Resolution was used to unify the operand types. However, this sometimes led to redundant uses of `@intCast`: for instance, `@min(some_u16, 255)` can always fit in a `u8`. To avoid this, when these operations are performed on integers (or vectors thereof), the compiler will now notice comptime-known bounds of the result (based on either comptime-known operands or on differing operand types) and refine the result type as tightly as possible.

This is a breaking change, as any usage of these values without an explicit type annotation may now result in overflow: for instance, `@min(my_u32, 255) + 1` used to be always valid but may now overflow. This is solved with explicit type annotations, either with `@as` or using an intermediate `const`.
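A sketch of the refined result type and the overflow pitfall (names invented):

```zig
fn demo(my_u32: u32) u32 {
    // The result type here is refined to u8: the value can never
    // exceed 255, regardless of my_u32.
    const small = @min(my_u32, 255);
    _ = small;
    // `@min(my_u32, 255) + 1` could overflow a u8; widen explicitly:
    return @as(u32, @min(my_u32, 255)) + 1;
}
```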
Since these changes have been applied to the builtin functions, several standard library functions are now redundant. Therefore, the following functions have been deprecated:

- `std.math.min`
- `std.math.max`
- `std.math.min3`
- `std.math.max3`
New builtin: `@trap() noreturn`

This function inserts a platform-specific trap/jam instruction which can be used to exit the program abnormally. This may be implemented by explicitly emitting an invalid instruction, which may cause an illegal instruction exception of some sort. Unlike `@breakpoint`, execution does not continue afterwards:
A new builtin, `@inComptime()`, has been introduced. This builtin returns a `bool` indicating whether or not it was evaluated in a `comptime` scope.
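A minimal sketch (the function is invented):

```zig
inline fn contextName() []const u8 {
    // true when the (inlined) call is evaluated in a comptime scope
    return if (@inComptime()) "comptime" else "runtime";
}
```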
Use `@constCast` instead to fix the error.
An accepted proposal has been implemented to rename all casting builtins of the form `@xToY` to `@yFromX`. The goal of this change is to make code more readable by ensuring information flows in a consistent direction (right-to-left) through function-call-like expressions.
The full list of affected builtins is as follows:
old name | new name |
---|---|
`@boolToInt` | `@intFromBool` |
`@enumToInt` | `@intFromEnum` |
`@errorToInt` | `@intFromError` |
`@floatToInt` | `@intFromFloat` |
`@intToEnum` | `@enumFromInt` |
`@intToError` | `@errorFromInt` |
`@intToFloat` | `@floatFromInt` |
`@intToPtr` | `@ptrFromInt` |
`@ptrToInt` | `@intFromPtr` |
`zig fmt` will automatically update usages of the old builtin names in your code.
Zig 0.11.0 implements an accepted proposal which changes how "casting" builtins (e.g. `@intCast`, `@enumFromInt`) behave. The goal of this change is to improve readability and safety.

In previous versions of Zig, casting builtins took as a parameter the destination type of the cast, for instance `@intCast(u8, x)`. This was easy to understand, but can lead to code duplication where a type must be repeated at the usage site despite already being specified as, for instance, a parameter type or field type.

As a motivating example, consider a function parameter of type `u16` to which you are passing a `u64`. You need to use `@intCast` to convert your value to the correct type. Now suppose that down the line, you find out the parameter needs to be a `u32` so you can pass in larger values. There is now a footgun here: if you don't change every `@intCast` to cast to the correct type, you have a silent bug in your program which may not cause a problem for a while, making it hard to spot.

This is the basic pattern motivating this change. The idea is that instead of writing `f(@intCast(u16, x))`, you instead write `f(@intCast(x))`, and the destination type of the cast is inferred from context. This is not just about function parameters: it is also applicable to struct initializations, return values, and more.
This language change removes the destination type parameter from all cast builtins. Instead, these builtins now use Result Location Semantics to infer the result type of the cast from the expression's "result type". In essence, this means type inference is used. Most expressions which have a known concrete type for their operand will provide a result type. For instance:
- `const x: T = e` gives `e` a result type of `T`
- `@as(T, e)` gives `e` a result type of `T`
- `return e` gives `e` a result type of the function's return type
- `S{ .f = e }` gives `e` a result type of the type of the field `S.f`
- `f(e)` gives `e` a result type of the first parameter type of `f`
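A sketch of a result type driving an inferred cast (names invented):

```zig
fn takesU16(x: u16) u16 {
    return x;
}

fn demo(big: u64) u16 {
    // The destination type of @intCast is inferred from the parameter
    // type of takesU16; no type argument is written.
    return takesU16(@intCast(big));
}
```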
The full list of affected cast builtins is as follows:

- `@addrSpaceCast`, `@alignCast`, `@ptrCast`
- `@errSetCast`, `@floatCast`, `@intCast`
- `@intFromFloat`, `@enumFromInt`, `@floatFromInt`, `@ptrFromInt`
- `@truncate`, `@bitCast`
Using these builtins in an expression with no result type will give a compile error:
This error indicates one possible method of providing an explicit result type: using `@as`. This will always work; however, it is usually not necessary. Instead, result types are normally inferred from type annotations, struct/array initialization expressions, parameter types, and so on.
Where possible, `zig fmt` has been made to automatically migrate uses of the old builtins, using a naive translation based on `@as`. Most builtins can be automatically updated correctly, but there are a few exceptions:

- `@addrSpaceCast` and `@alignCast` cannot be translated, as the old usage does not provide the full result type. `zig fmt` will not modify them.
- `@ptrCast` may sometimes decrease alignment where it previously did not, potentially triggering compile errors. This can be fixed by modifying the type to have the correct alignment.
- `@truncate` will be translated incorrectly for vectors, causing a compile error. This can be fixed by changing the scalar type `T` to the vector type `@Vector(n, T)`.
- `@splat` cannot be translated, as the old usage does not provide the full result type. `zig fmt` will not modify it.
The builtins `@addrSpaceCast` and `@alignCast` would become quite cumbersome to use under this system as described, since you would now have to specify the full intermediate pointer types. Instead, pointer casts (those two builtins and `@ptrCast`) are special. They combine into a single logical operation, with each builtin effectively "allowing" a particular component of the pointer to be cast rather than "performing" it. (Indeed, this may be a helpful mental model for the new cast builtins more generally.) This means any sequence of nested pointer cast builtins requires only one result type, rather than one at every intermediate computation.
The `@splat` builtin has undergone a similar change. It no longer has a parameter to indicate the length of the resulting vector, instead using the expression's result type to infer this and the type of its operand.
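A sketch of the new `@splat` (the vector length is an example value):

```zig
fn fill(x: f32) @Vector(4, f32) {
    // Both the vector length (4) and the operand type (f32) come from
    // the result type of the annotated declaration.
    const v: @Vector(4, f32) = @splat(x);
    return v;
}
```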
Tuple types can now be declared using struct declaration syntax without field names (#4335):
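The example is not preserved in this extract; a minimal sketch of the syntax (values illustrative):

```zig
// A tuple type: struct declaration syntax with types but no field names.
const Pair = struct { u32, bool };
const p: Pair = .{ 42, true };
// Fields are accessed by comptime index, e.g. p[0] and p[1].
```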
Packed and extern tuples are forbidden (#16551).
This can be a nice tool when writing Crypto code, and indeed is used extensively by the Standard Library to avoid heap Memory Allocation in the new TLS Client.
Zig allows you to directly index pointers to arrays like plain arrays, which transparently dereferences the pointer as required. For consistency, this is now additionally allowed for pointers to tuples and vectors (the other non-pointer indexable types).
Now that we have started to get into writing our own Code Generation and not relying exclusively on LLVM, the flaw with the previous API becomes clear: writing the result through a pointer parameter makes it too hard to use a special value returned from the builtin and detect the pattern that allows lowering to the efficient code.
Furthermore, the result pointer is incompatible with SIMD vectors (related: Cast Inference).
Arithmetic overflow functions now return a tuple, like this:

`@addWithOverflow(a: T, b: T) struct { T, u1 }`

`@addWithOverflow(a: @Vector(N, T), b: @Vector(N, T)) struct { @Vector(N, T), @Vector(N, u1) }`
If #498 were implemented, `parseInt` would look like this:
fn parseInt(comptime T: type, buf: []const u8, radix: u8) !T {
var x: T = 0;
for (buf) |c| {
const digit = switch (c) {
'0'...'9' => c - '0',
'A'...'Z' => c - 'A' + 10,
'a'...'z' => c - 'a' + 10,
else => return error.InvalidCharacter,
};
x, const mul_overflow = @mulWithOverflow(x, radix);
if (mul_overflow != 0) return error.Overflow;
x, const add_overflow = @addWithOverflow(x, digit);
if (add_overflow != 0) return error.Overflow;
}
return x;
}
However #498 is neither implemented nor accepted yet, so actual usage must do this:
const std = @import("std");
fn parseInt(comptime T: type, buf: []const u8, radix: u8) !T {
var x: T = 0;
for (buf) |c| {
const digit = switch (c) {
'0'...'9' => c - '0',
'A'...'Z' => c - 'A' + 10,
'a'...'z' => c - 'a' + 10,
else => return error.InvalidCharacter,
};
const mul_result = @mulWithOverflow(x, radix);
x = mul_result[0];
const mul_overflow = mul_result[1];
if (mul_overflow != 0) return error.Overflow;
const add_result = @addWithOverflow(x, digit);
x = add_result[0];
const add_overflow = add_result[1];
if (add_overflow != 0) return error.Overflow;
}
return x;
}
More details: #10248
This is technically not a change to the language; however, it bears mentioning in the language changes section because it makes a particular idiom even more idiomatic, by recognizing the pattern directly in the Compiler.
This pattern is extremely common:
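The example is missing from this extract; a representative sketch of the idiom (names invented):

```zig
fn window(data: []const u8, off: usize) *const [4]u8 {
    // Slice-by-length: the length 4 is comptime-known, so the result
    // is a pointer to an array rather than a slice.
    return data[off..][0..4];
}
```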
The pattern is useful because it is effectively a slice-by-length rather than a slice-by-end-index. With this pattern, when `len` is compile-time known, the expression will be a pointer to an array rather than a slice type, which is generally a preferable type.
The actual language change here is that this is now supported for many-pointers. Where previously you had to write `(ptr + off)[0..len]`, you can now instead write `ptr[off..][0..len]`. Note that in general, unbounded slicing of many-pointers is still not permitted, requiring pointer arithmetic: only this "slicing by length" pattern is allowed.
Zig 0.11.0 now detects this pattern and generates more efficient code.
You can think of Zig as having both slice-by-end and slice-by-len syntax, it's just that one of them is expressed in terms of the other.
More details: #15482
In this example, there is no compile error because the comptime-ness of the arguments is propagated to the return value of the inlined function. However, as demonstrated by the `call_count` global variable, runtime side-effects of the inlined function still occur.
The `inline` keyword in Zig is an extremely powerful tool that should not be used lightly. It's best to let the compiler decide when to inline a function, except for these scenarios:
Generally we don't want Zig programmers to use C-style variadic functions. But sometimes you have to interface with C code.
Here are two use cases for it:
Only some targets support this new feature:
This feature is experimental: a target is not disqualified from Tier 1 Support if it does not support C-style varargs.
More information: #515
This is strictly for C ABI Compatibility and should only be used when it is required by the ABI.
See #875 for more details.
Previously, `comptime` blocks in runtime code worked in a highly unintuitive way: they did not actually enforce compile-time evaluation of their bodies. This has been resolved in 0.11.0. The entire body of a `comptime` block will now be evaluated at compile time, and a compile error is triggered if this is not possible.

This change has one particularly notable consequence. Previously, it was allowed to return from a runtime function within a `comptime` block. However, this is illogical: the return cannot actually happen at comptime, since the function is being called at runtime. Therefore, this is now illegal.
The workaround for this issue is to compute the return value at comptime, but return it at runtime:
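The workaround example is missing here; a minimal sketch (the function and table are invented):

```zig
fn lookup(comptime index: usize) u8 {
    const table = [_]u8{ 10, 20, 30 };
    // Compute the value at comptime...
    const value = comptime table[index];
    // ...but return it at runtime.
    return value;
}
```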
This change similarly disallows `comptime try` from within a runtime function, since on error this attempts to return a value at compile time. To retain the old behavior, this sequence should be replaced with `try comptime`.
The `@intFromBool` builtin (previously called `@boolToInt`) previously returned either a `u1` or a `comptime_int`, depending on whether or not it was evaluated at `comptime`. It has since been changed to always return a `u1` to improve consistency between code running at runtime and comptime.
It already worked on structs; there was no reason for it to not work on unions (#6611).
Calling `@fieldParentPtr` on a pointer that is not actually a field of the parent type is currently unchecked illegal behavior; however, there is an accepted proposal to add a safety check: add safety checks for pointer casting.
It was a bug that private declarations were included in the result of `@typeInfo` (#10731). The `is_pub` field has been removed from `std.builtin.Type.Declaration`.
Zero-sized fields are now allowed in `extern struct` types, because they do not compromise the well-defined memory layout (#16404).

This change allows the following types to appear in extern structs:

- `void`

Note that packed structs are already allowed in extern structs, provided that their backing integer is allowed.
Did you know Zig had bound functions?
No? I rest my case. Good riddance!
The following code was valid in 0.10, but is not any more:
Method calls are now restricted to the exact syntactic form `a.b(args)`. Any deviation from this syntax, for instance extra parentheses as in `(a.b)(args)`, will be treated as a field access.
The `stack` option has been removed from `@call` (#13907).

There is no upgrade path for this one, I'm afraid. This feature has proven difficult to implement in the LLVM Backend. More investigation will be needed to see if something that solves the use case of switching call stacks can be brought back to the language before Zig reaches 1.0.
Previously, comparing an integer to a comptime-known value required that value to fit in the
integer type. For instance, comparing a u8
to 500
was a compile error. However, such comparisons can be useful when writing generic or future-proof
code.
As such, comparisons of this form are now allowed. However, since these comparisons are
tautological, they do not cause any runtime checks: instead, the result is comptime-known based
on the type. For instance, my_u8 == 500
is comptime-known
false
, even if my_u8
is not itself comptime-known.
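A sketch of such a tautological comparison (names invented):

```zig
fn isMagic(my_u8: u8) bool {
    // 500 cannot fit in a u8, so this is comptime-known false and
    // emits no runtime check.
    return my_u8 == 500;
}
```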
A Zig module (previously known as "package") is a collection of source files, with a single root source file, which can be imported in your code by name. For instance, `std` is a module. An interesting case comes up when two modules attempt to `@import` the same source file.
Previously, when this happened, the source file became "owned" by whichever import the compiler happened to reach first. This was a problem, because it could lead to inconsistent behavior in the compiler based on a race condition. This could be fixed by having the compiler analyze the files multiple times, once for each module they're imported from; however, this could lead to slowdowns in compile times, and generally this kind of structure is indicative of a mistake anyway.
Therefore, another solution was chosen: having a single source file within multiple modules is now illegal. When a source file is encountered in two different modules, an error like the following will be emitted:
The correct way to resolve this error is usually to factor the shared file out into its own
module, which other modules can then import. This can be done in the Build System using
std.Build.addModule
.
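A sketch of that fix in build.zig, using the 0.11 std.Build API; the module name "common" and the file paths are hypothetical:

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    // Factor the shared file out into its own publicly exposed module...
    const common = b.addModule("common", .{
        .source_file = .{ .path = "src/common.zig" },
    });
    // ...then let other modules depend on it by name instead of
    // @import-ing the file directly.
    _ = b.createModule(.{
        .source_file = .{ .path = "src/foo.zig" },
        .dependencies = &.{
            .{ .name = "common", .module = common },
        },
    });
}
```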
In general, it is intended for single-item array pointers to act equivalently to a slice. That
is, *const [5]u8
is essentially equivalent to
[]const u8
but with a comptime-known length.
Previously, the ptr
field on slices was an exception to this rule, as it
did not exist on single-item array pointers. This field
has been added, and is equivalent to
simple coercion from *[N]T
to [*]T
.
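For example:

```zig
const std = @import("std");

test "ptr field on a single-item array pointer" {
    var array = [5]u8{ 1, 2, 3, 4, 5 };
    const array_ptr: *[5]u8 = &array;
    // New in 0.11: .ptr on *[N]T, equivalent to coercing *[5]u8 to [*]u8.
    const many: [*]u8 = array_ptr.ptr;
    try std.testing.expect(many[4] == 5);
}
```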
Method call syntax object.method(args)
only works when the first
parameter of method
has a specific type: previously, this was either the
type containing the method, or a pointer to it. It is now additionally allowed for this type to
be an optional pointer. The value the method call is performed on must still be a non-optional
pointer, but it is coerced to an optional pointer for the method call.
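A sketch of the new allowance (the Node type is hypothetical):

```zig
const std = @import("std");

const Node = struct {
    value: u32,
    // The first parameter may now be an optional pointer to the type.
    fn get(self: ?*const Node) u32 {
        return if (self) |n| n.value else 0;
    }
};

test "method call via optional pointer parameter" {
    var node = Node{ .value = 42 };
    // node.get() passes &node, coerced from *Node to ?*const Node.
    try std.testing.expect(node.get() == 42);
}
```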
There has been an open issue for several years about the fact that Zig will emit all referenced functions to a binary, even if the function is only used at compile-time. This can cause binary bloat, as well as potentially triggering false positive compile errors if a function is intended to only be used at compile-time.
This issue has been resolved in this release cycle. Zig will now only emit a runtime version of a function to the binary if one of the following conditions holds:
As well as avoiding potential false positive compile errors, this change leads to a slight
decrease in binary sizes, and may slightly speed up compilation in some cases. Note that as a
consequence of this change, it is no longer sufficient to write
comptime { _ = f; }
to force a function to be analyzed and emitted to the
binary. Instead, you must write comptime { _ = &f; }
.
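The difference, sketched:

```zig
const std = @import("std");

fn helper() u32 {
    return 42;
}

// 0.10.x: `comptime { _ = helper; }` was enough to force emission.
// 0.11.0: taking the function's address marks a runtime version of it
// as required, so it is analyzed and emitted to the binary.
comptime {
    _ = &helper;
}

test "helper is still callable" {
    try std.testing.expect(helper() == 42);
}
```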
Prior to 0.11.0, when a switch
prong captured a union payload, all
payloads were required to have the exact same type. This has been changed so that
Peer Type Resolution is used to
combine the payload types, allowing distinct but compatible types to be captured together.
Pointer captures also make use of peer type resolution, but are more limited: the payload types must all have the same in-memory representation so that the payload pointer can be safely cast.
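For example, capturing two distinct but compatible payload types in one prong:

```zig
const std = @import("std");

const U = union(enum) {
    a: u32,
    b: u64,
};

test "peer type resolution of switch captures" {
    const u = U{ .a = 10 };
    // u32 and u64 are distinct payload types; the capture is resolved
    // to their peer type (u64) instead of being a compile error.
    const n = switch (u) {
        .a, .b => |x| x,
    };
    try std.testing.expect(@TypeOf(n) == u64);
    try std.testing.expect(n == 10);
}
```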
0.10 had some arbitrary restrictions on the types of function parameters and their return types:
they were not permitted to be @TypeOf(null)
or
@TypeOf(undefined)
. While these types are rarely useful in this
context, they are still completely normal comptime-only types, so this restriction on their
usage was needless. As such, they are now allowed as parameter and return types.
Sometimes the language design drives the Compiler development, but sometimes it's the other way around, as we discover through trial and error what fundamental simplicity looks like.
In this case, generic functions and inferred error sets have been reworked for a few reasons:
Things are mostly the same, but this rework can cause two kinds of breakage. Firstly, type declarations are evaluated for every generic function call:
With Zig 0.10.x, this test passed. With 0.11.0, it fails. Which behavior Zig will have at 1.0 is yet to be determined. In the meantime, it is best not to rely on type equality in this case.
Suggested workaround is to make a function that returns the type:
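A sketch of that workaround (the names Result and process are hypothetical): hoisting the declaration into a dedicated type function memoizes it on its comptime argument, so type equality holds across calls:

```zig
// Instead of declaring `struct { value: T }` inside the generic
// function body, return it from a function dedicated to that type:
fn Result(comptime T: type) type {
    return struct { value: T };
}

fn process(comptime T: type, value: T) Result(T) {
    return .{ .value = value };
}

test "type equality holds across calls" {
    try @import("std").testing.expect(Result(u8) == Result(u8));
}
```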
The second fallout from this change is mutually recursive functions with inferred error sets:
Suggested workaround is to introduce an explicit error set:
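A sketch of the workaround (parser names and the error set are hypothetical):

```zig
const std = @import("std");

// Mutually recursive functions cannot both infer their error sets,
// so declare an explicit one and use it for both:
const ParseError = error{UnexpectedToken};

fn parseExpr(depth: u32) ParseError!u32 {
    if (depth == 0) return error.UnexpectedToken;
    return parseTerm(depth - 1);
}

fn parseTerm(depth: u32) ParseError!u32 {
    if (depth == 0) return 1;
    return parseExpr(depth - 1);
}

test "mutual recursion with explicit error set" {
    try std.testing.expect(try parseTerm(2) == 1);
}
```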
More information: #16318
Some things that used to be allowed in callconv(.Naked)
functions are now compile errors:
Runtime calls are disallowed because it is not possible to know the current stack alignment in order to follow the proper ABI to automatically compile a call. Explicit returns are disallowed because on some targets, it is not mandated for the return address to be stored in a consistent place.
The most common kind of upgrade that needs to be performed is:
As the note indicates, an explicit unreachable is no longer needed at the end of a naked function. Since explicit returns are no longer allowed, the end of the function is simply assumed to be unreachable. Therefore, all that needs to be done is to delete the unreachable statement, which works even when the return type of the function is not noreturn.
In general, naked functions should only contain comptime logic and asm volatile
statements, which allows any required
target-specific runtime calls and returns to be constructed.
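A sketch of what remains valid, assuming an x86_64-linux target (the exit syscall):

```zig
// A naked function: comptime logic and asm volatile only.
// No runtime calls, no explicit `return`, no trailing `unreachable`.
fn _start() callconv(.Naked) noreturn {
    asm volatile (
        \\movq $60, %%rax
        \\xorq %%rdi, %%rdi
        \\syscall
    );
}
```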
The @embedFile
builtin, as well as literal file paths, now supports
module-mapped names, like @import
.
The Zig standard library is still unstable and mainly serves as a testbed for the language. After there are no more planned Language Changes, it will be time to start working on stabilizing the standard library. Until then, experimentation and breakage without warning is allowed.
comptime
collect all options under one namespace
The Allocator interface now allows implementations to refuse to shrink (#13666). This makes ArrayList more efficient: it attempts a resize in place and only falls back to allocating a new buffer and doing its own copy, avoiding copies of allocated but unused bytes. With a realloc() call, the allocator implementation would pointlessly copy the extra capacity:
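A sketch of the new contract: resize() either resizes in place or refuses, but never copies:

```zig
const std = @import("std");

test "resize never copies" {
    const allocator = std.testing.allocator;
    var buf = try allocator.alloc(u8, 100);
    // resize() attempts the resize in place and returns false if the
    // implementation refuses; unlike realloc(), no bytes are copied.
    if (allocator.resize(buf, 50)) {
        buf.len = 50;
    }
    allocator.free(buf);
}
```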
It also enabled implementing WasmAllocator which was not possible with the previous interface requirements.
Isaac Freund writes:
These functions were footgunny when working with pointers to arrays and slices. They just returned the stated length of the array/slice without iterating and looking for the first sentinel, even if the array/slice is a sentinel-terminated type.
From looking at the quite small list of places in the Standard Library and Compiler that this change breaks existing code, the new code looks to be more readable in all cases.
The usage of std.mem.span/len was totally unneeded in most of the cases affected by this breaking change.
We could remove these functions entirely in favor of other existing functions in std.mem such as std.mem.sliceTo(), but that would be a somewhat nasty breaking change as std.mem.span() is very widely used for converting sentinel terminated pointers to slices. It is however not at all widely used for anything else.
Therefore I think it is better to break these few non-standard and potentially incorrect usages of these functions now and at some later time, if deemed worthwhile, finally remove these functions.
If we wait for at least a full release cycle so that everyone adapts to this change first, updating for the removal could be a simple find and replace without needing to worry about the semantics.
@log
fn eql(self: Self, other: Self) bool
fn subsetOf(self: Self, other: Self) bool
fn supersetOf(self: Self, other: Self) bool
fn complement(self: Self) Self
fn unionWith(self: Self, other: Self) Self
fn intersectWith(self: Self, other: Self) Self
fn xorWith(self: Self, other: Self) Self
fn differenceWith(self: Self, other: Self) Self
Sorting is now split into two categories: stable and unstable. Generally, it's best to use unstable sort if you can, but stable sort is the more conservative choice. Zig's stable sort remains a blocksort implementation, while unstable sort is a new pdqsort implementation. Heapsort is also available in the standard library (#15412).
Now, debug builds have assertions to ensure that the comparator function
(lessThan
) does not return conflicting results (#16183).
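The three entry points, sketched:

```zig
const std = @import("std");

test "stable and unstable sorting" {
    var items = [_]u32{ 3, 1, 2 };
    // Unstable (pdqsort): usually the best choice.
    std.sort.pdq(u32, &items, {}, std.sort.asc(u32));
    try std.testing.expect(items[0] == 1 and items[2] == 3);

    // Stable (blocksort) and heapsort variants are also available:
    std.sort.block(u32, &items, {}, std.sort.asc(u32));
    std.sort.heap(u32, &items, {}, std.sort.asc(u32));
}
```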
std.sort.binarySearch: relax requirements to support both homogeneous and heterogeneous keys (#12727).
Frank Denis writes:
- AEGIS (std.crypto.auth.aegis): AEGIS can be used as a high-performance MAC on systems with hardware AES support. Note that this is not a hash function; a secret key is absolutely required in order to authenticate untrusted messages.
- A rejectLowOrder() function was added to quickly reject low-order points.
- With extractInit(), a PRK can now be initialized with only a salt, the keying material being added later, possibly as multiple chunks.
- A finalResult() function that returns the digest as an array, as well as a peek() function that returns it without changing the state.
- crypto.auth.cmac.
- std.crypto.ecc: the isOdd() function was added to return the parity of a field element.
- bcrypt: bcrypt has a slightly annoying limitation: passwords are limited to 72 bytes, and additional bytes are silently ignored. A new option, silently_truncate_password, can be set to true to transparently pre-hash the passwords and overcome this limitation.
- crypto.Hmac*.key_size was previously set to 256 bits for general guidance. This has been changed to match the actual security level of each function.
- secp256k1: the mulPublic() and verify() functions can now return a NonCanonicalError in addition to existing errors.
- Ed25519: the top-level Ed25519.sign(), Ed25519.verify(), key_blinding.sign() and key_blinding.unblindPublicKey() functions, which were already deprecated in version 0.10.0, have been removed. For consistency with other signature schemes, these functions have been moved to the KeyPair, PublicKey, BlindKeyPair and BlindPublicKey structures.

The Keccak permutation was only used internally for sha3. It was completely revamped and now has its own public interface in crypto.core.keccak.
keccak.KeccakF
is the permutation itself, which now supports sizes between 200 and 1600 bits, as well as a configurable number of rounds. And keccak.State
offers an API for standard sponge-based constructions.
Taking advantage of this, the SHAKE extendable output function (XOF) has been added, and can be found in std.crypto.hash.sha3.Shake128 and std.crypto.hash.sha3.Shake256. SHAKE is based on SHA-3, NIST-approved, and its output can be of any length, which has many applications and is something we were missing in the standard library.
The more recent TurboSHAKE variant is also available, as crypto.hash.sha3.TurboShake128
and crypto.hash.sha3.TurboShake256
. TurboSHAKE benefits from the extensive analysis of SHA-3, its output can also be of any length, and it has good performance across all platforms. In fact, on CPUs without SHA-256 acceleration, and when using WebAssembly, TurboSHAKE is the fastest function we have in the standard library. If you need a modern, portable, secure, overall fast hash function / XOF, that is not vulnerable to length-extension attacks (unlike SHA-256), TurboSHAKE should be your go-to choice.
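A sketch of using SHAKE as a XOF, assuming the init/update/squeeze interface:

```zig
const std = @import("std");

test "SHAKE-128 extendable output" {
    var st = std.crypto.hash.sha3.Shake128.init(.{});
    st.update("some input");
    // As a XOF, the output can be of any length:
    var out: [64]u8 = undefined;
    st.squeeze(&out);
}
```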
Kyber is a post-quantum public key encryption and key exchange mechanism. It was selected by NIST for the first post-quantum cryptography standard.
It is available in the standard library, in the std.crypto.kem
namespace, making Zig the first language with post-quantum cryptography available right in the standard library.
Kyber512
, Kyber768
and Kyber1024
, as specified in the current draft, are supported.
The TLS Client also supports the hybrid X25519Kyber768
post-quantum key agreement mechanism by default.
Thanks a lot to Bas Westerbaan for contributing this!
Cryptography frequently requires computations over arbitrary finite fields.
This is why a new namespace made its appearance: std.crypto.ff
.
Functions from this namespace never require dynamic allocations, are designed to run in constant time, and transparently perform conversions from/to the Montgomery domain.
This allowed us to implement RSA verification without using any allocators.
Side channels in cryptographic code can be exploited to leak secrets, and mitigations are useful but come with a performance hit.
For some applications, performance is critical, and side channels may not be part of the threat model. For other applications, hardware-based attacks are a concern, and mitigations should go beyond constant-time code.
Zig 0.11 introduces the std_options.side_channels_mitigations global setting to accommodate these different use cases.
It can have 4 different values:

- none: doesn't enable additional mitigations. "Additional", because it only disables mitigations that don't have a big performance cost. For example, checking authentication tags will still be done in constant time.
- basic: enables mitigations protecting against attacks in a common scenario, where an attacker doesn't have physical access to the device, cannot run arbitrary code on the same thread, and cannot conduct brute-force attacks without being throttled.
- medium: enables additional mitigations, targeting protection against practical attacks even in a shared environment.
- full: enables all the available mitigations, going beyond ensuring that the code runs in constant time.
The more mitigations are enabled, the bigger the performance hit will be, but this lets applications choose what's best for their use case. medium is the default.
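The setting lives in the consolidated std_options namespace of the root source file; a sketch:

```zig
// Root source file (e.g. main.zig).
pub const std_options = struct {
    // One of: .none, .basic, .medium (the default), .full
    pub const side_channels_mitigations = .medium;
};
```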
Gimli and Xoodoo have been removed from the standard library, in favor of Ascon.
These are great permutations, and there's nothing wrong with them from a practical security perspective.
However, both were competing in the NIST lightweight crypto competition.
Gimli didn't pass the 3rd selection round, and was not used much in the wild besides Zig and libhydrogen. It will never be standardized and is unlikely to get more traction in the future.
The Xoodyak mode, which Xoodoo is the permutation of, brought some really nice ideas. There are discussions to standardize a Xoodyak-like mode, but without Xoodoo.
So, the Zig implementations of these permutations are better maintained outside the standard library.
For lightweight crypto, Ascon is the one that we know NIST will standardize and that we can safely rely on from a usage perspective.
So, Ascon was added instead (in crypto.core.Ascon
). We support the 128
and 128a
variants, both with Little-Endian or Big-Endian encoding.
Note that we currently only ship the permutation itself, as the actual constructions are very likely to change a lot during the ongoing standardization process.
The default CSPRNG (std.rand.DefaultCsprng
) used to be based on Xoodoo. It was replaced by a traditional ChaCha-based random number generator, that also improves performance on most platforms.
For constrained environments, std.rand.Ascon
is also available as an alternative. As the name suggest, it's based on Ascon, and has very low memory requirements.
- <hash size> * 255 bytes not an error any more.
- ReleaseSmall mode.
- 0 used to be rejected with an IdentityElement error. These points were also not properly serialized. That has been fixed.
- EOR3 instruction on Apple Silicon for a slight performance boost.

For a few releases, there was a std.x namespace which was a playground for some contributors to experiment with networking. In Zig 0.11, networking is no longer experimental; it is part of the Package Management strategy. However, networking is still immature and buggy, so use it at your own risk.
res
nullable in std.c.getaddrinfo
Zig 0.11.0 now provides a client implementation of Transport Layer Security v1.3.
Thanks to Zig's excellent Crypto, the
implementation
came out lovely. Search for ++
if you want to see a nice
demonstration of Concatenation of Arrays and Tuples. This is also a nice showcase
of inline switch cases.
As lovely as it may be, there is not yet a TLS server implementation and so this code has not been fuzz-tested. In fact there is not yet any automated testing for this API, so use it at your own risk.
I want to recognize Shigueredo, whose TLS v1.3 implementation I took inspiration from, for sponsoring us and for allowing us to copy their RSA implementation until Frank Denis implemented Constant-Time, Allocation-Free Field Arithmetic. Thank you so much!
The TLS client is a dependency of the HTTP Client which is a dependency of Package Management.
Open follow-up issues:
There is now an HTTP client (#15123). It is used by the Compiler to fetch URLs as part of Package Management.
It supports some basic features such as:
It is still very immature and not yet tested in a robust manner. Please use it at your own risk.
For more information, please refer to this article written by the maintainer of std.http: Coming Soon to a Zig Near You: HTTP Client
Start code now tells the kernel to ignore SIGPIPE before calling main (#11982). This can be disabled by adding this to the root module:
pub const keep_sigpipe = true;
Adjust this for Compile-Time Configuration Consolidated.
`SIGPIPE` is triggered when a process attempts to write to a broken pipe. By default, SIGPIPE will terminate the process without giving the program an opportunity to handle the situation. Unlike a segfault, it doesn't trigger the panic handler so all the developer sees is that the program terminated with no indication as to why.
By telling the kernel to instead ignore SIGPIPE, writes to broken pipes will return the EPIPE error (error.BrokenPipe) and the program can handle them like any other error.
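With the consolidated compile-time configuration, the same flag would be declared under std_options in the root source file; a sketch:

```zig
// Root source file. With Compile-Time Configuration Consolidated,
// the flag lives under std_options:
pub const std_options = struct {
    pub const keep_sigpipe = true;
};
```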
- DW_AT_ranges in subprograms. Some DWARF5 subprograms have non-contiguous instruction ranges, which wasn't supported before. An example of such a function is puts in Ubuntu's libc. Stack traces now include the names of these functions.
- -gdwarf when compiling C/C++ code for Windows.

Casey Banner writes:
When something goes wrong in your program, at the very least you expect it to output a stack trace. In many cases, upon seeing the stack trace, the error is obvious and can be fixed without needing to attach a debugger. If you are a project maintainer, having correct stack trace output is a necessity for your users to be able to provide actionable bug reports when something goes wrong.
In order to print a stack trace, the panic (or segfault) handler needs to unwind the stack, by traversing back through the stack frames starting at the crash site. Up until now, this was done strictly by utilizing the frame pointer. This method of stack unwinding works assuming that a frame pointer is available, which isn't the case if the code is compiled without one - i.e., if -fomit-frame-pointer was used.
It can be beneficial for performance reasons to not use a frame pointer, since this frees up an additional register, so some software maintainers may choose to ship libraries compiled without it. One of the motivating reasons for this change was solving a bug where unwinding a stack trace that started in Ubuntu's libc wasn't working - and indeed it is compiled with -fomit-frame-pointer
.
Since #15823 was merged, the Standard Library stack unwinder (std.debug.StackIterator
) now supports unwinding stacks using both DWARF unwind tables, and MachO compact unwind information. These unwind tables encode how to unwind all the register state and recover the return address for any location in the program.
In order to save space, DWARF unwind tables aren't program-sized lookup tables, but instead sets of opcodes which run on a virtual machine inside the unwinder to build the lookup table dynamically. Additionally, these tables can define register values in terms of DWARF expressions, which is a separate stack-machine based bytecode. This is all supported in the new unwinder.
As an example of how this improves stack trace output, consider the following zig program and C library (which will be built with -fomit-frame-pointer
):
Before the stack unwinding changes, the user would see the following output:
Segmentation fault at address 0x1234
???:?:?: 0x7f71d9ec997d in ??? (???)
/home/user/kit/zig/build-stage3-release-linux/lib/zig/std/start.zig:608:37: 0x20a505 in main (main)
const result = root.main() catch |err| {
^
Aborted
With the new unwinder:
Segmentation fault at address 0x1234
../sysdeps/x86_64/multiarch/strlen-avx2.S:74:0: 0x7fefd03b297d in ??? (../sysdeps/x86_64/multiarch/strlen-avx2.S)
./libio/ioputs.c:35:16: 0x7fefd0295ee7 in _IO_puts (ioputs.c)
src/lib.c:8:5: 0x7fefd04484aa in add_mult3 (/home/user/temp/stack/src/lib.c)
puts((const char*)0x1234);
^
src/lib.c:13:12: 0x7fefd0448542 in add_mult2 (/home/user/temp/stack/src/lib.c)
return add_mult3(x, y, n);
^
src/lib.c:17:12: 0x7fefd0448572 in add_mult1 (/home/user/temp/stack/src/lib.c)
return add_mult2(x, y, n);
^
src/lib.c:21:12: 0x7fefd04485a2 in add_mult (/home/user/temp/stack/src/lib.c)
return add_mult1(x, y, n);
^
/home/user/temp/stack/src/main.zig:6:45: 0x2123b7 in main (main)
std.debug.print("add: {}\n", .{ add_mult(5, 3, null) });
^
/home/user/kit/zig/build-stage3-release-linux/lib/zig/std/start.zig:608:37: 0x2129b4 in main (main)
const result = root.main() catch |err| {
^
../sysdeps/nptl/libc_start_call_main.h:58:16: 0x7fefd023ed8f in __libc_start_call_main (../sysdeps/x86/libc-start.c)
../csu/libc-start.c:392:3: 0x7fefd023ee3f in __libc_start_main_impl (../sysdeps/x86/libc-start.c)
???:?:?: 0x212374 in ??? (???)
???:?:?: 0x0 in ??? (???)
Aborted
The unwind information for libc (which comes from a separate file, see below) was loaded and used to unwind the stack correctly, resulting in a much more useful stack trace.
If there is no unwind information available for a given frame, the unwinder will fall back to frame pointer unwinding for the rest of the stack trace. For example, if the above program is built for x86-linux-gnu on the same system (which only has x86_64 libc debug information installed), it results in the following output:
Segmentation fault at address 0x1234
???:?:?: 0xf7dc9555 in ??? (libc.so.6)
Unwind information for `libc.so.6:0xf7dc9555` was not available (error.MissingDebugInfo), trace may be incomplete
???:?:?: 0x0 in ??? (???)
Aborted
The user is notified that unwind information is missing, and they could choose to install it to enhance the stack trace output.
This system works for both panic traces as well as segfaults. In the case of a segfault, the OS will pass a context (containing the state of all the registers at the time of the segfault) to the handler, which will be used by the unwinder. In the case of a panic, the unwinder still needs a register context, so one is captured by the panic handler. If the program is linking libc, then libc's getcontext
is used, otherwise an implementation in std
is used if available. On platforms where getcontext
isn't available, the stack unwinder falls back to frame pointer based unwinding.
Implementations of getcontext
have been added for x86_64-linux
and x86-linux
.
The ELF format allows for splitting debug information sections into separate files. If the user of the software does not typically need to debug it, then debug info can be shipped as an optional dependency to reduce the size of the installation. A primary use case for this feature is libc debug information, which can be quite large. Some distributions have a separate package that contains only the debug info for their libc, which can be installed separately.
These extra files are simply additional ELF files that contain only the debug info sections. As an additional space-saving measure, these sections can also be compressed. For example, in the libc stack traces above the debug information came from /usr/lib/debug/.build-id/69/389d485a9793dbe873f0ea2c93e02efaa9aa3d.debug
, not libc.so.6
.
Support for reading external debug information has been added, with this set of changes:
- .gnu_debuglink section to look up external debug info

Josh Wolfe writes:
New std.json
features:
- json.reader. See json.Token for fine control over the interaction between large tokens, streaming input, and allocators.
- {} and [] nesting depth subject to allocator limitations.
- json.Value tree/array union, you can now call json.parseFromValue* to parse that into a static type T following essentially the same semantics as parsing directly.
- json.parseFrom* is customizable by implementing pub fn jsonParse and/or pub fn jsonParseFromValue in your struct, enum, or union(enum). This mirrors the customizable pub fn jsonStringify functionality.
- json.HashMap(T) container for serializing/deserializing objects with arbitrary string fields. It's a thin wrapper around StringArrayHashMapUnmanaged that implements jsonParse, jsonParseFromValue, and jsonStringify.
- json.WriteStream features now available to custom pub fn jsonStringify implementations due to the unification of json.stringify and json.WriteStream.
- endArray matches a corresponding beginArray.
- @Vector support for std.json.parse

Here is an upgrade guide:
These instructions include the breaking changes from #15602, #15705, #15981, and #16405.
parse replaced by parseFromSlice or other parseFrom*

For code that used to look like this:
Now do this:
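A sketch of both forms side by side (the Place type is hypothetical):

```zig
const std = @import("std");

const Place = struct { lat: f32, long: f32 };

test "parseFromSlice upgrade" {
    const doc =
        \\{ "lat": 40.68, "long": -74.40 }
    ;
    // 0.10.x:
    //   var stream = std.json.TokenStream.init(doc);
    //   const place = try std.json.parse(Place, &stream, .{});
    //   defer std.json.parseFree(Place, place, .{});
    // 0.11.0:
    const parsed = try std.json.parseFromSlice(Place, std.testing.allocator, doc, .{});
    defer parsed.deinit();
    try std.testing.expect(parsed.value.lat > 40.0);
}
```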
parseFree replaced by Parsed(T).deinit()

See above. Alternatively, use the parseFrom*Leaky variants and manage your own arena.
Parser.parse replaced by parseFromSlice into Value

For code that used to look like this:
Now do this:
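A sketch of both forms:

```zig
const std = @import("std");

test "dynamic Value upgrade" {
    const doc =
        \\{ "count": 42 }
    ;
    // 0.10.x:
    //   var parser = std.json.Parser.init(allocator, false);
    //   defer parser.deinit();
    //   var tree = try parser.parse(doc);
    //   defer tree.deinit();
    // 0.11.0: parse into std.json.Value like any other type.
    const parsed = try std.json.parseFromSlice(std.json.Value, std.testing.allocator, doc, .{});
    defer parsed.deinit();
    try std.testing.expect(parsed.value.object.get("count").?.integer == 42);
}
```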
ValueTree replaced by Parsed(Value)

The result of json.parseFrom*(T, ...) (except for json.parseFrom*Leaky(T, ...)) is json.Parsed(T), which replaces the old ValueTree.
writeStream API simplification

The default max depth for writeStream is now 256. To specify a deeper max depth, use writeStreamMaxDepth.
You don't need to call arrayElem() anymore.
All the emit*() methods (emitNumber, emitString, emitBool, emitNull, emitJson) have been replaced by the generic write() method, which takes anytype. The generic json.stringify functionality for structs, unions, etc. is also available in WriteStream.write() (the implementation of stringify now uses WriteStream.write).
jsonStringify signature changed

Instead of pub fn jsonStringify(self: *@This(), options: json.StringifyOptions, out_stream: anytype) !void, use pub fn jsonStringify(self: *@This(), jw: anytype) !void, where jw is a mutable pointer to a WriteStream. Instead of writing bytes to the out_stream, you should call write() and beginObject and such on the WriteStream.
stringify limits nesting to 256 by default

The maximum depth of { / [ nesting in the output of json.stringify is now 256. Now that the implementation of stringify uses a WriteStream, we have safety checks for matching endArray to beginArray and such, which require memory: 1 bit per nesting level. To disable syntax checks and save that memory, use stringifyMaxDepth(..., null). To make syntax checks available to custom pub fn jsonStringify implementations at arbitrary nesting depth, use stringifyArbitraryDepth and provide an allocator.
StringifyOptions overhauled

- escape_unicode moved to the top level.
- escape_solidus removed.
- string = .Array replaced by .emit_strings_as_arrays = true.
- whitespace.indent_level removed.
- whitespace.separator and whitespace.indent combined into .whitespace = .minified vs .whitespace = .indent_2 etc.

The default whitespace in all contexts is now .minified. This is changed from the old WriteStream having effectively .indent_1, and the old StringifyOptions{ .whitespace = .{} } having effectively .indent_4.
TokenStream replaced by Scanner

For code that used to look like this:
Now do this:
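A sketch of both forms:

```zig
const std = @import("std");

test "Scanner upgrade" {
    const doc = "[1, 2, 3]";
    // 0.10.x: var tokens = std.json.TokenStream.init(doc);
    // 0.11.0:
    var scanner = std.json.Scanner.initCompleteInput(std.testing.allocator, doc);
    defer scanner.deinit();
    while (true) {
        const token = try scanner.next();
        if (token == .end_of_document) break;
    }
}
```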
See json.Token
for more info.
StreamingParser replaced by Reader

For code that used to look like this:
Now do this:
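A sketch of the new streaming form:

```zig
const std = @import("std");

test "json.reader upgrade" {
    var fbs = std.io.fixedBufferStream("[1, 2, 3]");
    // 0.10.x: std.json.StreamingParser was fed a byte at a time.
    // 0.11.0: wrap any std.io reader instead:
    var json_reader = std.json.reader(std.testing.allocator, fbs.reader());
    defer json_reader.deinit();
    while (true) {
        const token = try json_reader.next();
        if (token == .end_of_document) break;
    }
}
```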
See json.Token
for more info.
union types

Parsing and stringifying union(enum) types works differently now by default. For const T = union(enum) { s: []const u8, i: i32 }, the old behavior used to accept documents in the form "abc" or 123 to parse into .{ .s = "abc" } or .{ .i = 123 } respectively; the new behavior accepts documents in the form {"s":"abc"} or {"i":123} instead. Stringifying is updated as well to maintain the bijection.
The dynamic value json.Value
can be used for simple int-or-string cases. For more complex cases, you can implement jsonParse
, jsonParseFromValue
, and jsonStringify
as needed.
Sorry for the inconvenience. There are some ideas to restore support for allocatorless json parsing, but for now, you must always use an allocator. At a minimum it is used for tracking the {} vs [] nesting depth, and possibly other uses depending on which std.json API is being called.
If you use a std.FixedBufferAllocator
, you can make parsing json work at comptime.
posix_spawn is trash. It's actually implemented on top of fork/exec inside of libc (or libSystem in the case of macOS).
So, anything posix_spawn can do, we can do better. In particular, what we can do better is handle spawning of child processes that are potentially foreign binaries. If you try to spawn a wasm binary, for example, posix spawn does the following:
This behavior is indistinguishable from the binary being successfully spawned, and then printing to stderr, and exiting with a failure - something that is an extremely common occurrence.
Meanwhile, using the lower level fork/exec will simply return ENOEXEC code from the execve syscall (which is mapped to zig error.InvalidExe).
The posix_spawn behavior means the zig build runner can't tell the difference between a failure to run a foreign binary, and a binary that did run, but failed in some other fashion. This is unacceptable, because attempting to execve is the proper way to support things like Rosetta.
Therefore, use of posix_spawn is eliminated from the standard library, in order to facilitate Foreign Target Execution and Testing.
With this release, the Zig Build System is no longer experimental. It is the début of Package Management.
- Renamed std.Build.FooStep to std.Build.Step.Foo (#14947).
- Fixed WriteFile.getFileSource failure on Windows (#15730).

The introduction of Package Management required some terminology in the Zig Build System to be changed.
Previously, a directory of Zig source files with one root source file which could be imported by name was known as a package. It is now instead called a module.
This frees up the term package to be used in the context of package management. A package is a directory of files, uniquely identified by a hash of all files, which can export any number of compilation artifacts and modules.
A large number of types and functions in the build system have been renamed in this release cycle:

- std.build and std.build.Builder combined into std.Build
- LibExeObjStep to CompileStep
- InstallRawStep to ObjCopyStep
- std.Build.FooStep to std.Build.Step.Foo (e.g. std.Build.Step.Compile)
- std.Build.FileSource to std.Build.LazyPath (#16446)
- LazyPath in the name has been renamed, generally replacing it with either File or Path
- std.builtin.Mode to std.builtin.OptimizeMode, and all references to "build mode" changed to "optimize mode"
- std.Build.Pkg to std.Build.Module
is no longer set
separately using setter methods (previously setTarget
and
setBuildMode
). Instead, they are provided at the time of step creation, for
instance to std.Build.addExecutable
.
Zig 0.11 is the début of the official package manager. The package manager is still in its early stages, but is mature enough to use in many cases. There is no "official" package repository: packages are simply arbitrary directory trees which can be local directories or archives from the Internet.
Package information is declared in a file named build.zig.zon
. ZON (Zig Object
Notation) is a simple data interchange format introduced in this release cycle, which uses Zig's
anonymous struct and array initialization syntax to declare objects in a manner similar to other
formats such as JSON. The build.zig.zon
file for a package should look like this:
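A sketch of such a file; the package name, dependency name, URL, and hash below are placeholders:

```zig
.{
    .name = "my-package",
    .version = "0.1.0",
    .dependencies = .{
        .example_dep = .{
            .url = "https://example.com/example_dep-0.1.0.tar.gz",
            // Omit .hash at first; `zig build` reports the expected value.
            .hash = "1220...",
        },
    },
}
```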
The information provided is the package name and version, and a list of dependencies, each of
which has a name, a URL to an archive, and a hash. The hash is not of the archive itself, but of
its contents. In order to find it, it can be omitted from the file, and zig build
will emit an error containing the expected hash. There will be tooling in the future to make this file easier to modify.
So far, tar.gz and tar.xz formats are supported, with more planned, as well as a plan for custom fetch plugins.
This information is provided in a separate file (rather than declared in the
build.zig
script) to speed up the package manager by allowing package fetching to
happen without the need to build and run the build script. This also allows tooling to observe
dependency graphs without having to execute potentially dangerous code.
Every dependency can expose a collection of binary artifacts and Zig modules. The `std.Build.addModule` function creates a new Zig module which is publicly exposed from your package; i.e. one which dependent packages can use. (To create a private module, instead use `std.Build.createModule`.) Regarding binary artifacts, any artifact which is installed (for instance, via `std.Build.installArtifact`) is exposed to dependent packages.
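On the exposing side, a dependency's `build.zig` might contain something like this sketch (module and library names are illustrative; note that in 0.11 `addModule` takes a `.source_file` field):

```zig
// In the dependency's build.zig (illustrative names).
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // Publicly exposed Zig module:
    _ = b.addModule("example", .{
        .source_file = .{ .path = "src/example.zig" },
    });

    // Installed artifacts are exposed to dependent packages:
    const lib = b.addStaticLibrary(.{
        .name = "example",
        .root_source_file = .{ .path = "src/lib.zig" },
        .target = target,
        .optimize = optimize,
    });
    b.installArtifact(lib);
}
```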
In the build script, dependencies can be referenced using the std.Build.dependency
function. This takes the name of a dependency (as given in build.zig.zon
) and
returns a *std.Build.Dependency
. You can then use the
artifact
and module
methods on this object to get binary artifacts and
Zig modules exposed by the dependency.
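A consuming build script might look like this sketch (the dependency, module, and artifact names are illustrative; `exe`, `target`, and `optimize` are assumed to be set up as usual):

```zig
// In the dependent's build.zig, assuming "example_dep" is declared
// in build.zig.zon.
const dep = b.dependency("example_dep", .{
    .target = target,
    .optimize = optimize,
});
// Use a Zig module exposed by the dependency:
exe.addModule("example", dep.module("example"));
// Link against a binary artifact exposed by the dependency:
exe.linkLibrary(dep.artifact("example"));
```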
If you wish to vendor a dependency rather than fetch it from the Internet, you can use the
std.Build.anonymousDependency
function, which takes as arguments a path to the
package's build root, and an @import
of its build script.
Both `dependency` and `anonymousDependency` take a parameter `args`. This is an anonymous struct containing arbitrary arguments to pass to the build script, which it can access as if they were passed to the script through `-D` flags (via `std.Build.option`). Options from the current package are not implicitly provided to dependencies, and must be explicitly forwarded where required.
It is standard for packages to use `std.Build.standardOptimizeOption` and `std.Build.standardTargetOptions` when they need an optimization level and/or target from their dependent. This allows the dependent to simply forward these values with the names `optimize` and `target`.
Explicitly passing the target and optimization level like this allows a build script to build some binaries for different targets or at different optimization levels, which can, for instance, be useful when interfacing with WebAssembly.
Every package uses a separate instance of std.Build
, managed by the build system.
It is important to perform operations on the correct instance. This will always be the one
passed as a parameter to the build
function in your build script.
The package manager is in its early stages, and will likely undergo significant changes
before 1.0. Some planned features include optional dependencies, better support for binary
dependencies, the ability to construct a LazyPath
from an arbitrary file from a
dependency, improved tooling, and more. However, the package manager is in a state where it is
usable for some projects, particularly simple pure-Zig projects.
The build system supports adding steps to install and run compiled executables. This was previously done using the `install` and `run` methods on `std.Build.Step.Compile`. However, this leads to ambiguities about the owner package in the presence of package management. Therefore, these operations must now be done with these functions:

- `b.installArtifact(exe)` creates an install step for `exe` and adds it as a dependency of `b`'s top-level install step
- `b.addRunArtifact(exe)` creates and returns a run step for `exe`
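A minimal sketch of the new functions in a build script (step names are illustrative):

```zig
const exe = b.addExecutable(.{
    .name = "app",
    .root_source_file = .{ .path = "src/main.zig" },
    .target = target,
    .optimize = optimize,
});
b.installArtifact(exe); // replaces the old exe.install()

const run_cmd = b.addRunArtifact(exe); // replaces the old exe.run()
run_cmd.step.dependOn(b.getInstallStep());
const run_step = b.step("run", "Run the app");
run_step.dependOn(&run_cmd.step);
```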
Example use case unlocked by this change: depending on nasm and using it to produce object files
Previously, when the build system invoked the Zig compiler, it simply forwarded stderr to the terminal, so the user could see any errors. This solution limits the possibility of integration between the build system and the compiler. Therefore, the build system now communicates information to the compiler using a binary protocol.
This protocol will likely not be used by end users, but it is enabled using the
--listen
argument to the compiler, and communicates over TCP (with default port
14735) or stdio. The types in std.zig.Server
are used by the protocol, and a usage
example can be found in std.Build.Step.evalZigProcess
.
The usage of this compiler protocol does mean there can be a small time delay between something like a compilation error occurring and it being reported by `zig build`; however, it has the advantage of allowing the build system to receive much more detailed information about the build, enabling functionality like the Build Summary.
zig build
will now print a summary of all build steps after completing. This
summary includes information on which steps succeeded, which failed, and why. The
--summary
option controls what information is printed:
Please note that the output from this option is currently not color-blind friendly. This will be improved in the future.
Here is example output from running zig build test-behavior -fqemu -fwasmtime --summary all
in zig's codebase:
Build Summary: 67/80 steps succeeded; 13 skipped; 36653/39320 tests passed; 2667 skipped
test-behavior success
├─ run test behavior-native-Debug cached
│ └─ zig test Debug native cached 21s MaxRSS:52M
├─ run test behavior-native-Debug-libc cached
│ └─ zig test Debug native cached 21s MaxRSS:52M
├─ run test behavior-native-Debug-single cached
│ └─ zig test Debug native cached 20s MaxRSS:52M
├─ run test behavior-native-Debug-libc-cbe 1666 passed 113 skipped 16ms MaxRSS:20M
│ └─ zig build-exe behavior-native-Debug-libc-cbe Debug native success 16s MaxRSS:731M
│ └─ zig test Debug native success 21s MaxRSS:134M
├─ run test behavior-x86_64-linux-none-Debug-selfhosted 1488 passed 291 skipped 29ms MaxRSS:17M
│ └─ zig test Debug x86_64-linux-none success 1s MaxRSS:115M
├─ run test behavior-wasm32-wasi-Debug-selfhosted 1441 passed 342 skipped 639ms MaxRSS:51M
│ └─ zig test Debug wasm32-wasi success 718ms MaxRSS:115M
├─ run test behavior-x86_64-macos-none-Debug-selfhosted skipped
│ └─ zig test Debug x86_64-macos-none success 21s MaxRSS:121M
├─ run test behavior-x86_64-windows-gnu-Debug-selfhosted skipped
│ └─ zig test Debug x86_64-windows-gnu success 2s MaxRSS:114M
├─ run test behavior-wasm32-wasi-Debug 1674 passed 109 skipped 2s MaxRSS:83M
│ └─ zig test Debug wasm32-wasi cached 20ms MaxRSS:51M
├─ run test behavior-wasm32-wasi-Debug-libc 1674 passed 109 skipped 1s MaxRSS:93M
│ └─ zig test Debug wasm32-wasi cached 8ms MaxRSS:51M
├─ run test behavior-x86_64-linux-none-Debug cached
│ └─ zig test Debug x86_64-linux-none cached 24ms MaxRSS:52M
├─ run test behavior-x86_64-linux-gnu-Debug-libc skipped
│ └─ zig test Debug x86_64-linux-gnu success 13s MaxRSS:440M
├─ run test behavior-x86_64-linux-musl-Debug-libc 1698 passed 91 skipped 353ms MaxRSS:17M
│ └─ zig test Debug x86_64-linux-musl success 13s MaxRSS:439M
├─ run test behavior-x86-linux-none-Debug 1693 passed 96 skipped 20ms MaxRSS:20M
│ └─ zig test Debug x86-linux-none success 21s MaxRSS:436M
├─ run test behavior-x86-linux-musl-Debug-libc 1693 passed 96 skipped 26ms MaxRSS:19M
│ └─ zig test Debug x86-linux-musl success 20s MaxRSS:454M
├─ run test behavior-x86-linux-gnu-Debug-libc skipped
│ └─ zig test Debug x86-linux-gnu success 21s MaxRSS:462M
├─ run test behavior-aarch64-linux-none-Debug 1687 passed 102 skipped 2s MaxRSS:31M
│ └─ zig test Debug aarch64-linux-none success 16s MaxRSS:449M
├─ run test behavior-aarch64-linux-musl-Debug-libc 1687 passed 102 skipped 1s MaxRSS:34M
│ └─ zig test Debug aarch64-linux-musl success 17s MaxRSS:457M
├─ run test behavior-aarch64-linux-gnu-Debug-libc skipped
│ └─ zig test Debug aarch64-linux-gnu success 14s MaxRSS:457M
├─ run test behavior-aarch64-windows-gnu-Debug-libc skipped
│ └─ zig test Debug aarch64-windows-gnu success 14s MaxRSS:402M
├─ run test behavior-arm-linux-none-Debug 1686 passed 103 skipped 737ms MaxRSS:29M
│ └─ zig test Debug arm-linux-none cached 12ms MaxRSS:51M
├─ run test behavior-arm-linux-musleabihf-Debug-libc 1686 passed 103 skipped 768ms MaxRSS:31M
│ └─ zig test Debug arm-linux-musleabihf cached 12ms MaxRSS:52M
├─ run test behavior-mips-linux-none-Debug 1686 passed 103 skipped 447ms MaxRSS:36M
│ └─ zig test Debug mips-linux-none success 25s MaxRSS:454M
├─ run test behavior-mips-linux-musl-Debug-libc 1686 passed 103 skipped 417ms MaxRSS:39M
│ └─ zig test Debug mips-linux-musl success 28s MaxRSS:471M
├─ run test behavior-mipsel-linux-none-Debug 1688 passed 101 skipped 406ms MaxRSS:34M
│ └─ zig test Debug mipsel-linux-none success 23s MaxRSS:454M
├─ run test behavior-mipsel-linux-musl-Debug-libc 1688 passed 101 skipped 751ms MaxRSS:37M
│ └─ zig test Debug mipsel-linux-musl cached 14ms MaxRSS:53M
├─ run test behavior-powerpc-linux-none-Debug 1687 passed 102 skipped 849ms MaxRSS:31M
│ └─ zig test Debug powerpc-linux-none cached 13ms MaxRSS:51M
├─ run test behavior-powerpc-linux-musl-Debug-libc 1687 passed 102 skipped 782ms MaxRSS:32M
│ └─ zig test Debug powerpc-linux-musl cached 8ms MaxRSS:51M
├─ run test behavior-powerpc64le-linux-none-Debug 1690 passed 99 skipped 758ms MaxRSS:31M
│ └─ zig test Debug powerpc64le-linux-none cached 12ms MaxRSS:51M
├─ run test behavior-powerpc64le-linux-musl-Debug-libc 1690 passed 99 skipped 542ms MaxRSS:31M
│ └─ zig test Debug powerpc64le-linux-musl cached 9ms MaxRSS:51M
├─ run test behavior-powerpc64le-linux-gnu-Debug-libc skipped
│ └─ zig test Debug powerpc64le-linux-gnu cached 11ms MaxRSS:51M
├─ run test behavior-riscv64-linux-none-Debug 1689 passed 100 skipped 669ms MaxRSS:28M
│ └─ zig test Debug riscv64-linux-none cached 7ms MaxRSS:49M
├─ run test behavior-riscv64-linux-musl-Debug-libc 1689 passed 100 skipped 711ms MaxRSS:30M
│ └─ zig test Debug riscv64-linux-musl cached 7ms MaxRSS:51M
├─ run test behavior-x86_64-macos-none-Debug skipped
│ └─ zig test Debug x86_64-macos-none cached 20s MaxRSS:51M
├─ run test behavior-aarch64-macos-none-Debug skipped
│ └─ zig test Debug aarch64-macos-none cached 7ms MaxRSS:49M
├─ run test behavior-x86-windows-msvc-Debug skipped
│ └─ zig test Debug x86-windows-msvc cached 7ms MaxRSS:50M
├─ run test behavior-x86_64-windows-msvc-Debug skipped
│ └─ zig test Debug x86_64-windows-msvc cached 21s MaxRSS:51M
├─ run test behavior-x86-windows-gnu-Debug-libc skipped
│ └─ zig test Debug x86-windows-gnu cached 7ms MaxRSS:50M
└─ run test behavior-x86_64-windows-gnu-Debug-libc skipped
└─ zig test Debug x86_64-windows-gnu cached 7ms MaxRSS:52M
Zig build scripts are, by default, run by `build_runner.zig`, a program distributed with Zig. In some cases, such as for custom tooling which wishes to observe the step graph, it may be useful to override the build runner with a different Zig file. This is now possible using the option `--build-runner path/to/runner.zig`.
The cache system has been moved from the compiler to the standard library, and the build system now uses it.
The Zig build system is now capable of running multiple build steps in parallel. The build
runner analyzes the build step graph, and runs steps in a thread pool, with a default thread
count corresponding to the number of CPU cores available for optimal CPU utilization. The number
of threads used can be changed with the -j
option.
This change can allow projects with many build steps to build significantly faster.
The build system contains a type called LazyPath
(formerly
FileSource
) which allows depending on a file or directory which
originates from one of many sources: an absolute path, a path relative to the build runner's
working directory, or a build artifact. The build system now makes extensive use of
LazyPath
anywhere we reference an arbitrary path.
This makes the build system more versatile by making it easier to use generated files in a
variety of contexts, since a LazyPath
may be created to reference the
result of any step emitting a file, such as a std.Build.Step.Compile
or
std.Build.Step.Run
.
The most notable change here is that Step.Compile
no longer has an
output_dir
field. Rather than depending on the location a binary is
emitted to, the LazyPath
abstraction must be used, for instance through
getEmittedBin
(formerly getOutputSource
). There
are also methods to get the paths corresponding to other compilation artifacts:
Step.Compile.getEmittedBin
Step.Compile.getEmittedImplib
Step.Compile.getEmittedH
Step.Compile.getEmittedPdb
Step.Compile.getEmittedDocs
Step.Compile.getEmittedAsm
Step.Compile.getEmittedLlvmIr
Step.Compile.getEmittedLlvmBc
Getting these files will cause the build system to automatically set the appropriate compiler
flags to generate them. As such, the old emit_X
fields have been removed.
Step.InstallDir
now uses a LazyPath
for its
source_dir
field, allowing installing a generated directory without a
known path. As a general rule, hardcoded paths outside of the installation directory should be
avoided where possible.
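As a sketch, a generated binary can be fed to another step through its `LazyPath` rather than through a hardcoded output directory (the `objdump` invocation here is purely illustrative):

```zig
const exe = b.addExecutable(.{
    .name = "app",
    .root_source_file = .{ .path = "src/main.zig" },
    .target = target,
    .optimize = optimize,
});
// LazyPath to the emitted binary; requesting it causes the build
// system to set the appropriate emit flags automatically.
const bin = exe.getEmittedBin();
const objdump = b.addSystemCommand(&.{ "objdump", "-d" });
objdump.addFileArg(bin); // the Run step now depends on the compile step
```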
You can monitor and limit the peak memory usage of a given step. This helps the build system avoid scheduling too many intensive tasks simultaneously, and also helps you detect when a process starts to exceed reasonable resource usage.
The build system has these switches to enable cross-target testing:
-fdarling, -fno-darling Integration with system-installed Darling to
execute macOS programs on Linux hosts
(default: no)
-fqemu, -fno-qemu Integration with system-installed QEMU to execute
foreign-architecture programs on Linux hosts
(default: no)
--glibc-runtimes [path] Enhances QEMU integration by providing glibc built
for multiple foreign architectures, allowing
execution of non-native programs that link with glibc.
-frosetta, -fno-rosetta Rely on Rosetta to execute x86_64 programs on
ARM64 macOS hosts. (default: no)
-fwasmtime, -fno-wasmtime Integration with system-installed wasmtime to
execute WASI binaries. (default: no)
-fwine, -fno-wine Integration with system-installed Wine to execute
Windows programs on Linux hosts. (default: no)
However, there is even tighter integration with the system, if the system is configured for it. First, zig will try executing a given binary, without guessing whether the system will be able to run it. This takes advantage of binfmt_misc, for example.
Use skip_foreign_checks
if you want to prevent a cross-target failure
from failing the build.
This even integrates with the Compiler Protocol, allowing foreign executables to communicate metadata back to the build runner.
The build system has API to help you create C configuration header files from common formats, such as automake and CMake.
It is recommended to generally use Run steps instead of custom steps because it will properly integrate with the Cache System.
Added prefixed versions of addFileSource and addDirectorySource to Step.Run
Changed Step.Run's stdin to accept LazyPath (#16358).
Before, addTest created and ran a test. Now you need to use b.addRunArtifact to run your test executable.
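Under the new scheme, a typical test step sketch looks like this (step and path names are illustrative):

```zig
const unit_tests = b.addTest(.{
    .root_source_file = .{ .path = "src/main.zig" },
    .target = target,
    .optimize = optimize,
});
// addTest no longer runs the test binary; wire up a run step explicitly.
const run_unit_tests = b.addRunArtifact(unit_tests);
const test_step = b.step("test", "Run unit tests");
test_step.dependOn(&run_unit_tests.step);
```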
- `%[ident:mod]` syntax. This allows requesting, e.g., the "w" (32-bit) vs. "x" (64-bit) views of AArch64 registers.
- `**` on number types (#13871).
- `@export` with `linksection` option (#14035).
- `-fopt-bisect-limit` for debugging LLVM Backend miscompilations (#13826).
- `--build-id` styles (#15459).
- `usize` type inference in `for` range start and end (#16311).
- `@unionInit` when the first parameter is not a type (#16384).
- `writeToMemory`/`readFromMemory` for pointers, optionals, and packed unions.
- `@embedFile("")` not giving a proper error (#16480).
- `@export` for arbitrary values.
- `@extern` fixes.
- `*T` to `*[n]T`.
- `@Type` (#14712).
- `for`/`while` loops (#14684).
- `void` (#14779).
- `return x()` but not with `return try x()` inside recursion (#15669).

During this release cycle we worked towards Incremental Compilation and linking, but it is not ready to be enabled yet. We also worked towards Code Generation backends that compete with the LLVM Backend instead of depending on it, but those are also not ready to be enabled by default yet.
Those two efforts will yield drastic results. However, even without those done, this release of the compiler is generally expected to be a little bit faster and use a little bit less memory than 0.10.x releases.
Here are some performance data points, 0.10.1 vs this release:
- `zig build-exe` on this Tetris demo: 14.5% ± 2.7% faster wall clock time, 7.6% ± 0.2% fewer bytes of peak memory usage (x86_64-linux, baseline).
- `zig build-exe` on someone's Advent-of-Code project: 6.3% slower wall clock time, 8.7% more bytes of peak memory usage.

Note that the compiler is doing more work in 0.11.0 for most builds (including "Hello, World!") due to the Standard Library having more advanced Debugging capabilities, such as Stack Unwinding. The long-term plan to address this is Incremental Compilation.
During this release cycle, the C++ implementation of Zig was deleted.
The -fstage1
flag is no longer a recognized command-line parameter.
Zig is now bootstrapped using a 2.4 MiB WebAssembly file and a C compiler. Please enjoy this blog post which goes into the details: Goodbye to the C++ Implementation of Zig
Thanks to improvements to the C Backend, it is now possible to bootstrap on Windows using MSVC.
Also fixed: bootstrapping the compiler on ARM and on mingw.
The logic for detecting MSVC installations on Windows has been ported from C++ to Zig (#15657). That was the last C++ source file; the compiler is now 100% Zig code, except for LLVM libraries.
According to Zig's build modes documentation:
- `-ODebug` is not required to be reproducible (building the same zig source may output a different, semantically equivalent, binary)
- `-OReleaseSafe`, `-OReleaseSmall` and `-OReleaseFast` are all required to be reproducible (building the same zig source outputs a deterministic binary)

Terminology:

- stage1: `src/stage1/*`, compiled with a system toolchain
- stage2: `src/*.zig`, compiled with stage1
- stage3: `src/*.zig`, compiled with stage2
- stage4: `src/*.zig`, compiled with stage3

In theory, stage3 and stage4 should be byte-for-byte identical when compiled in release mode. In practice, this was not true. However, this has been fixed in this release. They now produce byte-for-byte identical executable files.
This property is verified by CI checks for these targets:
- `double` types fixed, with a lot of new test coverage added for them (#13376).

To get a sense of Zig's C ABI compatibility, have a look at the target coverage and test cases.

- `isAnyopaque`.
- Use `@constCast` and `@volatileCast` to remove CV-qualifiers instead of converting a pointer to an int and then back to a pointer.

The Zig compiler has several code backends. The primary one in use today is the LLVM backend, which emits LLVM IR in order to produce highly optimized binaries. However, this release cycle also saw major improvements to many of our "self-hosted" backends, most notably the x86 Backend, which is now passing the vast majority of behavior tests. Improvements to these backends are key to reaching the goal of Incremental Compilation.
Now passing 1652/1679 (98%) of the behavior tests, compared to the LLVM Backend.
The generated C code is now MSVC-compatible.
This backend is now used for Bootstrapping and is no longer considered experimental.
It has seen some optimizations to reduce the size of the generated C code, such as reusing locals where possible. However, there are still many more optimizations that could be done to further reduce the size of the generated C code.
Although the x86 backend is still considered experimental, it is now passing 1474/1679 (88%) of the behavior tests, compared to the LLVM Backend.
zig-dis-x86_64
This release did not see many user-facing features added to the WebAssembly backend. A few notable features are:
Besides those language features, the WebAssembly backend now also uses the
regular start.zig
logic as well as the standard test-runner. This is a big
step as the default test-runner logic uses a client-server architecture,
requiring a lot of the language to be implemented for it to work. This will
also help us further with completing the WebAssembly backend as the test-runner
provides us with more details about which test failed.
Lastly, a lot of bugs and miscompilations were fixed, passing more behavior tests. Although the WebAssembly backend is still considered experimental, it is now passing 1428/1657 (86%) of the behavior tests, compared to the LLVM Backend.
Robin "Snektron" Voetter writes:
This release cycle saw significant improvement of the self-hosted SPIR-V backend. SPIR-V is a bytecode representation for shaders and kernels that run on GPUs. For now, the SPIR-V backend of Zig is focused on generating code for OpenCL kernels, though Vulkan compatible shaders may see support in the future too.
The main contributions in this release cycle include a crude assembler for SPIR-V inline assembly, which is useful for supporting fringe types and operations, and other SPIR-V features that are not well expressed from within Zig.
The backend also saw improvements to codegen, and is now able to compile and execute about 37% of the compiler behavior test suite on select OpenCL implementations. Unfortunately this does not yet include Rusticl, which is currently missing a few features that the Zig SPIR-V backend requires.
Currently, executing SPIR-V tests requires third-party implementations of the test runner and test executor. In the future, these will be integrated further with Zig.
During this release cycle, some progress was made on this experimental backend, but there is nothing user-facing to report. It has not yet graduated beyond the simplified start code routines, so we have no behavior test percentage to report.
This release cycle sees minor improvements to error return traces. These traces are created by Zig for binaries built in safe release modes, and report the source of an error which was not correctly handled where a simple stack trace would be less useful.
A bug involving incorrect frames from loop bodies appearing in traces has been fixed. The following test case now gives a correct error return trace:
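The original test case is not reproduced here; as a hedged sketch, an error propagated from inside a loop body now yields a trace that points at the failing call rather than accumulating stale loop-iteration frames:

```zig
// Illustrative only: errors returned from inside a loop body no longer
// leave incorrect frames in the error return trace.
fn mayFail(i: usize) !void {
    if (i == 2) return error.Oops;
}

pub fn main() !void {
    var i: usize = 0;
    while (i < 4) : (i += 1) {
        try mayFail(i); // the trace should point here, once
    }
}
```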
Automatically optimize order of struct fields (#14336)
While still a highly WIP feature, this release cycle saw many improvements paving the way to incremental compilation capabilities in the compiler. One of the most significant was the InternPool changeset. This change is mostly invisible to Zig users, but brings many benefits to the compiler, among them that we are now much closer to incremental compilation. This is because this changeset brings new in-memory representations to many internal compiler data structures (most notably types and values) which are trivially serializable to disk, a requirement for incremental compilation.
This release additionally saw big improvements to Zig's native code generation and linkers, as well as beginning to move to emitting LLVM bitcode manually. The former will unlock extremely fast incremental compilations, and the latter is necessary for incremental compilation to work on the LLVM backend.
Incremental compilation will be a key focus of the 0.12.0 release cycle. The path to incremental compilation will roughly consist of the following steps:
While no guarantees can be made, it is possible that a basic form of incremental compilation will be usable in 0.12.0.
The method for specifying modules on the CLI has been changed to support recursive module
dependencies and shared dependencies. Previously, the --pkg-begin
and
--pkg-end
options were used to define modules in a "hierarchical" manner, nesting
dependencies inside their parent. This system was not compatible with shared dependencies or
recursive dependencies.
Modules are now specified using the --mod
and --deps
options, which
have the following syntax:
--mod
defines a module with a given name, dependency list, and root source file.
`--deps` specifies the list of dependencies of the main module. These options are not order-dependent. This defines modules in a "flat" manner and specifies dependencies indirectly, allowing dependency loops and shared dependencies. The name of a dependency can optionally be overridden from the "default" name in the dependency string.
For instance, the following zig build-exe
invocation defines two modules,
foo
and bar
. foo
depends on bar
under the
name bar1
, bar
depends on itself (under the default name
bar
), and the main module depends on both foo
and bar
.
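That invocation might look like the following (a hedged reconstruction, since the original example was lost in extraction; it assumes the `--mod name:deps:root_source_file` syntax, with `--deps` naming the main module's dependencies):

```shell
zig build-exe main.zig \
  --mod foo:bar1=bar:foo.zig \
  --mod bar:bar:bar.zig \
  --deps foo,bar
```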
The Build System supports creating module dependency loops by manually modifying the
dependencies
of a std.Build.Module
. Note that this API (and the CLI
invocation) is likely to undergo further changes in the future.
Depending on the target, Zig will use LLD, or its own
linker implementation. In order to override the default, pass -fLLD
or -fno-LLD
.
- decoupled `Decl` from `Atom` - now every linker is free to track `Decl` however they like; the (in)famous change that removed the dreaded `link.File.allocateDeclIndexes` function
- `-u` flag
- `DW_FORM_block*` and `DW_FORM_string` forms
- `dSYM` bundle emitted directly in the emit directory rather than the local cache
- `__DWARF` segment comes before `__LINKEDIT` in the `dSYM` bundle, which greatly simplifies incremental linking of debug info on macOS
- `CodeSignature.zig`: we can now reuse it in different places across the MachO linker. The parallel hasher is generic over an actual hasher such as Sha256 or MD5.
- `-install_name` and `-undefined`
- `-undefined error` flag
- `libstuff` library - ensures we catch regressions which may invalidate the binary when inspected by Apple tooling such as `codesign`, etc.
- `dyld` opcodes emitters - we now generate vastly compressed opcodes for the loader, reducing overall binary size and improving loading times
- `__TEXT,__unwind_info` and `__TEXT,__eh_frame` sections, making Zig-linked binaries compatible with `libunwind`
- `MH_SUBSECTIONS_VIA_SYMBOLS` compatible: the entry point may overlap with a section atom, which is perfectly fine, so don't panic in that case
- `TOOL=0x5` to mean Zig as the build tool in `LC_BUILD_VERSION`
- end of text (`_runtime.etext`) and beginning of read-only data (`_runtime.rodata`)
- `__TEXT,__stubs` entry
- `__TEXT,__eh_frame` sections emitted by the Nix C++ compiler
- `aarch64-windows` target
- renamed `TextBlock` into `Atom`
Luuk de Gram writes:
During this release, a lot of work has gone into the in-house WebAssembly
linker. The biggest feature it gained was the support of the shared
memory
feature. This allows multiple WebAssembly modules to access the
same memory. This feature opens support for multi-threading in WebAssembly.
This also required us to implement support for Thread-Local Storage. The linker is now also fully capable of linking with wasi-libc. Users can make use of the in-house linker by supplying the `-fno-LLD` flag to their `zig build-lib`/`zig build-exe` CLI invocation.
We are closer than ever to replacing LLVM's linker wasm-ld with our in-house linker. The last feature to implement for statically built WebAssembly modules is garbage collection, which ensures unreferenced symbols are removed from the final binary, keeping binaries small on disk. Once implemented, we can make the in-house linker the default when building a WebAssembly module, gather feedback, and fix any bugs that haven't been found yet. We can then start working on other features such as dynamic-linking support and future proposals.
Additionally:

- `zig test behavior.zig` now works and is debuggable in a debugger
- decoupled `Dwarf.Atom` from linkers' `Atom`s

Library path resolution is now handled by the Zig frontend rather than the linker (LLD). Some compiler flags are introduced to control this behavior.
These arguments are stateful: they affect all subsequent libraries linked by name, such as by
the flags -l
, -weak-l
, and -needed-l
.
Error reporting for failure to find a system library is improved:
Previously, the Build System exposed -search_paths_first
and
-search_dylibs_first
from the zig build
command, which had the ability
to affect all libraries. Now, the build script instead explicitly chooses the search strategy
and preferred link mode for each library independently.
Full list of the 711 bug reports closed during this release cycle.
Many bugs were both introduced and resolved within this release cycle. Most bug fixes are omitted from these release notes for the sake of brevity.
Zig has known bugs and even some miscompilations.
Zig is immature. Even with Zig 0.11.0, working on a non-trivial project using Zig will likely require participating in the development process.
When Zig reaches 1.0.0, Tier 1 Support will gain a bug policy as an additional requirement.
A 0.11.1 release is planned. Please test your projects against 0.11.0 and report any problems on the issue tracker so that we can deliver a stable 0.11.1 release.
To be announced next week...
- `--test-runner` CLI option, or the `std.Build.test_runner_path` field (#6621).

This release of Zig upgrades to LLVM 16.0.6.
During this release cycle, it has become a goal of the Zig project to eventually eliminate all dependencies on LLVM, LLD, and Clang libraries. There will still be an LLVM Backend, however it will directly output bitcode files rather than using LLVM C++ APIs.
Zig ships with the source code to musl. When the musl C ABI is selected, Zig builds static musl from source for the selected target. Zig also supports targeting dynamically linked musl which is useful for Linux distributions that use it as their system libc, such as Alpine Linux.
This release upgrades from v1.2.3 to v1.2.4.
Unfortunately, glibc is still stuck on 2.34. Users will need to wait until 0.12.0 for a glibc upgrade.
The only change:
Unfortunately, mingw-w64 is still stuck on 10.0.0. Users will need to wait until 0.12.0 for a mingw-w64 upgrade.
The only change:
Zig's wasi-libc is updated to 3189cd1ceec8771e8f27faab58ad05d4d6c369ef (#15817)
compiler-rt is the library that provides, for example, 64-bit integer multiplication for 32-bit architectures which do not have a machine code instruction for it. The GNU toolchain calls this library libgcc.
Unlike most compilers, which depend on a binary build of compiler-rt being installed alongside the compiler, Zig builds compiler-rt from source, lazily, for the target platform. It avoids repeating this work unnecessarily via the Cache System.
This release saw some improvements to Zig's compiler-rt implementation:
- `__udivei4` and `__umodei4` for dividing and formatting arbitrarily large unsigned integers (#14023).
- `__ashlsi3`, `__ashrsi3`, `__lshrsi3` for libgcc symbol compatibility.
- `__divmodti4` for libgcc symbol compatibility (#14608).
- `__powihf2`, `__powisf2`, `__powidf2`, `__powitf2`, `__powixf2`.
- `f16` ABI on macOS with LLVM 16.
- `__fixkfti`, `__fixunskfti`, `__floattikf`, `__negkf2`, `__mulkc3`, `__divkc3`, and `__powikf2` for PowerPC (#16057).

When the following is specified,
the resulting relocatable object file will have the compiler-rt unconditionally embedded inside:
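The original flag example was lost in extraction; presumably it is `-fcompiler-rt`, which forces compiler-rt symbols into the output (hedged reconstruction):

```shell
# Embed compiler-rt unconditionally in the relocatable object (illustrative):
zig build-obj foo.zig -fcompiler-rt
```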
zig cc
is Zig's drop-in C compiler tool. Enhancements in this release:
- `-z` stack-size arguments.
- `-install_name` and `-undefined`.
- `-undefined error` (#14046).
- `-x` to override the language.
- `-r` (#11683).
- `-###` (dry run) (#7170).
- `-l :path/to/lib.so` (#15743).
- `-version-script`.

This feature is covered by our Bug Stability Program.
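For instance (an illustrative invocation, not from the release notes):

```shell
# Drop-in C compiler, here cross-compiling for a musl Linux target:
zig cc -o hello hello.c -target riscv64-linux-musl
```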
Before, zig cc
, when confronted with a linker argument it did
not understand, would skip the flag and emit a warning.
This caused headaches for people that build third-party software. Zig would seemingly build and link the final executable, only to have it segfault when executed.
If there are linker warnings when compiling software, the first step is to add support for the arguments the linker is complaining about, and only then file issues. If Zig "successfully" (i.e. status code 0) compiles a binary, there is instead a tendency to blame "Zig doing something weird". Adding the unsupported arguments is straightforward; see #11679, #11875, #11874 for examples.
With Zig 0.11.0, unrecognized linker arguments are hard errors.
zig c++
is equivalent to zig cc with an added -lc++
parameter, but I made a separate heading here because I realized that some people are
not aware that Zig supports compiling C++ code and providing libc++ too!
Cross-compiling too, of course:
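An illustrative cross-compilation invocation:

```shell
# Compile C++ for a foreign target, with libc++ provided by Zig:
zig c++ -o hello hello.cpp -target aarch64-linux-gnu
```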
One thing that trips people up when they use this feature is that the C++ ABI is not stable across compilers, so always remember the rule: You must use the same C++ compiler to compile all your objects and static libraries. This is an unfortunate limitation of C++ which Zig can never fix.
- `@"_"` to `_` (#15617).

This adds new behavior to `zig fmt` which normalizes (renders a canonical form of) quoted identifiers like `@"hi_mom"` to some extent. This can make codebases more consistent and searchable.

To avoid making the details of Unicode and UTF-8 dependencies of the Zig language, only bytes in the ASCII range are interpreted and normalized. Besides avoiding complexity, this means invalid UTF-8 strings cannot break `zig fmt`.
Both the tokenizer and the new formatting logic may overlook certain errors in quoted identifiers, such as nonsense escape sequences like `\m`. For now, those are ignored and we defer to existing later analysis to catch them.
This change is not expected to break any existing code.
Below, "verbatim" means "as it was in the original source"; in other words, not altered.
- `\x` and `\u` escapes are processed.
- Contents are rendered following `formatEscapes` rules: either literally or `\x`-escaped as needed.
- If the identifier is not a valid bare identifier (matching `[A-Za-z_][A-Za-z0-9_]*`), it remains quoted.
- If the identifier is a primitive or keyword (`i33`, `void`, `type`, `null`, `_`, etc.), it is rendered unquoted if the syntactical context allows it (field access, container member), otherwise it remains quoted. (#166)
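A sketch of the normalization as described above (illustrative identifiers):

```zig
// Before `zig fmt`:
const @"hello_world" = 42; // quoted, but a valid bare identifier
const @"while" = 1; // keyword: context does not allow unquoting here

// After `zig fmt`:
const hello_world = 42; // rendered unquoted
const @"while" = 1; // unchanged
```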
This is a new subcommand added in this release. Functionality is limited, but we can add features as needed. This subcommand has no dependency on LLVM.
The major themes of the 0.12.0 release cycle will be language changes, compilation speed, and package management.
Some upcoming milestones we will be working towards in the 0.12.0 release cycle:
Here are the steps for Zig to reach 1.0:
If you want more of a sense of the direction Zig is heading, you can look at the set of accepted proposals.
Here are all the people who landed at least one contribution into this release:
Special thanks to those who sponsor Zig. Because of recurring donations, Zig is driven by the open source community, rather than the goal of making profit. In particular, these fine folks sponsor Zig for $50/month or more: