Tuesday, March 31, 2026

Monday, March 30, 2026

Sunday, March 29, 2026

Saturday, March 28, 2026

Friday, March 27, 2026

Statement from the C++ Alliance on WG21 Committee Meeting Support
The C++ Alliance is proud to support attendance at WG21 committee meetings. We believe that facilitating the attendance of domain experts produces better outcomes for C++ and for the broader ecosystem, and we are committed to making participation more accessible. We want to be unequivocally clear: the C++ Alliance does not, and never will, direct or compel attendees to vote in any particular way. Our support comes with no strings attached. Those who attend are free and encouraged to exercise their independent judgment on every proposal before the committee. The integrity of the WG21 standards process depends on the independence of its participants. We respect that process deeply, and any suggestion to the contrary does not reflect our values or our program. If you are interested in learning more about our attendance program, please reach out to us at info@cppalliance.org.📝The C++ Alliance

Thursday, March 26, 2026

Wednesday, March 25, 2026

Windows Store Deployment with windeployqt
Microsoft introduced the Windows Store with Windows 8 as a central place to download and update software. To place software into the Microsoft Store, developers must sign it digitally, and Microsoft reviews the software before it is published. The AppxManifest.xml file describes the packaging information for the Microsoft Store. The makeappx tool creates the appx installer, which is then signed with the signtool from the Windows SDK. With Qt 6.11, windeployqt gained extra command-line arguments to create an AppxManifest.xml: --appx and --appx-certificate.📝Qt Blog
Fast Remote Desktop (RDP) from macOS to Windows
I regularly use Windows’ Remote Desktop Protocol (RDP) to connect from a non-Windows client to a Windows host machine (e.g., from my MacBook Pro to my CAD/Gaming Desktop Tower PC) to access software otherwise not available on Linux or macOS (mostly CAD/eCAD software like SolidWorks or Altium Designer). Unfortunately, by default the Remote Desktop client application from Micro$oft has terrible performance issues, especially when connecting from macOS to Windows. However, with a little bit of tweaking on the host, and by switching from the Remote Desktop client (nowadays just called “Windows App”) to FreeRDP (or any alternative that uses FreeRDP under the hood), we can make the performance and visual fidelity bearable.
Changes on the host: make sure the host machine runs an updated Windows. Open the “Group Policy Editor” and navigate to: Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host. Under “Remote Session Environment”, configure the following policies:
- “Use hardware graphics adapter for all Remote Desktop Service sessions”: Enabled
- “Prioritize H.264/AVC 444 graphic mode for Remote Desktop Connections”: Disabled
- “Configure H.264/AVC hardware encoding for Remote Desktop Connections”: Enabled
Under “Connections”, configure the following policy:
- “Select RDP transport protocols”: Enabled, with “Use either UDP or TCP”
Then reboot the host machine.
Changes on the client: download and install Royal TSx for macOS, install the Remote Desktop plugin, and create a new connection. Make sure to set the following options. Under “Display Options”:
- Set Colors to High Color (15 Bit)
- Uncheck “Use full retina resolution”
- Set Scale Factor to 100%
- Set Desktop Size to Auto Expand
- Set Resize Mode to Smart Reconnect
Under “Performance”:
- Choose LAN as Connection Speed
- Uncheck everything except Graphics Pipeline and Font Smoothing
Configure the remaining settings as you prefer.
Now you can connect to a Remote Desktop session that has acceptable performance and doesn’t look like Godzilla vomited all over your screen.📝Arvids Blog

Tuesday, March 24, 2026

Monday, March 23, 2026

From error-handling to structured concurrency
How should we think about error-handling in concurrent programs? In single-threaded programs, we’ve mostly converged on a standard pattern, with a diverse zoo of implementations and concrete patterns. When an error occurs, it is propagated up the stack until we find a stack frame which is prepared to handle it. As we do so, we unwind the stack frames in-order, giving each frame the opportunity to clean up or destroy resources as appropriate.📝Posts on Made of Bugs
Everything old is new again: memory optimization
At this point in history, AI sociopaths have purchased all the world's RAM in order to run their copyright infringement factories at full blast. Thus the amount of memory in consumer computers and phones seems to be going down. After decades of not having to care about memory usage, reducing it has very much become a thing. Relevant questions to this state of things include a) is it really worth it and b) what sort of improvements are even possible. The answers to these depend on the task and data set at hand. Let's examine one such case. It might be a bit contrived, unrepresentative and unfair, but on the other hand it's the one I already had available. Suppose you have to write a script that opens a text file, parses it as UTF-8, splits it into words according to whitespace, counts the number of times each word appears and prints the words and counts in decreasing order (most common first). The Python baseline: this sounds like a job for Python. Indeed, an implementation takes fewer than 30 lines of code. Running it on a small text file [update: the repo's readme, which is 1.3k], peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess at how much memory a native code version of the same functionality would use. The native version: a fully native C++ version using Pystd requires 60 lines of code to implement the same thing. If you ignore the boilerplate, the core functionality fits in 20 lines. The steps needed are straightforward:
1. Mmap the input file to memory.
2. Validate that it is UTF-8.
3. Convert the raw data into a UTF-8 view.
4. Split the view into words lazily.
5. Compute the result into a hash table whose keys are string views, not strings.
The main advantage of this is that there are no string objects. The only dynamic memory allocations are for the hash table and the final vector used for sorting and printing. All text operations use string views, which are basically just a pointer + size. Peak memory consumption is ~100 kB in this implementation: only 7.7% of the amount of memory required by the Python version. Isn't this a bit unfair towards Python? In a way it is. The Python runtime has a hefty startup cost, but in return you get a lot of functionality for free. But if you don't need said functionality, things start looking very different. We can make this comparison even more unfair towards Python. Looking at the memory consumption graph, you'll quite easily see that 70 kB is used by the C++ runtime. It reserves a bunch of memory up front so that it can do stack unwinding and exception handling even when the process is out of memory. It should be possible to build this code without exception support, in which case the total memory usage would be a mere 21 kB. Such a version would yield a 98.4% reduction in memory usage.📝Nibble Stew
Understanding Safety Levels in Physical Units Libraries
Physical quantities and units libraries exist primarily to prevent errors at compile time. However, not all libraries provide the same level of safety. Some focus only on dimensional analysis and unit conversions, while others go further to prevent representation errors, semantic misuse of same-dimension quantities, and even errors in the mathematical structure of equations. This article explores six distinct safety levels that a comprehensive quantities and units library can provide. We'll examine each level in detail with practical examples, then compare how leading C++ libraries and units libraries from other languages perform across these safety dimensions. Finally, we'll analyze the performance and memory costs associated with different approaches, helping you understand the trade-offs between safety guarantees and runtime efficiency. We'll pay particular attention to the upper safety levels—especially quantity kind safety (distinguishing dimensionally equivalent concepts such as work vs. torque, or Hz vs. Bq) and quantity safety (enforcing correct quantity hierarchies and scalar/vector/tensor mathematical rules)—which are well-established concepts in metrology and physics, yet remain widely overlooked in the C++ ecosystem. Most units library authors and users simply do not realize these guarantees are achievable, or how much they matter in practice. These levels go well beyond dimensional analysis, preventing subtle semantic errors that unit conversions alone cannot catch, and are essential for realizing truly strongly-typed numerics in C++.📝mp-units
What's new in zcov - March 2026
Graph improvements: I wanted zcov to quickly and naturally show the coverage status of a piece of code. Most of the time we’re interested in missing coverage, as that is where we need to do work: adding a test, improving the spec, or figuring out whether there is a mismatch between the code as-is and the goals we’re trying to achieve. zcov can now colour-code the blocks to show this, using a common red/yellow/green system. This works quite well; the red blocks immediately stand out. In fact, differently coloured blocks stand out, so if a function is mostly covered, the missing coverage stands out, and vice versa. It comes with two modes, block and edge. The block mode is the most basic one - is the block visited or not - a direct improvement of the traditional line coverage red/green highlight. Here’s an example from GNU coreutils. The edge mode is slightly more sophisticated and will colour blocks green when all outgoing edges are taken, yellow when some are taken, and red when none are taken. This is the same function as before, but in edge mode, which colours block 2 yellow. What’s missing is a way to neatly show which edges are covered and which are missing. I tried colouring the edges using the same colour schema, but they actually became a lot harder to read that way. Colouring for MC/DC is not yet implemented, but shouldn’t be too far off. The colours used for missing/covered/partial will be configurable, too, both to mesh well with colour schemes and to improve accessibility (colour blindness). I have also added labels to the edges of conditions so it’s easy to tell which edge (and outcome) is the true one and which is the false one. This makes the graph a lot easier to understand, as the distance between the code in blocks makes it much harder to recognize which successor is the then and which the else.
Sequence numbers in prime paths: zcov has been able to highlight a single path for a long time by making the block borders and edges thicker, but it was very difficult to actually see where paths began, especially in tight loops. A prime path is a sequence of blocks, not simply a collection of blocks. zcov now paints the sequence number in the block header when the path is selected. This is a necessity when interrogating paths in loops, as it would otherwise be impossible to tell which of the rotated paths we’re looking at. Here are two examples of a rotated path in the or function in GNU coreutils; the first path goes from block 4 through 4, the other from block 5 through 5. The list itself has been simplified to show the path number and the first .. last block. The actual sequence is hard to read out of a list, sequences can be very long, and long sequences are much easier to understand in graph form. Control panel for the function/graph view: there is a new info and control panel for the function/graph view. This new panel makes it easy to get an overview of which function we’re looking at, presenting its name (both mangled and demangled), the source file it is located in, the coverage as nice progress bars, and a control for selecting paths for highlighting. By default the uncovered prime paths are shown, as they are the most likely to be interesting. When working towards coverage, this will be the shrinking set, and could even function as a todo list. Two more path filters are currently supported, covered and all, as seen in this picture. Function filtering: the function list was one of the first features I added to zcov, and I have now added a filter-and-search box to it. As shown here, it uses a regular expression search/filter to only show functions matching a pattern. This is a requirement for large projects. As a data point, GNU coreutils has more than 2500 functions, and it’s not really all that large.
Closing remarks: these features were made possible by an overhaul and, for the most part, a simplification of the zcov internals. The core engine is a lot more capable now than just a few months ago, and still improving. I plan to add quite a few more features and polish over the coming months. I have also updated the software page with the new screenshots.📝patch – Blog

Sunday, March 22, 2026