Yay!! Again, some compiler decided to change template conversion operator rules.
And again, compilers won't agree with what the standard says or with each other, since changing the rules may break users. And yet they keep doing it (even in minor versions!) 🤦
This is weird. I am writing some #clang code for #sequencing analysis & one of the steps is a straightforward 2-bit packing of the DNA alphabet. The code is really simple: iterate over the sequence, switch/case over letters, and bitshift to get them in the right place. On my E-2697v4 Xeon, the coefficient of variation of the execution time is 15%; on my newer i7-11700 it's less than 2%, and on an Arm A53 it's around 9%. In all three processors, the variation goes down with increasing workload. What gives?
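The packing loop itself looks roughly like this (a minimal sketch, not the actual code: the encoding A=0, C=1, G=2, T=3 and the low-bits-first byte order are assumptions):

```c
#include <stdint.h>
#include <stddef.h>

/* Pack a DNA sequence into 2 bits per base, 4 bases per byte.
   Encoding (assumed): A=0, C=1, G=2, T=3; first base in the low bits. */
void pack2bit(const char *seq, size_t len, uint8_t *out)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t code;
        switch (seq[i]) {
        case 'A': case 'a': code = 0; break;
        case 'C': case 'c': code = 1; break;
        case 'G': case 'g': code = 2; break;
        case 'T': case 't': code = 3; break;
        default:            code = 0; break;   /* N etc.: arbitrary */
        }
        if (i % 4 == 0)
            out[i / 4] = 0;                    /* start a fresh byte */
        out[i / 4] |= (uint8_t)(code << (2 * (i % 4)));
    }
}
```

One neutral observation: fixed startup costs (page faults, frequency ramp-up) amortize over longer runs, which at least is consistent with the variation shrinking as the workload grows.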
The experience with #Copilot dramatically improved after I stopped asking it for novel code & started using it as an autocomplete for #Clang. Code for functions with different prototypes but the same functionality appears after I write the first one. This plays nice with variadic macros.
On the #cpp #cplusplus side, it can use the STL rather well, saving me from having to look things up. I've observed that it does better with higher-level languages (e.g. #perl or templated C++) than with C @Perl
@Perl An interesting application of these observations (at least for my beloved #perl) is that one can very quickly write large chunks of Inline #clang or #cplusplus code for @Perl. Again, it is important not to let it hallucinate (especially if you have a tendency to typedef the living daylights out of your code): just finish your function BY HAND and let it generate boilerplate to use it, variations with different signatures, and even XS glue to move data between C <-> Perl.
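The "finish the function by hand, generate the variations" workflow, in miniature. Everything below is a hypothetical illustration (names included): only the core function is the hand-written part; the alternate-signature variant and the variadic-macro dispatcher are exactly the kind of boilerplate that autocompletes well.

```c
#include <stddef.h>
#include <string.h>

/* Hand-written core: GC fraction of a DNA sequence of known length. */
double seq_gc_frac(const char *s, size_t n)
{
    size_t gc = 0;
    for (size_t i = 0; i < n; i++)
        if (s[i] == 'G' || s[i] == 'C' || s[i] == 'g' || s[i] == 'c')
            gc++;
    return n ? (double)gc / (double)n : 0.0;
}

/* Generated-style variant: same functionality, different prototype. */
double seq_gc_frac_z(const char *s)          /* NUL-terminated */
{
    return seq_gc_frac(s, strlen(s));
}

/* Variadic-macro front end: dispatch on argument count, so callers
   can write GC_FRAC(s) or GC_FRAC(s, n). */
#define GC_SELECT(_1, _2, NAME, ...) NAME
#define GC_FRAC(...) \
    GC_SELECT(__VA_ARGS__, seq_gc_frac, seq_gc_frac_z, 0)(__VA_ARGS__)
```

The trailing `0` in the selector keeps the inner `...` non-empty in the one-argument case, which some compilers are pedantic about.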
Reminder that the #programming languages useful for #applications may not be the same as those used for the #libraries the applications use, and this is just fine. E.g. performance often requires that one does not reinvent the wheel by recoding libraries written in #clang #cplusplus and #fortran just because they are, for whatever reason, resistant to using a proper #ffi (or an #IPC) #api.
The plot thickens. I ran 'top' for the two different builds, and there are some distinct differences. The slow clang instances spawned through Jenkins have high values for MSGSENT, MSGRECV and SYSMACH, while the fast build has very low numbers.
So it seems the slow build is doing a lot more Mach communication for some reason.
Each clang instance also shows "1/1" threads in the fast build and "2/1" in the slow one. No idea what's up with that.
Vacuum welding suggests to me that the universe tracks objects via the equivalent of \0 terminated arrays on a 3D heap, and I find this genuinely scary
Thinking about just switching over to clang and abandoning msvc entirely and seeing how it goes. I feel like it's going to make true cross platform dreams a lot more feasible moving forward.
Anyone had experience with developing native C++ apps in Clang for Windows in Visual Studio? Is this a terrible idea? How's the support?
I just finished building #LLVM + #Clang on my #RISCV dev board! The entire process, from cloning the repo to building, linking, and installing, was all done natively on RISC-V!!
It took about 19 hours of work and 16 hours of having the puny quad-core in-order RISC-V CPU pinned at 100%, but the thing actually built successfully and is compiling and running programs. Insanely impressive stuff.
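For anyone tempted to repeat this, a typical native build recipe looks something like the following (a sketch with common cmake options, not the exact invocation used here; capping parallel link jobs is the usual trick to keep a small board from running out of RAM while linking):

```shell
git clone --depth=1 https://github.com/llvm/llvm-project.git
cd llvm-project
cmake -S llvm -B build -G Ninja \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_ENABLE_PROJECTS="clang" \
    -DLLVM_TARGETS_TO_BUILD="RISCV" \
    -DLLVM_PARALLEL_LINK_JOBS=1
ninja -C build        # the 16-hours-at-100% part
sudo ninja -C build install
```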
> Fixed a possible crash during the trophy cutscene that could happen if the stadium did not have a scheduled match and was not associated with an owning club.
Languages that are involved in some sort of data analysis and processing (#sql, #clang /c++) are doing very well. Not sure what to make of #Python; are ppl in #AI seeing through to the reality that it's scripting over extremely performant C/C++, and that there are other languages that can glue just as well? #golang & #Julia are ⬆️
Calls to Go functions from C (and to C functions from Go) are expensive, so I'm improving FrankenPHP's performance by grouping them (which in this case involves a bit of "unsafe" magic)!
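The grouping idea in its simplest form (hypothetical names; FrankenPHP does this across the cgo boundary, but the pattern applies to any costly FFI): cross the boundary once with an array instead of once per element.

```c
#include <stddef.h>

/* Pretend each call to this function crosses an expensive FFI boundary. */
int transform_one(int x) { return x * 2; }

/* Per-item style: n boundary crossings. */
void transform_each(const int *in, int *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = transform_one(in[i]);
}

/* Batched style: one crossing; the loop runs on the far side. */
void transform_batch(const int *in, int *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * 2;
}
```

Presumably the "unsafe" magic is about handing raw pointers to the whole batch across the boundary without per-element copies.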