01:18 < bridge> The quads HD bug is either a bug for not calling 'doRender' or this function missing the HD check for quads
01:22 < bridge> I am on vacation, it has to wait until next week
06:52 < ws-client> @Assa you can't use work and vacation as an excuse to not work on ddnet. If you keep slacking around like this you will not get a raise.
07:04 < bridge> I'm granting assa a 300% raise
07:22 < bridge> morning
07:50 < bridge> gm
10:26 < bridge> https://discord.com/channels/252358080522747904/293493549758939136/1386768846723219506
10:26 < bridge> currently I am both tbh xD
10:33 < bridge> morning bug writers and others
10:38 < bridge> gm
11:05 < bridge> ChillerDragon: got a fix for `2025-07-05 11:04:44 I mysql: can't free last result (free_result:stmt:5025): Statement has no result set` ?
11:05 < bridge> iirc you had a similar issue
11:05 < bridge> meron
11:38 < ws-client> @melon nope :c
11:39 < ws-client> have you tried using postgres?
11:39 < bridge> not for that usecase
11:39 < bridge> and i also don't want to migrate everything
11:40 < bridge> postgres is too powerful
11:45 < bridge> but isn’t ddnet just fixed to use mysql
11:47 < bridge> Fixed in the sense that you'd have to implement your own `IDbConnection` if you don't want to use MySQL or sqlite
11:52 < bridge> materialize backend is a funny idea… could fix db sync issues
11:55 < bridge> <0xdeen> I had it working for /maps/, but it would require a server with a huge amount of RAM
11:55 < bridge> <0xdeen> 256-512 GB I guess
11:55 < bridge> <0xdeen> because with 128 GB it was swapping too much
11:56 < bridge> Yeah that is a lot
11:56 < bridge> <0xdeen> And the most finicky part is that we are on MariaDB, and Materialize can only ingest from MySQL
11:56 < bridge> <0xdeen> and MySQL had way worse performance for the queries we run from the game server
11:59 < bridge> rip. idk how far the two have diverged at this point but i wonder what stops MariaDB compatibility
12:00 < bridge> <0xdeen> MySQL has implemented a new replication protocol since they split, and it's the one Materialize supports
12:01 < bridge> ah
12:02 < bridge> <0xdeen> You'd need to support these mysql args in mariadb:
12:02 < bridge> <0xdeen> ```
12:02 < bridge> <0xdeen> "--log-bin=mysql-bin",
12:02 < bridge> <0xdeen> "--gtid_mode=ON",
12:03 < bridge> <0xdeen> "--enforce_gtid_consistency=ON",
12:03 < bridge> <0xdeen> "--binlog-format=row",
12:03 < bridge> <0xdeen> "--log-slave-updates",
12:03 < bridge> <0xdeen> "--binlog-row-image=full",
12:03 < bridge> <0xdeen> ```
12:03 < bridge> <0xdeen> (mariadb has a gtid mode, but it's an independent implementation, so totally different)
12:03 < bridge> <0xdeen> So you have to use Debezium and Kafka instead, which adds a bunch of extra complexity I don't want to deal with
12:05 < bridge> @0xdeen could you retest
12:07 < bridge> <0xdeen> Running now
12:09 < bridge> <0xdeen> ```
12:09 < bridge> <0xdeen> #23 26.76 /ddnet-builder/ddnet-source/src/engine/client/notifications.cpp:13:10: fatal error: 'winrt/Windows.Data.Xml.Dom.h' file not found
12:09 < bridge> <0xdeen> #23 26.76    13 | #include <winrt/Windows.Data.Xml.Dom.h>
12:09 < bridge> <0xdeen> #23 26.76       |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
12:09 < bridge> <0xdeen> #23 26.76 1 error generated.
12:09 < bridge> <0xdeen> ```
12:09 < bridge> <0xdeen> For win-arm64
12:10 < bridge> yeah kebs i told u mingw isn’t gonna have winrt headers
12:11 < bridge> :tear:
12:11 < bridge> ill rewrite it a 3rd time to not use winrt i guess
12:11 < bridge> rip
12:12 < bridge> Is that even possible for toast notifications?
12:12 < bridge> What if you'd build https://github.com/microsoft/cppwinrt with MinGW? :justatest:
12:12 < bridge> the wintoast lib uses wrl and imports most stuff
12:12 < bridge> would be best to check if the wintoast lib compiles
12:13 < bridge> Lots of MSVC-isms like macros and typedefs that wouldn’t work. I also believe there are C++ runtime expectations
12:14 < bridge> Seems possible to do it without including winrt headers
12:14 < bridge> The terrible namespaces-separated-by-underscore C APIs would work if you had the header I think
12:14 < bridge> And yes, u can load it at runtime with ole
12:15 < bridge> i may be misinformed
12:15 < bridge> https://packages.msys2.org/base/mingw-w64-cppwinrt try it 😕
12:16 < bridge> ah yea didn't roby try with mingw and it worked
12:16 < bridge> But even WinToast contains the line `#include <Windows.h>`, which doesn't compile due to case sensitive filenames
12:16 < bridge> but ig deen doesn't have the headers
12:16 < bridge> skeptical that’s in arch repos but the release scripts environment could use attention anyway
12:16 < bridge> this win 7 thing smells like fake news to me
12:17 < bridge> tbh it's the first time i see Windows.h not compile
12:17 < bridge> or a temporary setback
12:17 < bridge> thought it's exactly the same
12:17 < bridge> so many tutorials use the uppercase version
12:17 < bridge> Because most build Windows applications on Windows
12:18 < bridge> The real filename is `windows.h`
12:19 < bridge> (WinToast does this incorrectly for several Windows includes)
12:20 < bridge> lmfao
12:20 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1391000711642349568/image.png?ex=686a4ddd&is=6868fc5d&hm=064d61786317ab6e56d9b67427061cd752bb293b30db842f260ca7efde8a5931&
12:20 < bridge> nope 😂
12:20 < bridge> C:\Program Files (x86)\Windows Kits\10\Include\10.0.26100.0\um
12:20 < bridge> MinGW is lower case
12:20 < bridge> on deen’s it’s prob windows
12:20 < bridge> yea
12:21 < bridge> it’s different headers
12:21 < bridge> they gotta reimplement those things by hand
12:21 < bridge> They only made it confusing by changing the capitalization though, or what was the reason for that?
12:22 < bridge> it was always uppercase
12:22 < bridge> But why did MinGW change it?
12:22 < bridge> They use the correct capitalization for the winrt headers
12:23 < bridge> I checked and they also come from mingw, so it should be possible to compile them with mingw if they are available
12:23 < bridge> that specific header has probably been around since mingw pre mingw-w64, which is OLD
12:23 < bridge> winrt is way newer
12:24 < bridge> we got c++20 merged in already right
12:24 < bridge> yes
12:24 < bridge> good
12:24 < bridge> C++/WinRT requires C++20 for its coroutine support. (C++/WinRT officially supports C++17 but only with MSVC-specific coroutine extensions.)
12:25 < bridge> i have got to stop staying up so late
12:31 < bridge> There probably is a way to do toast notifications without winrt. But I doubt it's well supported
12:42 < bridge> why don't we use ci for builds?
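A minimal sketch of the include-casing point discussed above, assuming a MinGW cross-compile on a case-sensitive filesystem; the lowercase spelling is the one MinGW actually ships:

```cpp
// MinGW installs the Win32 umbrella header as lowercase "windows.h", so this
// spelling resolves both there and with MSVC (whose filesystem ignores case).
#include <windows.h>

// #include <Windows.h> // only resolves where the filesystem is
//                      // case-insensitive, e.g. the Windows SDK on Windows
```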
12:46 < bridge> macOS CI artifacts aren't even runnable AFAIK
12:48 < bridge> We could, but someone needs to develop a release workflow that builds all arches and OSes properly and packages it all properly
12:58 < bridge> ```diff
12:58 < bridge> - SLabelProperties Props;
12:58 < bridge> - Props.m_MaxWidth = View.w;
12:58 < bridge> - Ui()->DoLabel(&View, aMessage, 10.0f, TEXTALIGN_ML, Props);
12:58 < bridge> + Ui()->DoLabel(&View, aMessage, 10.0f, TEXTALIGN_ML, {.m_MaxWidth = View.w});
12:58 < bridge> ```
12:58 < bridge> C++20 :poggers:
13:02 < bridge> 😂
13:40 < bridge> #10446 might be a good first issue :3 so far nobody said anything about it
13:40 < bridge> https://github.com/ddnet/ddnet/issues/10446
15:08 < bridge> pr time?
15:08 < bridge> WAIT WE CAN FINALLY DO THAT
15:08 < bridge> WOOO
15:09 < bridge> when address of temporary :P
15:09 < bridge> (i think you're meant to use & or by value, but we still have a lot of pointers as args for no reason)
15:10 < bridge> Teeworlds didn't use references basically
15:10 < bridge> So everything was passed by pointer
15:10 < bridge> when std::span instead of char array and size
15:11 < bridge> std::span is a bit special
15:11 < bridge> it can't do everything a char* and size_t can do
15:11 < bridge> why?
15:12 < bridge> wait, can u?
15:13 < bridge> it has .data and .size
15:13 < bridge> to work like char*/size
15:14 < bridge> having a none option is either std::nullopt or size=0, which is something we would have to decide
15:14 < bridge> (i prefer size = 0)
15:14 < bridge> span doesn't own the memory, so you still have to store it somewhere or it gets deallocated
15:14 < bridge> which is just as unsafe as char*
15:15 < bridge> yes it doesn't own the memory
15:15 < bridge> the benefit is not passing the size all the time, and it's compatible with c++ algorithms
15:16 < bridge> it's possible to use c++ algos with char*
15:16 < bridge> you just have to write ur own function
15:16 < bridge> to have it the other way around you would need to rewrite everything
15:16 < bridge> not really rewrite
15:17 < bridge> it's compatible
15:17 < bridge> you would have to add .data and .size to everything
15:17 < bridge> could*
15:17 < bridge> if you want a raw pointer then yes, .data
15:18 < bridge> there's a lot of raw string manipulation
15:18 < bridge> it's just as verbose as char*/size, i don't think there's much benefit
15:19 < bridge> we could have our own string class which inherits a span and manages the mem with unique ptr
15:19 < bridge> imo it's better than array+size
16:07 < bridge> It's new, young people like new
16:24 < ws-client> @learath2 @robyt3 /rank does not work on ddnet zCatch 176.9.114.238:8400
16:24 < bridge> I didn't know zCatch had ranks
16:24 < ws-client> ddnet admin be like
16:25 < bridge> Chiller have you tested #10472 ?
16:25 < bridge> https://github.com/ddnet/ddnet/pull/10472
16:25 < ws-client> @learath2 yes as you can see by the checked checkbox
16:25 < bridge> Am ddnet admin, not zcatch admin
16:25 < ws-client> it's hosted by ddnet @learath2
16:42 < IGROK12121212> hi
16:53 < bridge> @uwucon
18:52 < bridge> ddnet admin be like: what is zCatch
19:10 < bridge> anyone else have weird moments where players just randomly move without doing anything
19:50 < ws-client> @Jupstar ✪ send new laptop pls, i was just stuck for like 20 seconds on the uploading-map-to-GPU screen
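A minimal sketch of the std::span trade-off from the 15:10-15:19 exchange above; `ContainsByte` and `ContainsByteRaw` are hypothetical helpers for illustration, not existing DDNet functions:

```cpp
#include <algorithm>
#include <cstddef>
#include <span>

// std::span bundles pointer and length, so callers can't pass a mismatched
// size, and the standard algorithms work on it directly.
static bool ContainsByte(std::span<const char> Data, char Needle)
{
	return std::find(Data.begin(), Data.end(), Needle) != Data.end();
}

// The raw-pointer equivalent works just as well but carries the size
// separately, which is the verbosity being discussed.
static bool ContainsByteRaw(const char *pData, size_t Size, char Needle)
{
	return std::find(pData, pData + Size, Needle) != pData + Size;
}
```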
20:21 < bridge> I still don't understand what this tries to solve. We can still kill some tees in a team and finish the rest. What makes crossing the startline different?
20:21 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1391121814079410176/d4ff69832be71534.png?ex=686abea6&is=68696d26&hm=68c4c155249e192f9e90b64a3d0a607396c20bc2aa52c4f998133ad4c394f5b5&
20:25 < bridge> @chillerdragon just hit the screen hard enuf and ull be out of the screen in no time
20:31 < bridge> The time
20:31 < bridge> K, send money
20:33 < bridge> It's like t0 skip but you can finish as a team
21:30 < ws-client> omg so annoying https://paste.zillyhuhn.com/Lg
21:31 < ws-client> i deleted the build dir and smh bro still wants c++17, it should be an obvious fix. Second time i have this shit
21:36 < bridge> This allows you to do t0-style runs in a team
21:36 < bridge> oh whoop Jupstar already said that
21:42 < bridge> Which is weird tho, we have team0mode now :justatest:
21:43 < bridge> melon
21:43 < bridge> yes and string_view for strings
21:43 < bridge> it's for t0-style runs while keeping teamrank
21:43 < bridge> meron
22:10 < ws-client> t0 mode gives team ranks? that sounds wrong
22:14 < bridge> @jupeyy_keks I want to benchmark my sprite rendering a little to figure out optimizations. When I generate example sprites, what should I look out for
22:14 < bridge> - ofc it matters if sprites are in the viewport, and if yes how much
22:14 < bridge> - does it matter if all sprites are the same (I guess no(?))
22:14 < bridge> - does it matter how much the sprites overlap? (no depth buffer etc)
22:41 < bridge> but like is there any map where t0 skip is useful when you don't hit the startline
23:01 < bridge> Only if u can skip the start line
23:04 < bridge> 1. Yes, but maybe the viewport check is too expensive
23:04 < bridge> 2. In your rendering probably not (with multi texture)
23:04 < bridge> 3. There was a cool optimization for non colliding primitives in earlier amd gpus but it got removed, probably bcs in 3d it's too rare and in 2d it doesn't matter
23:05 < bridge> Also non-standard
23:07 < bridge> ah, so the viewport check is an optional optimization on the gpu? so a viewport check is very different from sending zero pixels to the fragment shader?
23:08 < bridge> I thought u wanted to skip rendering some sprites
23:08 < bridge> But yes early discard
23:08 < bridge> Is what the gpu does
23:08 < bridge> no, I don't plan on doing that as I might use multiple camera angles
23:08 < bridge> hm I guess I can just make that an option in the benchmark
23:10 < bridge> Before we talk about different things: if you want a gpu-side discard, that is what the rasterizer usually does anyway
23:12 < bridge> in this case I want to be aware of gpu-side discard, to ensure that my new benchmark actually measures the stuff that I want to measure ^^
23:13 < bridge> (currently not discussing optimizations, but stuff to be aware of when benchmarking)
23:13 < bridge> Overdraw is not free, so I'd think overlap obv matters
23:14 < bridge> Early discard is a thing. Overlapping will most likely cost as much as no overlap
23:14 < bridge> I do want it to repeatedly draw each sprite, fully. do you think there will be some synchronization slowdown if I do this specifically, compared to each sprite being at a random position on the screen?
23:14 < bridge> So out of viewport might be cheaper
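A hypothetical sketch of how the example sprites from the 22:14 benchmarking question could be generated; the struct and function names are made up for illustration. Scattering centers over an area slightly larger than the viewport gives a mix of fully visible, partially overlapping and off-screen quads:

```cpp
#include <random>
#include <vector>

// Hypothetical benchmark scene: sprite centers scattered over an area larger
// than the viewport, so some quads are fully off-screen, some straddle the
// edge, and dense regions produce overdraw.
struct SSpritePos
{
	float m_X, m_Y;
};

static std::vector<SSpritePos> GenerateBenchmarkSprites(int Count, float ViewW, float ViewH, float Margin)
{
	std::mt19937 Rng(1337); // fixed seed so repeated runs are comparable
	std::uniform_real_distribution<float> DistX(-Margin, ViewW + Margin);
	std::uniform_real_distribution<float> DistY(-Margin, ViewH + Margin);

	std::vector<SSpritePos> vSprites;
	vSprites.reserve(Count);
	for(int i = 0; i < Count; i++)
		vSprites.push_back({DistX(Rng), DistY(Rng)});
	return vSprites;
}
```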
23:17 < bridge> I might have misunderstood something here I guess. Afaict partial overlaps should cost you, as you now have to do work for the same pixel more than once
23:17 < bridge> ah, you mean that pixels get overwritten, meaning that the original calculations for the pixels were obsolete?
23:18 < bridge> That is the case
23:18 < bridge> Yes, the gpu can't cull a partial quad
23:18 < bridge> I believe I understand your point, and that is what I want it to do ^^
23:19 < bridge> If I'd do such stuff on the cpu, I'd need to be aware of compiler optimizations, which might do crazy shit sometimes. That is why I'm cautious with the GPU stuff rn
23:19 < bridge> (Not like there is much that can be "optimized", you just want to avoid it if possible, but it's not always possible)
23:20 < bridge> @patiga but in theory the driver can be optimized to parallelize two separate primitives without any kind of barrier, because they write to different areas of the frame buffer.
23:20 < bridge> So it might still be faster to render those sprites, but it does not need to be
23:24 < bridge> What do you mean by "generate example sprites" btw?
23:24 < bridge> benchmarking my sprite rendering to measure if my optimizations actually improve it
23:24 < bridge> so far, I pretty much just hoped stuff gets better if I do certain stuff
23:25 < bridge> and I want to make sure
23:25 < bridge> And your sprites are just singular textured quads?
23:25 < bridge> yup
23:25 < bridge> Any instancing?
23:25 < bridge> no, though indexed
23:26 < bridge> two triangle primitives, from 4 vertices and 6 indices
23:26 < bridge> and many of those
23:30 < bridge> I'd guess just rendering lots of them with random positions is a fine benchmark here. I wouldn't bench the case where they are non-overlapping specifically. Culling the ones completely out of the viewport is afaict very cheap; having lots of them might have an effect but I think you'd find it insignificant. If you are not doing instanced rendering it shouldn't matter whether the sprites are the same or not. But it would be good to have a ben
23:33 < bridge> If you want to try culling on the cpu, I'd suggest the ancient single axis culling trick first. It's very cheap and might still lead to a performance improvement
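The "single axis culling trick" mentioned above could look roughly like this; the sprite layout and the visible x range are assumptions for illustration, not code from the renderer:

```cpp
#include <vector>

struct SSprite
{
	float m_X, m_Y; // center position in world space
	float m_HalfWidth, m_HalfHeight;
};

// Single-axis cull: only the x extent is tested against the visible range,
// which already rejects most off-screen sprites while staying very cheap.
static void CollectVisibleX(const std::vector<SSprite> &vSprites, float ScreenX0, float ScreenX1, std::vector<const SSprite *> &vpVisible)
{
	vpVisible.clear();
	for(const SSprite &Sprite : vSprites)
	{
		if(Sprite.m_X + Sprite.m_HalfWidth < ScreenX0 || Sprite.m_X - Sprite.m_HalfWidth > ScreenX1)
			continue; // entirely left or right of the viewport
		vpVisible.push_back(&Sprite);
	}
}
```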