00:09 < bridge> money
00:09 < bridge> !sh echo
00:09 < chillerbot12> bash moved to chat.zillyhuhn.com #off-topic
00:10 < bridge> chillerdragon: why does chillerbot have 12 in the name?
00:10 < bridge> is it an easter egg
01:17 < bridge> yo chillerdragon you have a drivers license right?
01:27 < bridge> <.lepinpin> Good idea to recode ddnet in ddnet with quads x) ?
01:47 < bridge> the ddnet map format is not turing complete afaik, so probably no
01:48 < bridge> <.lepinpin> Okey so bad apple ?
01:48 < bridge> <.lepinpin> Or maybe doom
01:49 < bridge> bad apple was already made. doom isn't possible either
02:23 < bridge> with music?
02:23 < bridge> with server mods you should be able to make pretty much anything
02:23 < bridge> (yeah obviously with music u can play random sounds)
02:25 < bridge> https://www.youtube.com/watch?v=7WPTr4meLIY
02:25 < bridge> this was like a week ago how can you not remember xD
02:25 < bridge> owo
02:25 < bridge> no map download
02:27 < bridge> @essigautomat solly is doubting you
02:46 < bridge> no i just wna play it on my compooper x-x
08:04 < bridge> I want to release it with recorder tiles as a fun map in the future
08:07 < bridge> you can almost do that without client changes, but good luck syncing anything
08:47 < bridge> maybe I can send you the demo tho 🤔 the demo is flickering and not as performant as ingame rendering 🙁 and it desyncs if you stop it
09:08 < bridge> btw I am not sure that doom is impossible 😉
09:51 < bridge> i think ill restart edos, i have tech debt from not knowing common patterns from the start, i think i can do a way better job now
09:52 < bridge> so next you are working on edos reloaded
09:53 < bridge> I like doing this with my projects, first attempt is more like a playground where I figure things out
10:18 < bridge> I love the new blazingly fast loading times of nightly :brownbear:
11:03 < bridge> yeah, im also starting with limine now
11:04 < bridge> https://github.com/jasondyoungberg/limine-rust-template
11:05 < bridge> also on september 10 i have time off for 10 days so im gonna use it
11:05 < bridge> looking forward to my time off
11:05 < bridge> :justatest:
11:07 < bridge> nice
11:14 < bridge> I have vacation in 3 days, looking forward to it, I need it desperately
11:23 < bridge> I should read up on this limine thing
11:48 < ws-client> **** @teero777 my driver license status may or may not be confidential. What do you need?
11:49 < ws-client> **** @kollpotato irc increments a number after the name if it is already used (in the same network i think) so i assume my bot bugged and reconnected or something like that. I was worried when it reached 100
11:49 < ws-client> **** @milkeeycat I could meet in game now
11:52 < ws-client> **** @learath2 yes i want to sit down! when do we do it? how do we do it?
11:58 < bridge> y0.1k
11:58 < bridge> @chillerdragon
11:58 < bridge> same thing happened with browsers
11:58 < bridge> `"Chromium";v="139", "Not;A=Brand";v="99"`
12:04 < ws-client> **** same as what
12:05 < bridge> chiller for some reason ur bot constantly reconnects after some time, u can see that in quakenet webchat
12:06 < bridge> i assumed it was irc stuff but when i made my bot it was stable and didnt reconnect at all
12:06 < bridge> so javascript L
12:06 < bridge> or ur raspi has unstable internet
12:14 < ws-client> **** yea i am aware
12:14 < ws-client> **** no idea how to fix it
12:15 < ws-client> **** my webchat irc client is also unstable and it runs on my vps
12:15 < bridge> maybe the irc lib forgets to do ping pongs
12:15 < bridge> but how does it reconnect by itself then
12:16 < bridge> chillerdragon: I still don't like builder functions :/ imo default value consts would be enough
12:18 < bridge> builder pattern sucks
12:18 < bridge> just define everything beforehand smh
12:18 < ws-client> **** @milkeeycat yea default value consts could also work havent thought about it. I think both work the same for me.
12:18 < ws-client> **** go send pr
12:19 < ws-client> **** and replace the existing builder with default const
12:21 < bridge> anyone have experience getting windows runners to run
12:21 < bridge> chillerdragon: did you think about consts for messages or specific fields? (I was talking about specific fields)
12:22 < bridge> whats the issue w the runners
12:22 < bridge> just waiting forever
12:22 < bridge> yes, delete the idea, use a linux runner with MinGW instead and test with wine
12:22 < bridge> cant get it to compile
12:22 < bridge> or well cant get rust to accept the .dll
12:23 < bridge> https://github.com/SollyBunny/libccar/actions/runs/17263478129/job/48990122397
12:23 < bridge> (teeros thing)
12:24 < ws-client> **** @milkeeycat oh i thought you meant the entire message. Imo thats the most convenient and the one that actually saves the user time, lines, bugs and typing
12:26 < bridge> ccar.lib is the only thing there with no absolute path
12:26 < bridge> idk what generates that build command since you're using msvc but double check
12:27 < ws-client> **** not ddnet related
12:27 < ws-client> **** ban
12:28 < bridge> that applies to like 50% of the things in here
12:38 < bridge> I already suggested having a developer-off-topic
12:39 < bridge> but bad-apple is on topic 😄
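(An aside on the builder-vs-default-consts exchange above: a minimal C++ sketch of the two styles being argued. The message type and field names are invented for illustration; the real ddnet_protocol library is C, and this is not its API.)

```cpp
#include <cstdint>

// Hypothetical chat-message struct, invented for illustration only.
struct CMsgChat
{
	// default-value style: the defaults live on the type itself
	int32_t m_Team = 0;
	int32_t m_ClientId = -1;
	const char *m_pMessage = "";
};

// builder style: the library ships a helper per common construction path
CMsgChat BuildChatMessage(const char *pMessage)
{
	CMsgChat Msg;
	Msg.m_pMessage = pMessage;
	return Msg;
}

int main()
{
	// with defaults on the type, callers override only what differs,
	// so no extra builder API surface is needed
	CMsgChat FromBuilder = BuildChatMessage("gg");
	CMsgChat FromDefaults{.m_ClientId = 7, .m_pMessage = "gg"};
	(void)FromBuilder;
	(void)FromDefaults;
	return 0;
}
```

Both give the caller one-line construction; the difference is that the builder is extra API the library has to maintain, while defaults on the type are not.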
12:46 < bridge> second time destroying @chillerdragon in the PR comments 😄
13:04 < bridge> chillerdragon: lerato said everything developer related can be talked about here
13:15 < bridge> idk what i did
13:15 < bridge> but it works now
13:15 < ws-client> **** @Assa destroy?
^^
13:16 < bridge> somewhat
13:16 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1410221307441713242/image.png?ex=68b03a70&is=68aee8f0&hm=2a98ddae2d1cf4210221a4cd71b55260a5009b6a7b363b99cc8976ee81f6a7e2&
13:16 < bridge> delete pChillerDragon;
13:16 < ws-client> **** pff
13:16 < bridge> xD
13:16 < ws-client> **** that might cause a use after free
13:18 < bridge> ||_as if you ever get used_|| you make it too easy to answer like that
13:18 < bridge> `free(pChillerDragon); pChillerDragon = NULL;`
13:18 < bridge> :)
13:18 < ws-client> **** bing bong
13:19 < bridge> sollybunny1 went up in flames
13:19 < bridge> `pChillerDragon = std::make_shared(EDevRole::MAINTAINER);`
13:21 < bridge> imagine chillerdragon is an irc ai made by admins for developers not to feel lonely
13:48 < bridge> Chillerdragon AI manifested into existence from the collective consciousness of Teeworlds players
14:02 < bridge> limine is a bootloader/menu, it supports grub protocols (multiboot 1 and 2) but it also has its own modern protocol. the thing is there is a rust crate for the limine protocol (aka limine boot protocol) which makes it a pleasure to work with in rust, im sure C has the same, the crate is rly easy
14:03 < bridge> https://en.wikipedia.org/wiki/Intel_5-level_paging
14:03 < bridge> i wonder when this is used
14:05 < bridge> so that's why he is all knowing and all powerful
14:34 < bridge> setting up windows dual boot in limine was easy as hell
14:34 < bridge> Also I could fill my screen with screams and agony when selecting the windows entry
14:34 < bridge> :owo:
14:43 < bridge> :owo:
14:45 < bridge> > limine
14:45 < bridge> noted down ✍️
14:56 < bridge> oh that is ugly doing this retroactively, you need to install ubuntu on other disks, then run limine and find windows
14:57 < bridge> oh god, looking forward to my 6 disk dual boot setup :pepeW:
14:57 < bridge> huh
14:57 < bridge> I have windows and Linux on the same disk
14:57 < bridge> :justatest:
14:58 < bridge> yeah this is possible, you don't need physical disks don't worry
14:58 < bridge> it's just my HDD/SSD/NVMESSD is ... spicy
15:02 < bridge> limine wont find windows for u
15:02 < bridge> limine is just a config file
15:02 < bridge> https://wiki.gentoo.org/wiki/Limine#Dual-booting_with_Windows_in_Limine_.28UEFI.29
15:03 < bridge> ```
15:03 < bridge> /Windows
15:03 < bridge> //Windows Example
15:03 < bridge> protocol: efi
15:03 < bridge> # This tells the efi protocol to call the specified EFI file and load it.
15:03 < bridge> path: boot():/EFI/Microsoft/bootmgfw.efi
15:03 < bridge> comment: Boot Microsoft Windows!
15:03 < bridge> ```
15:04 < bridge> I didn't even have to "find" windows
15:04 < bridge> I just installed the bootloader files using a live USB :justatest:
15:07 < bridge> have a cat
15:07 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1410249424575397992/PXL_20250827_130659112.jpg?ex=68b0549f&is=68af031f&hm=033eb678001945d2c2b31a3f873c9fc104ebb3042044d1a9ed307207bc40aeb2&
15:08 < bridge> sir, it's #developer channel 🤓
15:09 < bridge> silly cat
15:10 < bridge> my cat codes in rust
15:14 < bridge> https://milkv.io/jupiter
15:14 < bridge> @tsfreddie buy one for me pls
15:15 < bridge> anyone know big riscv boards with sensible ram?
15:15 < bridge> ohi can ship with the latest option
15:15 < bridge> €70,95
15:15 < bridge> 8gb ram riscv
15:15 < bridge> not bad
15:15 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1410251450059198484/K1_2048x2048.png?ex=68b05682&is=68af0502&hm=89de0a1f49ff742432f61761dcba699f278e8b4567f8a51b0ba31e73a57c8b78&
15:24 < bridge> Jupstars alter ego mentioned
15:25 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1410253853990981692/Screenshot_20250827-152505.png?ex=68b058c0&is=68af0740&hm=885064ab2d417559420bcba5e744273a14685b3060ac7045b49b47ff0b9fe730&
15:36 < bridge> I wonder if we would improve zooming out on maps, if we start to skip clipped regions that are smaller than a pixel ~~or start to fade away high details~~
15:36 < bridge> probably yes, but I am already 3 steps ahead of my PRs
15:43 < bridge> Render the whole map to a texture, reduce FPS of animations, uhh, some other cursed LoD tricks
16:16 < bridge> Is every tile its own quad or how does map rendering work at all
16:17 < bridge> its beyond our understanding
16:18 < bridge> only the true developers know how all that works
16:18 < bridge> jupstar assa patiga
16:19 < bridge> Yeah
16:19 < bridge> There should be a second dev role
16:19 < bridge> "Real Developer"
16:19 < bridge> Lol
16:24 < bridge> @0xdeen can you use in your announcement? ``
16:24 < bridge> ah my bad
16:24 < bridge> wrong timestamp
16:24 < bridge> tf
16:26 < bridge> @0xdeen can you use in your announcement? ``
16:27 < bridge> it also should be `Prize pool` not `Price pool`
16:28 < bridge> That's a funny mistake
16:29 < bridge> prize pool was misspelled quite a lot here
16:30 < bridge> mostly by germans
16:30 < bridge> its the same word in german :justatest:
16:30 < bridge> nah you actually have to pay the admins the amount that is given there. there is no prize
16:30 < bridge> xD
16:31 < bridge> everyone pays 200€ so ddnet servers keep running
16:31 < bridge> it's an involuntary donation if you win the tournament
16:32 < bridge> if everyone paid 200€ then the servers would keep running till 2500
16:33 < bridge> sadly dreams don't come true
16:36 < bridge> the average server costs are ~1731€. **all** active players would mean all active players, currently ~6000. so (6000*200€)/1731 = ~693 years of funding
16:37 < bridge> My playtime is 1k hours
16:37 < bridge> How much is a fair price to pay?
16:37 < bridge> but who is gonna manage all those servers besides The Deen
16:37 < bridge> lets subtract 100yrs of payment because deen has to buy the immortality potion to manage the servers. seems fine
16:38 < bridge> reasonable
16:39 < bridge> DDNet is cheap
16:40 < bridge> i recently found free ddnet server hosting
16:40 < bridge> its not 24/7 but is good enough for testing
16:41 < bridge> trashmap?
16:41 < bridge> no another one
16:41 < bridge> they host many other game servers
16:41 < bridge> You can also host a server yourself
16:41 < bridge> On a raspberry pi
16:42 < bridge> but i dont have a raspberry pi
16:42 < bridge> Unfortunate
16:42 < bridge> It's an awesome device
16:42 < bridge> i can buy a small vps for 2.5€ a month for a ddnet server
16:43 < bridge> raspberry pi would cost about 50€
16:43 < bridge> Raspberry pi + HDD is great for torrenting too
16:46 < bridge> ig im not a real developer anymore
16:46 < bridge> real maintainer
16:46 < bridge> afaik there is a tile shader, in vulkan
16:46 < bridge> :justatest:
16:46 < bridge> built-in?
16:47 < bridge> the shader is made by jupstar
16:48 < bridge> Okay
16:48 < bridge> A shader
16:48 < bridge> I have no idea how that works but cool
16:48 < bridge> https://github.com/ddnet/ddnet/blob/master/data/shader/tile.vert
16:48 < bridge> https://github.com/ddnet/ddnet/blob/master/data/shader/tile.frag
16:48 < bridge> https://github.com/ddnet/ddnet/blob/master/data/shader/vulkan/tile.vert
16:52 < bridge> least complex shaders i have ever seen
16:53 < bridge> It's sus
16:53 < bridge> How can it do anything when it doesn't do much?
16:55 < bridge> because ddnet just renders squares from an image
16:55 < bridge> it's simple
16:57 < bridge> And yet zooming out lags
16:57 < bridge> not for me
16:57 < bridge> Supposedly, on any half decent hardware it doesn't lag lol
16:57 < bridge> what are ur specs?
16:59 < bridge> it lags because you are rendering everything and cant ignore stuff
17:00 < bridge> also remember there are animations slowing things down
17:02 < bridge> oh well i always play with entities on
17:02 < bridge> so no lags for me
17:03 < bridge> I don't think we have LoD for zoom tbh
17:06 < bridge> But actually rendering everything to a texture could work
17:06 < bridge> Or probably multiple textures
17:07 < bridge> Conceptually it makes sense
17:07 < bridge> You just need a little bit of memory i guess
17:07 < bridge> exactly twice as much texture memory
17:07 < bridge> Twice as much?
17:08 < bridge> yes, like 1 + 0.5 + 0.25 + 0.125 ...
17:10 < bridge> oh since this is 2D i am wrong, it's smaller:
17:10 < bridge> https://upload.wikimedia.org/wikipedia/commons/5/5c/MipMap_Example_STS101.jpg
17:11 < bridge> it's exactly 1/3rd more data (for full mipmapping) 😄
17:12 < bridge> Okay but like
17:12 < bridge> This is a mip map
17:13 < bridge> I think we could get this working, but I have no idea if this would make things better tbh
17:14 < bridge> It's impossible to tell without testing it unfortunately
17:14 < bridge> It's a cool idea tho
17:15 < bridge> I was always wondering if in 3D games it would be possible to render distant terrain to a texture
17:15 < bridge> Because it doesn't change much
17:16 < bridge> And it seems dumb to do so much work every frame when the result is almost the same as the previous frame
17:16 < bridge> the vulkan backend seems to support mipmaps, I checked the code.
I don't know if this is used anywhere
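(The arithmetic behind the 1/3 figure above checks out: each 2D mip level has a quarter of the texels of the previous one, so the whole chain converges to 4/3 of the base level. A standalone sketch, nothing ddnet-specific:)

```cpp
#include <cstdio>

// Each 2D mip level quarters the texel count, so the full chain
// converges to 4/3 of the base level: at most 1/3 extra memory.
int main()
{
	const long long BaseSize = 1024;
	const long long BaseTexels = BaseSize * BaseSize;
	long long ChainTexels = 0;
	for(long long Size = BaseSize; Size >= 1; Size /= 2)
		ChainTexels += Size * Size;
	std::printf("base: %lld texels, full chain: %lld texels, overhead: %.1f%%\n",
		BaseTexels, ChainTexels, 100.0 * (ChainTexels - BaseTexels) / BaseTexels);
	// prints: base: 1048576 texels, full chain: 1398101 texels, overhead: 33.3%
}
```

(The earlier "1 + 0.5 + 0.25 ..." series, which sums to 2x, is the 1D case; halving both axes is what brings the 2D overhead down to 1/3.)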
17:17 < bridge> But why would mip maps be useful?
17:17 < bridge> It's not a performance thing
17:17 < bridge> It makes textures look better i think
17:18 < bridge> "They are intended to increase rendering speed and reduce aliasing artifacts."
17:18 < bridge> https://github.com/ddnet/ddnet/blob/ff74738338202ba06aaf0518c89be40b65204565/src/engine/client/backend/vulkan/backend_vulkan.cpp#L2582
17:18 < bridge> so vulkan seems to already do this as long as you don't deactivate it
17:19 < bridge> Wait so how can it make rendering faster?
17:19 < bridge> In practice it doesn't matter i think
17:19 < bridge> opengl as well
17:19 < bridge> idk magic
17:19 < bridge> In minecraft it changes nothing as far as i can tell, textures just look better
17:21 < bridge> Also doesn't anisotropic filtering do the same thing basically
17:21 < bridge> Just better?
17:21 < bridge> Or maybe both approaches are needed?
17:22 < bridge> Anisotropic filtering also makes textures look better with minimal performance impact
17:23 < bridge> I guess mip maps are a way of doing anisotropic filtering
17:23 < bridge> That makes sense actually
17:24 < bridge> MipMapLevels are limited to 1 here
17:35 < bridge> mipmaps do make rendering faster by offering memory locality: it uses a smaller texture which makes caching more effective
17:36 < bridge> I currently don't see any effect
17:36 < bridge> yes, otherwise you get weird artifacts
17:37 < bridge> to make the rendering faster, it would need to be a relevant bottleneck. typically, ddnet rendering is cpu bottlenecked iirc
17:40 < bridge> @essigautomat the opengl 1 implementation doesn't use mipmaps i think. you can take a look at that xd
17:42 < bridge> I now managed to get more fps when zoomed out on a map by just setting the mip map level count to 10
17:42 < bridge> with vulkan
17:42 < bridge> set this to 10:
17:42 < bridge> https://github.com/ddnet/ddnet/blob/ff74738338202ba06aaf0518c89be40b65204565/src/engine/client/backend/vulkan/backend_vulkan.cpp#L2583
17:42 < bridge> Done
17:43 < bridge> maybe also the 1 in ImgSize below
17:43 < bridge> i know for a fact ddnet uses more than 1 mip map level in the vulkan implementation xD
17:43 < bridge> else it would look horrible
17:44 < bridge> i dont know what that variable controls
17:44 < bridge> even setting it to 4 improves this
17:45 < bridge> so then one thing is that ImgSize has a depth, and it is used to calculate the mip map level, afaict it always returns 1
17:46 < bridge> ```
17:46 < bridge> static size_t ImageMipLevelCount(size_t Width, size_t Height, size_t Depth)
17:46 < bridge> {
17:46 < bridge> 	return std::floor(std::log2(maximum(Width, maximum(Height, Depth)))) + 1;
17:46 < bridge> }
17:46 < bridge> ```
17:46 < bridge> What is this for Depth 1?
17:46 < bridge> ah it's maximum 🤔
17:47 < bridge> is the mipmap flag always off? 🤔
17:49 < bridge> wtf why does changing this work
17:53 < bridge> I think because I have entities overlay turned on, which contains text, which doesn't use mipmaps
17:55 < bridge> omg this was exactly it ._.
mipmaps are implemented and working
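(To make the Depth question above concrete, here is a self-contained copy of the quoted function, with ddnet's maximum() swapped for std::max so it compiles on its own:)

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Self-contained copy of the quoted ddnet function, with maximum()
// replaced by std::max. floor(log2(longest side)) + 1 is the number
// of mip levels needed to get from the base size down to 1x1.
static size_t ImageMipLevelCount(size_t Width, size_t Height, size_t Depth)
{
	return std::floor(std::log2(std::max(Width, std::max(Height, Depth)))) + 1;
}

int main()
{
	// Depth is 1 for 2D textures, so it drops out of the max; it would
	// only matter for a 3D texture deeper than it is wide or tall.
	std::printf("%zu\n", ImageMipLevelCount(1024, 1024, 1)); // 11 levels: 1024 down to 1
	std::printf("%zu\n", ImageMipLevelCount(64, 16, 1)); // 7 levels: 64 down to 1
}
```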
17:55 < bridge> they are also deeply integrated so nobody needs to think about it
18:04 < bridge> ~~I can confirm this, otherwise the culling wouldn't work~~
Idk what I smoked today, culling improves both gpu calls and cpu
18:37 < ws-client> **** @learath2 where u
19:39 < bridge> time to make blog posts with the restart of edos
19:39 < bridge> as i progress
19:40 < bridge> if it's a restart here's a logo
19:40 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1410318005597896845/Screenshot_2025-08-23-17-53-02-91_40deb401b9ffe8e1df2f1cc5ba480b12.jpg?ex=68b0947e&is=68af42fe&hm=ebdc8c4254992fb88506bc9bd80ad70e8abe8b0c9c1849688693abeb6b1d3d7e&
19:40 < bridge> i made it in some random generator xd
19:40 < bridge> looks funny
19:40 < bridge> highly jpegged OS
19:42 < ws-client> **** pegged OS
19:44 < bridge> is it FAT64 time?!??!?!?!?!
19:57 < bridge> less memory allocation, better vram usage, less memory addresses need to be filled, proper texture scaling on a pixel perfect level
19:57 < bridge> why do you even question this as a programmer?
19:58 < bridge> better vram usage?
19:58 < bridge> how
19:58 < bridge> it literally uses more memory
19:58 < bridge> you use a lower version of the image that you can store and use in the GPU's memory as cache
19:59 < bridge> since the bitmap version (which you could also just generate at runtime) will be lower res than the original, while also allowing you to run algorithms to make it less pixelated or less distorted, then storing it to the GPU
20:00 < bridge> xdd
20:00 < bridge> implementing a bitmap system from scratch is a nightmare tho, due to CPU / GPU communication :justatest:
20:00 < bridge> mipmaps actually use more vram iirc xd
20:01 < bridge> > A mipmap is a pre-calculated sequence of images, each one half the resolution of the previous level, used to improve rendering performance and visual quality when objects appear at different distances from the camera.
20:01 < bridge> >
20:01 < bridge> > For example, if you have a 1024x1024 texture, the mipmap chain would include:
20:01 < bridge> >
20:01 < bridge> > Level 0: 1024x1024 (original)
20:01 < bridge> > Level 1: 512x512
20:01 < bridge> > Level 2: 256x256
20:01 < bridge> > Level 3: 128x128
20:01 < bridge> > And so on down to 1x1
20:01 < bridge> >
20:01 < bridge> > Yes, using mipmaps increases VRAM usage by approximately 33% compared to storing just the original texture. This is because you're storing multiple versions of the same texture.
20:01 < bridge> >
20:01 < bridge> > However, mipmaps provide significant benefits:
20:01 < bridge> >
20:01 < bridge> > Reduced texture aliasing (shimmering/flickering at distances)
20:01 < bridge> > Better performance when sampling distant objects (GPU can use smaller textures)
20:01 < bridge> > Improved cache coherency
20:01 < bridge> >
20:01 < bridge> > The memory overhead is usually worth it for the visual quality and performance gains, which is why mipmaps are enabled by default in most graphics applications.
20:01 < bridge> only when you send the full resolution image to the GPU and let the GPU handle bitmapping
20:01 < bridge> Then yes, it costs more.
20:01 < bridge> But you can assign a cache to the lower resolution image (bitmap image) and discard the full resolution one when it's not needed
20:02 < bridge> But to implement such a dynamic system while keeping a record on what bitmap texture is being cached is... eh xdd
20:02 < bridge> Possible to do for sure, but takes a lot of trial and error to complete (all for not getting GPU memory leaks)
20:04 < bridge> I think chunks that include textures should be able to have a dynamic LOD system which would benefit from those bitmap changes. Otherwise handling every instance of that texture one by one is a nightmare to do
20:06 < bridge> In Teeworlds' case, every block is unique in a way that shouldn't be chunk-ated due to the visible repetitiveness of such LOD systems.
20:06 < bridge> Also, given that rapid zoom in, zoom out and immediate zoom scaling exist, the system has to prepare for discarding / allocating memory for all involved textures and their bitmap (cached) variants, which kinda sucks for performance
20:08 < bridge> (And unless you actually keep all those cached bitmaps from the map, which is just extra memory in stack, this will be like... idk, exponentially expensive based on the amount of textures used in a map)
20:08 < bridge> Rename to Erdos
20:09 < bridge> perdoles?
20:09 < bridge> :justatest:
20:10 < bridge> So based on my knowledge and my nerd factor:
20:10 < bridge> - Bitmaps are only useful if you can effectively afford textures to be cached, which will not only yield better performance, but a stable cache line that can be tracked on what to load
20:10 < bridge> - Bitmaps only suck if you want to avoid allocating more than one version of the same texture, thus essentially increasing memory, where memory is tight
20:11 < bridge> I might miss out on crucial info about this topic, in that case feel free to correct me 😄
20:11 < bridge> u are a goddamn walking encyclopedia
20:12 < bridge> (of nonsense xd)
20:12 < bridge> No, I just experimented with this exact thing a couple of weeks ago in Godot
20:12 < bridge> Rude
20:12 < bridge> In honor of Paul Erdős
20:12 < bridge> why nonsense?
20:12 < bridge> As I said, feel free to correct me
20:12 < bridge> master
20:12 < bridge> nice reference
20:13 < bridge> :feelsbadman: 🙁
20:13 < bridge> ugh we have been at the same iteration of this event multiple times
20:13 < bridge> correcting takes longer than ignoring and going on
20:13 < bridge> u started by saying mipmaps use less vram, u were already wrong, so lets just stop there
20:13 < bridge> true but... eh?
20:14 < bridge> did u get ur text from a blog post or is it ur own
20:14 < bridge> So you read only that part and then ignored the rest?
20:14 < bridge> damn
20:14 < bridge> i read a bit what u said, but it just makes no sense to me xD
20:14 < bridge> ig I'm cooked then
20:15 < bridge> btw was this a chatgpt response?
20:15 < bridge> Is AI determining what you know about this topic?
20:15 < bridge> If so, I think I know who is more cooked then
20:15 < bridge> btw by definition a cache also means more vram usage, if said cache is on the gpu, otherwise i guess on the memory but then i would bet my ass a texture transfer from host to gpu is slower than using more vram
20:15 < bridge> so ur just probs overengineering nonsense
20:16 < bridge> btw I already mentioned the 33%, you probably save more vram by disabling skins.
20:16 < bridge> that was yeah, but i already knew mipmaps used more ram
20:16 < bridge> anyway mipmaps arent bad u should use them probs nearly always
20:16 < bridge> from CPU to GPU is faster than reading back from GPU
20:16 < bridge> its still a transfer
20:17 < bridge> than having it all on gpu
20:17 < bridge> I already implemented gpu memory clearing so the VRAM situation should be better in nightly
20:17 < bridge> and???
20:17 < bridge> skins are really small though
20:17 < bridge> It's like "oh my pizza takes a long time cuz it's far away from me"
20:17 < bridge> like fucking duh
20:17 < bridge> my point is Premature Optimization Is the Root of All Evil
20:17 < bridge> but I would rather have 100 pizzas arrive in a truck, than having a new pizza every now and then with a car
20:18 < bridge> how you process the transfers is where caching matters
20:18 < bridge> sorry for being rude btw
20:18 < bridge> thus why I brought it up xd
20:18 < bridge> Well you are far from ignoring me, wtf
20:18 < bridge> how do you keep them fresh, do you have a pizza fridge?
20:18 < bridge> no, that's just called skill issue my friend
20:19 < bridge> the only thing I'd say is fishy from the chatgpt response is
20:19 < bridge> > Improved cache coherency
20:19 < bridge> especially because if its about the improved locality, then it already said that in the bullet point above
20:19 < bridge> ... who told you that
20:19 < bridge> :justatest: 🔫
20:19 < bridge> oop, wrong message reply
20:19 < bridge> you are often confidently wrong, so arguing against you is a bit tiresome, i probs shouldnt have said anything and let the day continue
20:20 < bridge> I just experimented with this in Godot, so I know what I'm talking about. It looks like some crucial information is missing, which you clearly refuse to tell me
20:20 < bridge> Instead, you stick with an AI response
20:20 < bridge> And you want me to take you seriously, while you also don't
20:20 < bridge> premature optimization is not a skill issue, its smth u should avoid
20:21 < bridge> What, is it by principle? Will I be in jail if I dare to make a system for a specific purpose?
20:21 < bridge> but in the end it would be hard to fix all those issues that came from the beginning
20:21 < bridge> unix philosophy: "Make it work, then make it beautiful, then if you really, really have to, make it fast"
20:21 < bridge> I didn't tell you to implement it in Teeworlds, now did I 😭
20:22 < bridge> I would rather use some anti-aliasing technique to smooth out the pixels than bitmapping, but there are cases where it's useful
20:22 < bridge> i dont get this sentence
20:22 < bridge> that's all I mentioned xd
20:23 < bridge> you will often want both anyway
20:23 < bridge> bitmap is not just about making it nice
20:23 < bridge> it makes it more performant
20:23 < bridge> not writing excess abstraction is different from doing excess optimization
20:23 < bridge> Yeah, so mention it to me, please correct me
20:23 < bridge> i just said it
20:23 < bridge> That?
20:23 < bridge> yes
20:23 < bridge> premature optimization is the root of all evil?
20:23 < bridge> that tells me shit about bitmap caching in GPU
20:23 < bridge> using bitmaps is literally a free option on gpus
20:24 < bridge> and you know why that is?
20:24 < bridge> :pepeW:
20:24 < bridge> the manufacturers made a specific cache line for this
20:24 < bridge> genius ye
20:24 < bridge> for a proper response @cellegenrih: you are confusing some sort of texture management system with mipmaps.
20:24 < bridge> mipmaps have nothing to with moving textures between the gpu and cpu. 20:25 < bridge> instead, they are only about **also** having lower-res version of a texture alongside it. it is a trade-off between more vram usage for better visual quality at a distance + better locality for the texture accesses. 20:25 < bridge> 20:25 < bridge> with that in mind, going back to the first statement 20:25 < bridge> > less memory allocation, better vram usage, less memory addresses need to be filled, proper texture scaling on a pixel perfect level 20:25 < bridge> I'd reduce this to 20:25 < bridge> > better ram usage and better image sampling 20:25 < bridge> ok I have no idea what the context is here but I have a hard time imagining ANY context where this makes sense :pepeW: 20:25 < bridge> also everyone please say mipmap and not bitmap :c 20:26 < bridge> uh did i say bitmap? xd 20:26 < bridge> i love bitmap fonts! 20:26 < bridge> I love bitmap fonts 20:26 < bridge> I didn't talk about moving memory from GPU to CPU, that is the worst fucking thing to do 20:26 < bridge> 20:26 < bridge> I only talked about how the dynamically created bitmap textures are allocated in a GPU cache, and in a way, how to keep track from the CPU what textures are cached onto the GPU in the first place, so the CPU knows what NOT to draw, **as well as what not to cache** 20:27 < bridge> idk where that argument about moving stuff out of the GPU comes from 20:27 < bridge> Ain't it so pretty? 20:27 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1410329819211763864/dylex.png?ex=68b09f7f&is=68af4dff&hm=4026242e1efe45162adb9ce8cc15a18f7458884cdf826b5896a0a747af00a382& 20:27 < bridge> but keeping track of anything on the cpu has **nothing** to do with the concept of mipmaps 20:27 < bridge> ^ 20:27 < bridge> How do you tell the GPU what cache to store, as well as what to draw? 20:27 < bridge> hell yeah is that the minecraft font? 20:28 < bridge> if not via OpenGL or Vulkan which you desperately need the CPU for communication? 20:28 < bridge> I don't understand what you mean as the GPU cache, I believe that cache you are talking about is on the CPU 20:28 < bridge> nah it's https://github.com/dylex/fonts 20:28 < bridge> GPU has a local memory which can store textures, so the CPU doesn't have to always transfer new (but potentially the same) texture all the itme 20:28 < bridge> GPU has a local memory which can store textures, so the CPU doesn't have to always transfer new (but potentially the same) texture all the time 20:29 < bridge> one of those rendered to a texture for my little toy Godot project 20:29 < bridge> hence why I brought it up in the first place!! 20:29 < bridge> I believe he just means gpu vram 20:29 < bridge> that is VRAM 20:29 < bridge> And Ryo didn't know that?? 20:29 < bridge> what? 20:29 < bridge> @ryozuki are my render optimizations premature? 
20:29 < bridge> He mentioned other stuff that was not related to the local GPU memory 20:30 < bridge> but rather transfering out from the GPU and other bs 20:30 < bridge> idk 20:30 < bridge> Thing is the way you're referring to it made it sound like you're referring to cache on the GPU processor die itself 20:30 < bridge> I don't want to entertain your discussion above, I just wanted to clear up your message like you asked us to 20:30 < bridge> not normal DRAM on the graphics card 20:30 < bridge> prob he misinterpreted it 20:30 < bridge> yeah 20:30 < bridge> storing fonts in a text file xd 20:30 < bridge> should've mentioned VRAM, would've been clearer 20:30 < bridge> i woudlnt say cuz ddnet is already an established product where most work is improvements like these 20:31 < bridge> hell yeah bdf my goat 20:31 < bridge> (it's a format made by Apple so idk if it's so goated but it's fun) 20:31 < bridge> Oh, so only when I mention it 20:31 < bridge> gotcha 20:31 < bridge> :pepeW: πŸ‘ 20:31 < bridge> I could've used like a serious big boy library to read the font file but instead I parsed it by hand because I felt like it xd 20:31 < bridge> i cant xd 20:31 < bridge> people without ui are crazy, did they use to make fonts by hand? 20:32 < bridge> absolutely unacceptable 20:32 < bridge> nah it's just the storage format 20:32 < bridge> though I *have* modified some of these fonts by modifying the hex values directly 20:32 < bridge> shut up at this point 20:32 < bridge> what do you mean without ui? xD everything is an ui 20:32 < bridge> well uh 20:33 < bridge> i meant gui 20:33 < bridge> but tui is gui too 20:33 < bridge> only for text 20:33 < bridge> I could've also used like a serious big boy library to write the image to a PNG but instead I wrote a NetPBM exporter 20:33 < bridge> seriously ridiculous case of NIH syndrome 20:34 < bridge> (to be clear the exporter is like a couple of lines of code that's really trivial to write) 20:34 < bridge> Oh 20:41 < bridge> TUI is VUI but not GUI 20:41 < bridge> That's why VI is called VI 20:41 < bridge> I don't know if that's an established term 20:41 < bridge> gui is ambiguous 20:41 < bridge> but point is it's visual but not graphical 20:41 < bridge> it means graphics 20:42 < bridge> text is graphics 20:42 < bridge> chillerdragon: I won't merge the builder pr, sowwy. I think they could be defined either in user's code or some "util" lib. I think we should only provide default values for certain things. 20:45 < bridge> context? i wana know 20:45 < bridge> context? 
20:41 < bridge> TUI is VUI but not GUI
20:41 < bridge> That's why VI is called VI
20:41 < bridge> I don't know if that's an established term
20:41 < bridge> gui is ambiguous
20:41 < bridge> but point is it's visual but not graphical
20:41 < bridge> it means graphics
20:42 < bridge> text is graphics
20:42 < bridge> chillerdragon: I won't merge the builder pr, sowwy. I think they could be defined either in user's code or some "util" lib. I think we should only provide default values for certain things.
20:45 < bridge> context? i wana know pwease
20:46 < bridge> I maintain [ddnet_protocol](https://github.com/MilkeeyCat/ddnet_protocol) lib by merging all chiller's prs xd
20:47 < bridge> ```js
20:47 < bridge> class SharedPtr {
20:47 < bridge> 	constructor(data, deconstruct) {
20:47 < bridge> 		this._data = data;
20:47 < bridge> 		this._deconstruct = deconstruct;
20:47 < bridge> 		this.count = 1;
20:47 < bridge> 	}
20:47 < bridge> 	deref() {
20:47 < bridge> 		if (this.count > 0)
20:47 < bridge> 			return this._data;
20:47 < bridge> 	}
20:47 < bridge> 	clone() {
20:47 < bridge> 		this.count += 1;
20:47 < bridge> 		return this;
20:47 < bridge> 	}
20:47 < bridge> 	destroy() {
20:47 < bridge> 		this.count -= 1;
20:47 < bridge> 		if (this.count === 0 && this._deconstruct) {
20:47 < bridge> 			this._deconstruct();
20:47 < bridge> 			delete this._data;
20:47 < bridge> 		}
20:47 < bridge> 	}
20:47 < bridge> }
20:47 < bridge> ```
20:47 < bridge> js shared ptr x-x
20:47 < bridge> ew
20:47 < bridge> i need the behaviour
20:49 < bridge> so @cellegenrih what do you want? Less VRAM usage, more fps? We have some nice improvements in the pipeline
20:55 < bridge> On Godot? Godot manages its VRAM very nicely, but the CPU overdraws on textures sent to the GPU, so I manage a system to dynamically allocate cached textures from stack
20:55 < bridge> (Onto Nodes where it's either being used, or should be visible, even at a far distance)
20:57 < bridge> But I won't implement it yet, due to Godot shaders just refusing to cooperate with me
21:00 < bridge> were in ddnet here, not in godot
21:03 < bridge> why c
21:03 < bridge> If the entities (as in immovable blocks) were chunk-ated into like 4 pixel divisible chunks, then probably there could be a pattern for sending less draw commands with bitmapping textures from the local memory by storing less data using unsigned values
21:03 < bridge> because I was writing a kernel module https://github.com/MilkeeyCat/nodummies
21:04 < bridge> currently we are working on chunking quads in a similar way
21:04 < bridge> i updated my blog https://edgl.dev/
21:04 < bridge> uh the image is not mine kek
21:04 < bridge> you will lose precision if it's floating point value dependent, but if it can be allocated in integer values, then you can use unsigned bytes to allocate less info, thus feeding more data to the GPU, including bitmap info and such...
but this is an entirely different system tho
21:04 < bridge> i used a theme :justatest:
21:04 < bridge> Ah
21:04 < bridge> soon blog post about kernel dev
21:04 < bridge> tiles would be next, i am not sure if this would improve much
21:05 < bridge> The point should be to have less draw commands, while the data is small enough for the GPU to draw extremely fast
21:06 < bridge> based on the vercidium implementation
21:07 < bridge> the more data you can cram into the unsigned values, the better, including what bitmap textures to use from VRAM
21:07 < bridge> if that can be done, perhaps Teeworlds could reach more than 10k FPS while in idle
21:07 < bridge> ChillerDragon: look the new blog has series https://edgl.dev/series/ddracenetwork/
21:07 < bridge> But ofc, that depends on what entities actually store.... which makes it hard af to compress into unsigned values
21:08 < bridge> cuz then you'll have to make readable functions that do exactly what you need, otherwise implementing it manually with bit manipulations will be a pain in the ass
21:10 < bridge> overengineered blog xd
21:10 < bridge> but the monospace font looks good
21:12 < bridge> we are currently at 3-5k
21:12 < bridge> see #10732
21:12 < bridge> https://github.com/ddnet/ddnet/pull/10732
21:35 < bridge> is your pfp you irl?
21:37 < bridge> can you make it wider? all that wasted space 🥲
21:37 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1410347469778194432/image.png?ex=68b0afef&is=68af5e6f&hm=c8bc472a3a46f4fbd7aeb22a0b32d4fee5d5970e625ddcd44a6152759cbc5065&
21:38 < bridge> It's called ✨*DESIGN*✨
21:38 < bridge> fuck design i can't read shit
21:38 < bridge> simplicity is best
21:38 < bridge> https://motherfuckingwebsite.com/
21:39 < bridge> doesn't mean every row has to be limited to 15 characters :(
21:40 < bridge> what
21:40 < bridge> replies are considered pings
21:40 < bridge> wait since when
21:41 < bridge> can i ping myself
21:41 < bridge> yes
21:41 < bridge> @inv41idu53rn4m3
21:41 < bridge> that line is yellow (brown) on my screen
21:41 < bridge> @kollpotato
21:42 < bridge> ghost ping doesnt work sad
21:42 < bridge> like actually look at the message highlight colour, it's just straight up poop brown
21:42 < bridge> if you look at it in isolation
21:42 < bridge> yea i love poop brown
21:53 < bridge> So this benefits maps where there are large quad amounts?
21:53 < bridge> @essigautomat
21:53 < bridge> hmm odd maybe ill switch back
21:54 < bridge> yes
21:54 < bridge> read the fucking description?
why are you asking useless questions
21:55 < bridge> previous one was much better
21:55 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1410352140282630224/dd.png?ex=68b0b449&is=68af62c9&hm=4b1ff67a4dc20400fb17956fd1265eda74612c3e477b7aa28f19110d4846306d&
21:56 < bridge> maybe be fucking nice 😄
21:57 < bridge> I expected a code complexity result
21:57 < bridge> yes
22:10 < bridge> huhh, this is like the same issue I have, but with Nodes not handling visibility correctly, as well as shaders not disabling computation when not visible
22:11 < bridge> the auto quad cropping system I mean
22:11 < bridge> ddnet chunks the number of quads anyways, attaching a clip region was not too hard
22:13 < bridge> the easiest way that I could think of outright would be to get all of the quads inside one chunk and check their position and animations, then get the least and highest vec2 values for an approximation for clipping
22:13 < bridge> mustard
22:14 < bridge> I didn't peek inside the implementation, so this is my hunch xd
22:19 < bridge> this is exactly what I am doing, but in an optimized manner by precalculating envelope extrema. Rotation makes this non trivial
22:20 < bridge> no :greenthing:
22:20 < bridge> How is https://codedoc.ddnet.org updated? It doesn't seem to have changes from weeks ago (#10620).
22:20 < bridge> https://github.com/ddnet/ddnet/pull/10620
22:22 < bridge> big
22:34 < bridge> Can you make the vec2 values integers, so you can store them as bytes?
22:36 < bridge> using floor() on the least vec2 and ceil() on the highest vec2 to make sure it's still in bounds; should produce less memory allocation without any precision loss since it's gonna be a little bigger (the cropping size I mean), but aligned
22:44 < bridge> I guess the overall formula would be for the position and size:
22:44 < bridge> - Chunk position (if each chunk is a size of 32 subpixels), then for a maximum of 1000x1000 block map, it needs a **[u16]**
22:44 < bridge> - The estimated cropping position converted to integerized [**x -> u8**] and [**y -> u8**], it needs a **[u16]** (**x** or **y** as float / chunk as float = **x** or **y** as i8, converting it to u8)
22:44 < bridge> - The estimated cropping size converted to integerized [**x -> u8**] and [**y -> u8**], it needs a **[u16]** (**x** or **y** as float / chunk as float = **x** or **y** as i8, converting it to u8)
22:44 < bridge> So it needs a u64 to get all the info needed to allocate onto
22:46 < bridge> you can also insert m_Clipped as a flag onto it
22:47 < bridge> Am I cooking or am I mega cooked? 💀
22:47 < bridge> read the PR, this exists
22:48 < bridge> I read it as a boolean ye
22:48 < bridge> u8 is too small
22:48 < bridge> I mean, yeah it's for a max 1000x1000 map, which I assume nobody really uses any higher values
22:48 < bridge> idk
22:48 < bridge> what do you think how many clip regions a map contains?
I see no need in optimizing this
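(On "Rotation makes this non trivial" above: one conservative way out, sketched with hypothetical names — this is not the PR's actual implementation — is to bound a rotating quad by the circle through its farthest corner, so the precalculated extrema stay valid at every rotation angle:)

```cpp
#include <algorithm>
#include <cmath>

struct vec2
{
	float x, y;
};

// Conservative clip bounds for a quad that can rotate around Center:
// take the circle through the farthest corner, which contains the quad
// at every rotation angle, so the extrema can be precalculated once
// instead of being recomputed as the envelope animates the rotation.
void QuadRotationBounds(const vec2 (&aCorners)[4], vec2 Center, vec2 &Min, vec2 &Max)
{
	float MaxRadius = 0.0f;
	for(const vec2 &Corner : aCorners)
	{
		const float Dx = Corner.x - Center.x;
		const float Dy = Corner.y - Center.y;
		MaxRadius = std::max(MaxRadius, std::sqrt(Dx * Dx + Dy * Dy));
	}
	Min = {Center.x - MaxRadius, Center.y - MaxRadius};
	Max = {Center.x + MaxRadius, Center.y + MaxRadius};
}
```

The bound is looser than the exact rotated extrema, but it is cheap, branch-free per quad, and never clips away anything visible.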
22:49 < bridge> we don't need the exact position, we just need the index for that grid pos
22:49 < bridge> I have a 20000 x 1000 map already
22:49 < bridge> actually no clue, Imma guess anywhere from 0 to 10
22:49 < bridge> for a normal map made purely by a mapper
22:49 < bridge> :justatest:
22:50 < bridge> then u64 is... just barely enough
22:50 < bridge> if we only store the pos and size
22:50 < bridge> both as u32
22:51 < bridge> I mean, that should still be faster tho
22:51 < bridge> u also doesn't work, you can have negative values
22:51 < bridge> mmmm
22:52 < bridge> allocate 4 bytes for a flag for negative value (pos x / y and size x / y)?
22:52 < bridge> after all, you have to somehow convert back from unsigned values if it's negative
22:52 < bridge> or use normal datatypes that can be calculated fast because this is done every frame
22:52 < bridge> ... eh?
22:53 < bridge> what does it need to calculate each frame?
22:53 < bridge> a clip region needs to be checked every frame because players _move_.
22:53 < bridge> (or is it the draw call?)
22:54 < bridge> eh? What value does it need to change based on camera movement?
22:54 < bridge> you could further optimize this with a kd tree or something, but thats a PR you can write
22:54 < bridge> yes on camera basically
22:54 < bridge> I can't since I don't really understand the problem fully
22:55 < bridge> if thing in range then render
22:55 < bridge> meaning you need to check clip region
22:56 < bridge> Oh you mean the clipping border rendering
22:56 < bridge> cuz that should be the only thing with a draw call
23:01 < bridge> the clip regions are not on the gpu or something for quads, other than groups. I don't think you can stack clips on top of each other
23:18 < bridge> yeah I think so too, and bummer it doesn't use the GPU xd
23:27 < bridge> But ryo where new ddnet blogs
23:36 < bridge> you can use ivec2 iirc
23:37 < bridge> it's just vec2 but with integers
23:41 < bridge> yeah, but the unsigned value allocation I mentioned is for the class having less data crammed in it
23:41 < bridge> not for simplifying the data
23:42 < bridge> converting an f64 to an i64 does not really benefit anything other than a specific purpose related to integers
23:43 < bridge> but converting float values into bytes using different methods of identifying a quad's position via chunks and the distance inside the top left side of the chunk, now that's where it matters
23:43 < bridge> the tl;dr version is that bytes are the smallest and fastest, yet hardest and most unsafe way to get your data as small as possible
23:44 < bridge> (btw this way, you can use shaders to render the clip)
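(Wrapping the floor()/ceil() idea from this thread into one sketch: snap the float clip rect outward to whole tile coordinates and pack the four values into a single u64. The layout is hypothetical, not the PR's format — and as the chat points out, negative coordinates or maps wider than 65k tiles would need an offset or wider fields:)

```cpp
#include <cmath>
#include <cstdint>

// Snap a float clip rect outward to whole tile coordinates (floor the
// minimum, ceil the maximum, so the integer rect always contains the
// float rect) and pack the four values into one u64 as 4 x u16.
// Assumes coordinates were already offset to be non-negative and fit
// into 16 bits; huge or negative-coordinate maps would break this.
uint64_t PackClipRect(float MinX, float MinY, float MaxX, float MaxY)
{
	const uint64_t X0 = (uint16_t)(int32_t)std::floor(MinX);
	const uint64_t Y0 = (uint16_t)(int32_t)std::floor(MinY);
	const uint64_t X1 = (uint16_t)(int32_t)std::ceil(MaxX);
	const uint64_t Y1 = (uint16_t)(int32_t)std::ceil(MaxY);
	return X0 | (Y0 << 16) | (X1 << 32) | (Y1 << 48);
}
```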