00:25 <+bridge> [ddnet] Anyone?
00:27 <+bridge> [ddnet] Can't you just find it yourself?
00:29 <+bridge> [ddnet] Put some comments in your code, then I will try
00:29 <+bridge> [ddnet] Literally 0 comments
00:32 <+bridge> [ddnet] Well sadly there is no documentation (there needs to be imo...). I would recommend getting a good IDE that can search the whole project, that way you can easily find what you need without asking here.
00:34 <+bridge> [ddnet] I did, but it still sends me to so many functions that send you to other ones, and I'm really not gonna waste so much time just to find 1 line
00:34 <+bridge> [ddnet] So I asked if someone here knows where it is
00:37 <+bridge> [ddnet] I don't, and the people who do are probably sleeping, so I recommend going to bed. Good night! :zzzz:
00:38 <+bridge> [ddnet] didn't expect an answer rn, gn anyways, just gonna leave it
00:49 <+bridge> [ddnet] You can also use https://ddnet.tw/codebrowser/
01:00 <+bridge> [ddnet] I'm trying to get it but all I get from searching is sdl2.dll
06:14 <+bridge> [ddnet] @deen Update on **hungarian.txt**:
06:14 <+bridge> [ddnet] -Fixed most uppercase errors on words
06:14 <+bridge> [ddnet] -Fixed spelling mistakes
06:14 <+bridge> [ddnet] -Added extra details to option descriptions
06:14 <+bridge> [ddnet] -Added the rest of the missing words that needed translation
06:14 <+bridge> [ddnet] https://cdn.discordapp.com/attachments/293493549758939136/1004240777712513084/hungarian.txt
07:35 <+ChillerDragon> > maybe I can try working on 0.7 support in the dissector, ChillerDragon
07:35 <+ChillerDragon> @heinrich5991 that would be hot :)
07:38 <+ChillerDragon> @yair you mean the score that gets displayed for each player in the scoreboard? That is indeed CPlayer::m_Score, but you can also modify it when it's being sent to the client in the snap
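A minimal sketch of what ChillerDragon is pointing at, with simplified, hypothetical signatures (the real CPlayer::Snap takes the snapping client and gets its netobject from the snapshot builder): the stored value is CPlayer::m_Score, but whatever gets written into the player-info netobject during snapping is what the scoreboard actually displays, so it can be overridden there.

```cpp
// Hypothetical, simplified sketch -- not the real DDNet signatures.
struct CNetObj_PlayerInfo
{
	int m_Score;
	// ... other snapped fields
};

struct CPlayer
{
	int m_Score = 0; // the value tracked on the server

	void Snap(CNetObj_PlayerInfo *pInfo) const
	{
		// Whatever is assigned here is what clients show in the scoreboard,
		// so the displayed value can be changed at this point.
		pInfo->m_Score = m_Score;
	}
};
```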
09:29 <+bridge> [ddnet] neat 😉
09:42 <+bridge> [ddnet] Thanks
10:07 <+ChillerDragon> @Jupstar seems like linus torvalds touched the new apple m2 for linux stuff https://lore.kernel.org/lkml/CAHk-=wgrz5BBk=rCz7W28Fj_o02s0Xi0OEQ3H1uQgOdFvHgx0w@mail.gmail.com/
10:08 <+ChillerDragon> cuz u said m2 cant do linux :p
10:14 <+bridge> [ddnet] It's more about the gpu
10:14 <+bridge> [ddnet] Could also be m1
10:15 <+bridge> [ddnet] It's really nice to finally have a high performance computer you can use without any fan noise
10:25 <+bridge> [ddnet] Found it, thanks man
11:15 <+bridge> [ddnet] hi, I would like to tackle this Prediction for laser doors #1279, how difficult is this to implement (I'm a beginner in cpp)?
11:15 <+bridge> [ddnet] https://github.com/ddnet/ddnet/issues/1279
11:29 <+bridge> [ddnet] This might be fixed by https://github.com/ddnet/ddnet/pull/4176. Have you checked that it still looks bouncy?
11:30 <+bridge> [ddnet] For prediction stuff it's probably best to get advice from @nuborn, he's our expert and can maybe guide you on how to best implement something there
12:11 <+bridge> [ddnet] I'm not sure how nice it is to have a different CPU architecture though. On mac I heard it's pretty well integrated, but if you're running linux, I'm not aware of such a thing
12:14 <+bridge> [ddnet] What would the problem be?
12:15 <+bridge> [ddnet] not being able to run some proprietary software, having to manually compile stuff, fewer distros to choose from
12:17 <+bridge> [ddnet] I don't think I run any proprietary software on Linux, all just installed from pacman/AUR
12:18 <+bridge> [ddnet] except some stuff via wine, surprisingly that just works on macOS
12:19 <+bridge> [ddnet] So I guess Rosetta helps for some stuff
12:23 <+bridge> [ddnet] (e.g. docker also doesn't work anymore, it's all x86_64 binaries, I think)
12:24 <+bridge> [ddnet] it seems the only proprietary software that I have installed is clion and webstorm, so that would be fine on ARM as well
12:24 <+ChillerDragon> gpu driver?
12:25 <+bridge> [ddnet] my laptop also has an intel CPU that works out of the box, I haven't installed the discrete GPU's driver
12:25 <+bridge> [ddnet] and that wouldn't matter for the apple laptop, they have their own GPU, no proprietary drivers for linux exist AFAIK
12:28 <+bridge> [ddnet] Intel can kind of blame themselves with ARM. They dropped their own ARM processors to promote Atom, but no one wanted to use Atom in mobile devices. Otherwise the iPhone would probably have run on Intel ARM CPUs from the start
12:32 <+bridge> [ddnet] A new processor generation with 30% more performance can overcome the emulation cost required to emulate old software on the new processor architecture. Then new software ofc needs to adapt to the new architecture or live with some perf decrease.. But most apps probs don't need insane perf...
12:33 <+bridge> [ddnet] but AFAIK I don't have a convenient way to do that on linux yet; that's what I'm complaining about, not the cost of emulation
12:34 <+bridge> [ddnet] So Linux on M2/M1 means you don't get any x86_64 compatibility at all?
12:34 <+bridge> [ddnet] Mh ok, but that's simply bcs it's not a deal yet.. Yuzu emulates the switch, which is an arm processor
12:34 <+bridge> [ddnet] So it basically exists
12:34 <+bridge> [ddnet] I know that theoretically it's no problem
12:34 <+bridge> [ddnet] Just the wrong way around
12:34 <+bridge> [ddnet] but I want to practically use the computer ^^
12:35 <+bridge> [ddnet] A much weaker one than an m1 btw. As it gets stronger and stronger, emulating stuff gets very annoying
12:35 <+bridge> [ddnet] In VMs you can use Rosetta to run x86-64: https://developer.apple.com/documentation/virtualization/running_intel_binaries_in_linux_vms_with_rosetta?language=objc
12:35 <+bridge> [ddnet] Not yet on Asahi
12:36 <+bridge> [ddnet] it's also not just about being able to run x86_64 code
12:36 <+bridge> [ddnet] apple integrated the whole stuff into the system AFAIK, shipping the relevant libraries also for x86_64, etc.
12:37 <+bridge> [ddnet] yeah, works quite seamlessly
12:37 <+bridge> [ddnet] PS3's Cell gave everyone hell, its emulation is still janky. Older nintendo console emulation is still usually inaccurate so it can perform well in realtime
12:39 <+bridge> [ddnet] So for native linux to have this someone would need to basically figure out what rosetta does and reimplement it
12:40 <+bridge> [ddnet] Though I guess arm does mesh rather well with native x86 instructions in most matters, hence how performant rosetta and new nintendo console emulators are
12:41 <+bridge> [ddnet] Not sure how much of the magic is in hardware vs software.
The Linux VM solution could probably be adapted for Asahi
12:43 <+bridge> [ddnet] Huh, apparently some people already ran rosetta for linux on arm64 machines other than apple's
12:44 <+bridge> [ddnet] Just got back, do you know if it fixed it or should i go test it out?
12:44 <+bridge> [ddnet] I guess what's required for Linux is a multiarch like some distros have for x86-64/x86, but for arm/x86-64 instead, to bring in all the shared libs etc
12:46 <+bridge> [ddnet] yup
12:47 <+bridge> [ddnet] ah, is arch linux available for arm? I think there's only archlinuxarm.org, which seems to be for specific systems only
12:47 <+bridge> [ddnet] and not apple hardware
12:48 <+bridge> [ddnet] Nope
12:48 <+bridge> [ddnet] just x86-64
12:51 <+bridge> [ddnet] For reference, Debian builds DDNet for these archs: amd64 arm64 armel armhf hppa i386 m68k mips64el mipsel ppc64 ppc64el riscv64 s390x sh4 sparc64 x32
12:51 <+bridge> [ddnet] I don't know, you test it please 🙂
14:16 <+bridge> [ddnet] it's a little bit complex, but it depends a bit on how it is implemented. the current implementation of laser doors isn't too nice since it works by modifying the map, so the implementation has to take that into account when a switch is enabled/disabled and when a laser door goes in/out of the snapshot (the most complex question is perhaps overlapping laser doors, although a perfect solution is maybe not necessary)
14:21 <+bridge> [ddnet] The client could read and store the value of switches, using that to not walk through a gate when it knows it is closed, eliminating the issue since the client would not have to wait for a response from the server. Could this work?
14:25 <+bridge> [ddnet] the client already has the switch values (and predicted as well) from the server, so the client only needs to identify where the doors are (using the laser netobjects -- here one question is whether to keep using entityex) and apply these to the map (using the switch status)
14:27 <+bridge> [ddnet] I guess there is no harm in introducing new netobjs instead of the entityex solution we initially went with
14:28 <+bridge> [ddnet] out of the loop on this, will have to read up on how the map interacts with players etc
14:29 <+bridge> [ddnet] I would kind of also want to go in that direction
14:29 <+bridge> [ddnet] and that was also why I didn't try to add too many features using entityex initially (also with the new work by codedev etc on extendible netobjects there are perhaps no downsides left)
14:30 <+bridge> [ddnet] The only issue I can think of is that it introduces a branch per player, and I don't know how well the branch predictor will do on that
14:31 <+bridge> [ddnet] The index of the loop strongly predicts the outcome of the branch, but idk if it's smart enough nowadays
14:36 <+bridge> [ddnet] yes, true, have never really looked at server profiling. but since those branches can skip the entityex snapping it will perhaps be a net positive?
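A minimal sketch of the client-side idea discussed above, using made-up types (SDoor, SPredictedDoors) rather than the actual DDNet prediction/collision classes: the client keeps the predicted switch states plus the door segments it identified from the laser netobjects, and treats a door as solid whenever its switch is closed, so no server round trip is needed.

```cpp
#include <vector>

struct SDoor
{
	int m_SwitchNumber; // which switch toggles this door
	float m_FromX, m_FromY; // door segment endpoints in world space
	float m_ToX, m_ToY;
};

static float Cross(float Ax, float Ay, float Bx, float By) { return Ax * By - Ay * Bx; }

// true if segment P1->P2 crosses segment Q1->Q2 (collinear edge cases ignored)
static bool SegmentsIntersect(float P1x, float P1y, float P2x, float P2y,
	float Q1x, float Q1y, float Q2x, float Q2y)
{
	const float D1 = Cross(Q2x - Q1x, Q2y - Q1y, P1x - Q1x, P1y - Q1y);
	const float D2 = Cross(Q2x - Q1x, Q2y - Q1y, P2x - Q1x, P2y - Q1y);
	const float D3 = Cross(P2x - P1x, P2y - P1y, Q1x - P1x, Q1y - P1y);
	const float D4 = Cross(P2x - P1x, P2y - P1y, Q2x - P1x, Q2y - P1y);
	return ((D1 > 0) != (D2 > 0)) && ((D3 > 0) != (D4 > 0));
}

struct SPredictedDoors
{
	std::vector<bool> m_vSwitchOpen; // predicted switch states, indexed by switch number
	std::vector<SDoor> m_vDoors; // doors identified from the laser netobjects

	// Would the predicted move from (OldX, OldY) to (NewX, NewY) pass through a closed door?
	bool HitsClosedDoor(float OldX, float OldY, float NewX, float NewY) const
	{
		for(const SDoor &Door : m_vDoors)
		{
			const bool Open = Door.m_SwitchNumber < (int)m_vSwitchOpen.size() &&
				m_vSwitchOpen[Door.m_SwitchNumber];
			if(Open)
				continue; // open door: not solid
			if(SegmentsIntersect(OldX, OldY, NewX, NewY,
				   Door.m_FromX, Door.m_FromY, Door.m_ToX, Door.m_ToY))
				return true;
		}
		return false;
	}
};
```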
16:31 <+bridge> [ddnet] @Patiga u ready for the next optimization? xd
16:56 <+bridge> [ddnet] Don't we snap entityex either way and just let clients that don't understand it drop it themselves?
16:58 <+bridge> [ddnet] am on one rn :d
17:03 <+bridge> [ddnet] trying to optimize the cpu calculations, figured out that combining the transformations into one matrix multiplication could be better
17:08 <+bridge> [ddnet] i thought about minimizing GPU load too 😄
17:08 <+bridge> [ddnet] but cpu + gpu would ofc be optimal
17:08 <+bridge> [ddnet] what did you think of?
^^
17:09 <+bridge> [ddnet] it's nothing special actually, i already thought about it for ddnet
17:09 <+bridge> [ddnet] but with ddnet it would increase load times, which might be worse than in your experiments
17:09 <+bridge> [ddnet] hm what is it? I'm curious :)
17:10 <+bridge> [ddnet] I don't remember how it is for all entities, but since entityex isn't used for anything for lasers yet (except slightly optimizing delta snapshots), it would be safe to drop it completely if a new laser/door netobject is added
17:11 <+bridge> [ddnet] well actually it's 2 approaches, since i dunno which is better, and also i am not 100% sure how much it will give, but for bigger maps probs more than enough
17:11 <+bridge> [ddnet] u pre-analyze all textures used, put all non-transparent pixels of the texture into one texture and all transparent ones into another
17:11 <+bridge> [ddnet] oh god o.o
17:11 <+bridge> [ddnet] then u draw the non-transparent ones in reversed order without alpha enabled
17:11 <+bridge> [ddnet] and then u either render the others with the stencil buffer (which was filled by the non-transparent ones)
17:12 <+bridge> [ddnet] yeah i guess try that first, if u are interested xd
17:13 <+bridge> [ddnet] don't we have to use the stencil buffer here already, since we render the layers from front to back?
17:14 <+bridge> [ddnet] yes
17:14 <+bridge> [ddnet] i just forgot to write it there and included it in the next sentence XD
17:14 <+bridge> [ddnet] the first pass fills the stencil whenever a pixel is drawn
17:15 <+bridge> [ddnet] the advantage of your approach over ddnet is that there are only a few vertices in total
17:15 <+bridge> [ddnet] the GPU ofc has to wait for the previous fragment stage before starting a new one, bcs stencil is used (needs sync for colliding fragments)
17:16 <+bridge> [ddnet] oh god, separating transparent and non-transparent sounds a bit nightmarish to me o.o
17:16 <+bridge> [ddnet] but maybe the drivers are clever enough anyway
17:16 <+bridge> [ddnet] without that approach it won't give u a lot tho
17:16 <+bridge> [ddnet] almost any tile layer has at least rounded corners
17:16 <+bridge> [ddnet] which would result in transparent pixels
17:17 <+bridge> [ddnet] I don't understand how the transparent pass should work though, the stencil buffer has to change whenever there was another non-transparent layer in between, no?
17:17 <+bridge> [ddnet] also, rendering without an alpha stage after the fragment shader should eliminate the extra work for the stencil test inside the fragment shader (e.g. is the pixel transparent)
17:18 <+bridge> [ddnet] the non-transparent ones are completely rendered before tho
17:18 <+bridge> [ddnet] u render the tilemap twice, but with enough opaque pixels it will skip basically almost all fragments
17:20 <+bridge> [ddnet] i would say it's rather simple to split the textures and call the whole render function twice
17:20 <+bridge> [ddnet] i just cannot say how much it is worth.. there are certainly maps that use transparency pretty much everywhere
17:20 <+bridge> [ddnet] but all vanilla tilesets are like 90% opaque
17:21 <+bridge> [ddnet] ah, and also implement your own clear pass
17:21 <+bridge> [ddnet] don't use glClear
17:21 <+bridge> [ddnet] it's wasted for opaque pixels
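A rough sketch of that two-pass scheme in plain OpenGL state calls. It assumes a current GL context with a stencil buffer, layer index 0 as the back-most layer, and caller-supplied callbacks (all hypothetical) that draw only the opaque or only the transparent pixels of one layer from the pre-split textures, plus a fullscreen quad in the clear color.

```cpp
#include <GL/gl.h>
#include <functional>

void RenderTilemapTwoPass(int NumLayers,
	const std::function<void(int)> &DrawLayerOpaque,
	const std::function<void(int)> &DrawLayerTransparent,
	const std::function<void()> &DrawClearColorQuad)
{
	glEnable(GL_STENCIL_TEST);
	glClearStencil(0);
	glClear(GL_STENCIL_BUFFER_BIT);

	// Pass 1: opaque pixels only, front to back, blending off.
	// A fragment of layer L is drawn where no nearer layer claimed the pixel yet
	// (stencil < L + 1) and writes L + 1, so the stencil ends up holding the index
	// of the front-most opaque layer per pixel (an 8-bit stencil caps this at 255 layers).
	glDisable(GL_BLEND);
	glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
	for(int Layer = NumLayers - 1; Layer >= 0; Layer--)
	{
		glStencilFunc(GL_GREATER, Layer + 1, 0xFF);
		DrawLayerOpaque(Layer);
	}

	// "Own clear pass": fill only the pixels no opaque layer touched,
	// instead of clearing the whole color buffer with glClear.
	glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
	glStencilFunc(GL_EQUAL, 0, 0xFF);
	DrawClearColorQuad();

	// Pass 2: transparent pixels only, back to front, blending on.
	// A fragment of layer L passes only where the covering opaque layer (if any)
	// lies behind it, i.e. where stencil < L + 1.
	glEnable(GL_BLEND);
	glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
	for(int Layer = 0; Layer < NumLayers; Layer++)
	{
		glStencilFunc(GL_GREATER, Layer + 1, 0xFF);
		DrawLayerTransparent(Layer);
	}

	glDisable(GL_STENCIL_TEST);
}
```

The depth-buffer variant mentioned further down amounts to the same comparison: write the layer index as a z value and let the depth test reject the hidden fragments.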
17:24 <+bridge> [ddnet] the thing I still don't understand is how transparent parts of layers still have the correct render order with the opaque parts
17:25 <+bridge> [ddnet] u mean the ones rendered above?
17:25 <+bridge> [ddnet] yes
17:25 <+bridge> [ddnet] the stencil buffer has 255 combinations
17:25 <+bridge> [ddnet] but u can also use the depth buffer
17:26 <+bridge> [ddnet] ah true
17:26 <+bridge> [ddnet] if that's easier to imagine, and it gives u a bit more flexibility
17:26 <+bridge> [ddnet] yeah i dunno, u can theoretically with modern hardware use a 32bit stencil buffer
17:26 <+bridge> [ddnet] since we don't use the depth buffer at all
17:27 <+bridge> [ddnet] yes, that sounds reasonable actually
17:28 <+bridge> [ddnet] the depth buffer was the thing I missed in the explanation ^^
17:28 <+bridge> [ddnet] do you have 1 quad per tile @Patiga ?
17:29 <+bridge> [ddnet] ah wait, now I'm confused, did you mean the depth buffer when you said stencil?
17:29 <+bridge> [ddnet] no, 1 quad for the entire tilemap
17:29 <+bridge> [ddnet] i imagine it with a stencil buffer that is at least 16 bit
17:29 <+bridge> [ddnet] the tilemap is stored in a texture
17:29 <+bridge> [ddnet] it's just so u know the tile layer index
17:29 <+bridge> [ddnet] and can drop transparent layers "before" the opaque ones
17:29 <+bridge> [ddnet] ah, so stencil buffer <-> depth buffer
17:29 <+bridge> [ddnet] it's the same buffer 😄
17:29 <+bridge> [ddnet] 24bit depth, 8bit stencil
17:30 <+bridge> [ddnet] maybe vulkan allows splitting them, but opengl probs not
17:30 <+bridge> [ddnet] ah
17:30 <+bridge> [ddnet] but if u can set an at least 16bit stencil buffer, i don't see any reason to use the depth buffer
17:31 <+bridge> [ddnet] even 8bit might be enough for testing, i dunno any map that uses 255 layers
17:31 <+bridge> [ddnet] if u build 1 tex per layer, why don't you combine all layers of the same group into one tex?
17:32 <+bridge> [ddnet] as an atlas?
17:32 <+bridge> [ddnet] that's not how alpha calculation works sadly
17:32 <+bridge> [ddnet] I will shut up and look at your code, i'm interested 😆
17:33 <+bridge> [ddnet] the shaders are probably the most telling
17:33 <+bridge> [ddnet] not much code documentation yet, although already some nice struct/module documentation
17:34 <+bridge> [ddnet] tbh just open it in renderdoc and u get the idea pretty much instantly
17:34 <+bridge> [ddnet] 👍 👍 👍 gud idea
17:35 <+bridge> [ddnet] it's just a fullscreen quad, with no special values.
the vertex shader transforms it so that we have the map coordinates for each pixel after fragmentation, the fragment shader then figures out the correct tile
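A CPU-side sketch of the per-fragment lookup just described (the real logic lives in the fragment shader). The names are made up, and it assumes one tile-index byte per map cell pointing into a 16x16-tile atlas, with index 0 meaning an empty tile.

```cpp
#include <cstdint>

struct STilemap
{
	int m_Width, m_Height; // map size in tiles
	const uint8_t *m_pTileIndices; // m_Width * m_Height tile indices, row by row
};

// MapX/MapY are the map-space coordinates of the fragment (what the vertex shader
// produces by transforming the fullscreen quad). On success, writes the atlas
// coordinates (u, v in [0,1]) this fragment should sample; returns false for
// out-of-bounds positions and empty tiles.
bool SampleTile(const STilemap &Tilemap, float MapX, float MapY, float *pU, float *pV)
{
	if(MapX < 0.0f || MapY < 0.0f)
		return false;
	const int TileX = (int)MapX;
	const int TileY = (int)MapY;
	if(TileX >= Tilemap.m_Width || TileY >= Tilemap.m_Height)
		return false;
	const uint8_t Index = Tilemap.m_pTileIndices[TileY * Tilemap.m_Width + TileX];
	if(Index == 0)
		return false; // index 0 = air / empty tile
	// position inside the tile, in [0,1)
	const float SubX = MapX - TileX;
	const float SubY = MapY - TileY;
	// vanilla tilesets are a 16x16 grid of tiles in one atlas
	*pU = ((Index % 16) + SubX) / 16.0f;
	*pV = ((Index / 16) + SubY) / 16.0f;
	return true;
}
```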
17:39 <+bridge> [ddnet] ```
17:39 <+bridge> [ddnet] Top five maps sorted by layer count:
17:39 <+bridge> [ddnet] types/brutal/maps/Lost Story 2.map: 192
17:39 <+bridge> [ddnet] types/brutal/maps/Wour Forlds.map: 130
17:39 <+bridge> [ddnet] types/ddmax/maps/Lost Story.map: 128
17:39 <+bridge> [ddnet] types/fun/maps/Seagull City.map: 123
17:39 <+bridge> [ddnet] types/insane/maps/Justice 2.map: 114
17:39 <+bridge> [ddnet] ```
17:39 <+bridge> [ddnet] https://cdn.discordapp.com/attachments/293493549758939136/1004413315612676116/max_layers.py
17:40 <+bridge> [ddnet] `#!/bin/env` → `#!/usr/bin/env` btw
17:40 <+bridge> [ddnet] `env` is de-facto standardized to be at `/usr/bin/env`
17:40 <+bridge> [ddnet] It's a bit sad if your renderer stops working beyond 255 layers
17:40 <+bridge> [ddnet] it doesn't
17:40 <+bridge> [ddnet] I mean, it would
17:41 <+bridge> [ddnet] u can usually set the stencil size
17:41 <+bridge> [ddnet] it's nothing too special
17:46 <+bridge> [ddnet] in the worst case the depth buffer supports unsigned int too, GLES 3 spec
17:46 <+bridge> [ddnet] https://cdn.discordapp.com/attachments/293493549758939136/1004414914288423032/unknown.png
17:46 <+bridge> [ddnet] which is basically webgl2
17:48 <+bridge> [ddnet] GL_DEPTH_COMPONENT16 is even the "standard" for embedded devices
17:49 <+bridge> [ddnet] in desktop opengl you are allowed to simply use GL_DEPTH_COMPONENT and not specify the size xd
17:52 <+bridge> [ddnet] with the depth buffer u need to ship the z value
17:52 <+bridge> [ddnet] with stencil u can simply use the stencil API
17:52 <+bridge> [ddnet] that's basically the only difference later
17:53 <+bridge> [ddnet] you could be mean and say stencil only exists bcs there weren't any shaders back in the days 😄
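For reference, a small hypothetical helper showing how the stencil (or combined depth/stencil) attachment from this discussion could be allocated with a desktop-GL renderbuffer; it assumes a current GL 3.0+ context and a function loader.

```cpp
#include <GL/glew.h> // or any other GL function loader; the calls below are core since GL 3.0

GLuint CreateDepthStencilRenderbuffer(int Width, int Height)
{
	GLuint Renderbuffer = 0;
	glGenRenderbuffers(1, &Renderbuffer);
	glBindRenderbuffer(GL_RENDERBUFFER, Renderbuffer);
	// 24-bit depth + 8-bit stencil packed into one buffer, as mentioned above.
	// A stencil-only buffer would use GL_STENCIL_INDEX8 here instead.
	glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, Width, Height);
	glBindRenderbuffer(GL_RENDERBUFFER, 0);
	// then attach with glFramebufferRenderbuffer(GL_FRAMEBUFFER,
	// GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, Renderbuffer)
	return Renderbuffer;
}
```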
18:00 <+bridge> [ddnet] https://stratechery.com/2022/political-chips/
18:00 <+bridge> [ddnet] amd has more market cap than intel rn
21:05 <+bridge> [ddnet] `test4321` doesn't work as the rcon password on usa test
21:07 <+bridge> [ddnet] also the mod password only gives helper status
21:08 <+bridge> [ddnet] i guess i got demoted :feelsbadman:
21:23 <+bridge> [ddnet] Fixed
21:53 <+bridge> [ddnet] oh damn
21:53 <+bridge> [ddnet] ddnet getting bigger
21:53 <+bridge> [ddnet] 3598 peak
21:53 <+bridge> [ddnet] https://cdn.discordapp.com/attachments/293493549758939136/1004477243017404490/unknown.png
21:54 <+bridge> [ddnet] it also hit the all-time peak on steam today
21:54 <+bridge> [ddnet] https://cdn.discordapp.com/attachments/293493549758939136/1004477378568933417/unknown.png
21:54 <+bridge> [ddnet] poggers
21:55 <+bridge> [ddnet] what's that huge jump?
21:55 <+bridge> [ddnet] https://cdn.discordapp.com/attachments/293493549758939136/1004477652079497226/unknown.png
21:55 <+bridge> [ddnet] idk
21:55 <+bridge> [ddnet] maybe it's cuz ppl have summer vacations
21:56 <+bridge> [ddnet] steam was the biggest success of ddnet kek
21:58 <+bridge> [ddnet] videos, the spike is mostly chn players
21:58 <+bridge> [ddnet] https://b23.tv/5mCrxlL
21:59 <+bridge> [ddnet] 2M views
21:59 <+bridge> [ddnet] but i would take it with a grain of salt
21:59 <+bridge> [ddnet] cuz in esports i heard they inflate views a lot
21:59 <+bridge> [ddnet] e.g. dota 2 esports is supposed to have like 9M live viewers XD
21:59 <+bridge> [ddnet] in chn
22:07 <+bridge> [ddnet] i mean there are 1.5 billion chinese ppl
22:07 <+bridge> [ddnet] we are nothing against them xd
22:07 <+bridge> [ddnet] The current population of Europe is 748,583,133
22:08 <+bridge> [ddnet] not that far
22:09 <+bridge> [ddnet] and usa is like 300 million
22:09 <+bridge> [ddnet] and south america like 400 million
22:09 <+bridge> [ddnet] so all these together are just as much as china xD
22:12 <+bridge> [ddnet] i think india is bigger
22:12 <+bridge> [ddnet] damn, I wish usa had even 2% of CHN players xd
22:13 <+bridge> [ddnet] india: 1.38 billion (2020)
22:29 <+bridge> [ddnet] there was also a large live stream on twitch yesterday, anyone catch it?