01:18 < bridge> @jupeyy_keks I got it to 3.9KB/s max for 14 inputs with bit packing and the xor trick.
01:18 < bridge> 2.1KB/s with infrequent sending
01:19 < bridge> which is only like 1.8x normal
01:30 < bridge> Can I try as well? Are you still working on the same pr branch?
01:33 < bridge> yes but I haven't committed anything because all the code is just for testing and very messy
01:33 < bridge> also I'm only doing the client compression without decompressing on the server yet
01:34 < bridge> I'll give it a go tomorrow
01:35 < bridge> I'm working with the assumption that m_Size can be removed and m_AckGameTick and m_PredTick can be added only once for the whole bundle of inputs.
01:35 < bridge> 
01:35 < bridge> I also don't touch the original regular input at the moment but if you added that to the bundle it could save up to 0.5KB/s I think
01:36 < bridge> I'm a little unsure whether you can just add those ticks once per bundle ngl. I'd have to dig around to make sure since it's been a while
01:37 < bridge> well each input just increments both of them by 1
01:37 < bridge> so as long as you know how many inputs are in the bundle you can just put it once
01:37 < bridge> If you are certain that it's always only ever incremented, it should be fine
01:38 < bridge> it depends how you want to handle it on the server
01:38 < bridge> m_AckGameTick should only ever be the highest value that the server has ever received so you don't even need to increment actually
01:38 < bridge> unless it's -1 but you put a special exception for that
01:39 < bridge> m_PredTick is always sequential
01:40 < bridge> you can check before sending the inputs that they are actually generated by the client sequentially and fill in any missing ones with a copy of the previous which will get XORed into 0s and compress well
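For reference, a minimal sketch of the bundling idea described above (hypothetical names and layout, not the actual PR code): send m_AckGameTick and m_PredTick once per bundle, then the first input verbatim and every further input XORed against its predecessor, so unchanged fields turn into runs of zero bits that bit-pack and compress well.

```cpp
// Sketch only: the real CNetObj_PlayerInput is treated here as a flat array of ints.
#include <cstddef>
#include <cstdint>
#include <iterator>
#include <vector>

struct SInput
{
	int32_t m_aFields[10]; // direction, target x/y, jump, fire, hook, ...
};

struct SInputBundle
{
	int32_t m_AckGameTick; // sent once for the whole bundle
	int32_t m_PredTick;    // sent once; input i is reconstructed as m_PredTick + i
	std::vector<SInput> m_vDeltas; // first entry raw (XOR with zero), rest XOR deltas
};

SInputBundle PackBundle(const std::vector<SInput> &vInputs, int32_t AckGameTick, int32_t PredTick)
{
	SInputBundle Bundle{AckGameTick, PredTick, {}};
	SInput Prev{};
	for(const SInput &Input : vInputs)
	{
		SInput Delta;
		for(size_t i = 0; i < std::size(Input.m_aFields); ++i)
			Delta.m_aFields[i] = Input.m_aFields[i] ^ Prev.m_aFields[i];
		Bundle.m_vDeltas.push_back(Delta);
		Prev = Input;
	}
	return Bundle;
}

// The receiver undoes the XOR in order, so neither m_Size nor per-input tick
// numbers have to go on the wire; the actual bit packing/compression would
// happen after this step.
```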
01:41 < bridge> I'm still not sure what to do with TargetX and TargetY. If you zoom out to the maximum it fits into 17 bits so I'm just packing it into that size for now. Maybe we can simply make it stop scaling with zoom and then it should fit into 10-12 bits
01:45 < bridge> we explicitly made it scale with zoom so that the server can know what you point at
01:45 < bridge> note that you can unlock zoom and zoom out further
01:49 < bridge> doesn't the server also know your zoom value?
01:50 < bridge> not directly.
it knows what area the client wants to receive data for
01:51 < bridge> https://stackoverflow.az1.qualtrics.com/jfe/form/SV_6rJVT6XXsfTo1JI
01:51 < bridge> scaling it by zoom also makes anticheat really hard
01:52 < bridge> people get false banned all the time on FNG for playing with non-standard zoom values
01:53 < bridge> I don't want to get into details of anticheat, but nonstandard zoom values in FNG already sound like cheats to me
01:53 < bridge> hmm I guess that's true
01:55 < bridge> ^ @archimede67
01:58 < bridge> actually it's probably still less expensive to just send the full 32 bit zoom value once per bundle and the non-scaled mouse positions then scale them on the server
01:59 < bridge> and then just assume that clients won't change their zoom value extremely fast; a mouse position with wrong magnitude is still much better than no input at all
02:02 < bridge> even if you only save 6 bits per mouse int you save 192 bits over the whole bundle which is more than the 32 for the zoom value
07:47 < bridge> multiview is constantly changing the zoom level though. though i guess in this case mouse events aren't being sent in the first place
07:49 < bridge> I don't think input reliability is a concern when you're spectating
08:02 < bridge> well i definitely do not want any misinterpretations of my inputs
08:02 < bridge> and i do change zoom as i play
08:03 < bridge> with the scroll wheel so it's many smaller changes 😃
08:34 < bridge> yeah but it's only when your network is lagging or dropping packets
08:34 < bridge> the less unnecessary data we send the more protection you can get from lag
08:35 < bridge> under regular conditions nothing changes
08:35 < bridge> i think there is a good way to do it without factoring zoom in
08:35 < bridge> ok how so?
08:35 < bridge> the bitpacking and xoring then compressing methods seemed promising
08:36 < bridge> I'm doing that already
08:36 < bridge> and is that not enough
08:36 < bridge> ultimately this is not a lot of data
08:36 < bridge> well it is
08:36 < bridge> kind of a lot of data
08:37 < bridge> Gm tater
08:37 < bridge> late for us
08:37 < bridge> gm teero
08:37 < bridge> presumably deen is going to tell me that we can't have 2000ms of input bandwidth but I want to get as much as we can
08:38 < bridge> the current bandwidth is 0.5kB-1.5kB/s. if you send a full 100 inputs per tick then it goes up quite a lot to like 15kB-20kB/s
08:38 < bridge> (for upload)
08:38 < bridge> hmm
08:38 < bridge> currently we are doing half that and there's not a huge need to go higher afaict
08:39 < bridge> tho i missed a lot of the convo
08:39 < bridge> wdym
08:39 < bridge> half what?
08:39 < bridge> servers run at 50 tps
08:39 < bridge> does client send 100 and it processes 2 frames in 1 tick?
08:40 < bridge> no I mean every tick the client would send 100 inputs if you were doing a full 2000ms of input bundling but that's not really necessary anyway
08:40 < bridge> oh i misread
08:40 < bridge> so it would go from 50 inputs sent per second to 5000
08:40 < bridge> but obviously we don't need 100x
08:41 < bridge> 300-400 inputs per second is enough to get very good reliability
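Back-of-envelope check of the numbers above (taking only the 50 tps figure from the chat; the rest is plain arithmetic, not a measurement):

```cpp
#include <cstdio>

int main()
{
	const int TicksPerSecond = 50; // server tick rate mentioned above
	const int aBundleSizes[] = {1, 8, 14, 100}; // 1 = today, 100 = a full 2000ms window
	for(int BundleSize : aBundleSizes)
		printf("bundle of %3d inputs -> %5d inputs sent per second\n", BundleSize, TicksPerSecond * BundleSize);
	// 300-400 inputs/s therefore means bundles of roughly 6-8 inputs, and 500ms
	// of buffering corresponds to 25 inputs at 50 ticks per second.
	return 0;
}
```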
08:42 < bridge> I think targeting a max of 500ms (25 buffered inputs) is reasonable. any bigger than that is completely unplayable
08:44 < bridge> so what does this rly end up looking like
08:44 < bridge> like is bandwidth rly the root of most of ddnet's connection problems
08:44 < bridge> idk
08:44 < bridge> the root of the connection issues is the lack of any reliability in the critical input packets
08:45 < bridge> which is what this is trying to fix
08:45 < bridge> it's rare to see connection upload speeds that can't handle 1 or 2 clients with ease and outbound player data can be sent at a way lower frequency by the server
08:45 < bridge> and already is afaik
08:45 < bridge> wdym handle 1 or 2 clients
08:45 < bridge> dummy
08:46 < bridge> it's about the server's bandwidth not the client
08:46 < bridge> are ddnet servers not run on servers with 100s of Mbps connections
08:46 < bridge> idk
08:47 < bridge> having all clients send 20kB/s to the server would be a max of 10.24 megabits/s which is sort of a lot
08:47 < bridge> idk
08:47 < bridge> deen implied they are not
08:47 < bridge> i mean you're right that we should get the bandwidth as conservative as possible
08:48 < bridge> but not at the cost of playability like i think factoring zoom might do
08:48 < bridge> another variable
08:48 < bridge> it will strictly only improve your playability over the current version
08:48 < bridge> it can never make it worse than it already is
08:49 < bridge> well actually how can it even cause problems if zoom is factored in
08:49 < bridge> is zoom being sent with each packet or as it's changed
08:49 < bridge> the issue is that the mouse position is stored as two 32-bit ints which are large and change on nearly every tick
08:49 < bridge> in this hypothetical scenario
08:50 < bridge> it will be sent only once per packet and applied to all inputs in that packet instead of for each input
08:50 < bridge> that way the mouse positions can be 12 bits instead of 32
08:50 < bridge> ah
08:51 < bridge> this is just in theory I'm not sure that's the best idea yet
08:51 < bridge> maybe the client should keep track of a separate zoom for the server then
08:51 < bridge> the zoom that the client already sends to the server is totally unrelated to this
08:51 < bridge> it won't be touched
08:51 < bridge> this is a new zoom value
08:51 < bridge> only for mouse positions
08:51 < bridge> right
08:52 < bridge> but if the zoom can't change between packets then there won't be a problem
08:52 < bridge> client can render whatever zoom
08:52 < bridge> well likely you are not spamming zoom in/out more than 3 times per second very often
08:52 < bridge> ehh
08:53 < bridge> and even if you do the only difference is that the server will use a mouse angle which is millimeters wrong which makes the desync very small and probably not noticeable
08:54 < bridge> you still get the same "angle" it's just slightly different because it gets rounded to the nearest integer position
08:54 < bridge> where each pixel on your screen is an int
08:54 < bridge> so it's literally wrong by less than a pixel
08:55 < bridge> in some cases the value can actually be floating point iirc
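A rough sketch of the per-packet zoom idea (invented names, not a real net object): the cursor goes out unscaled in 12 bits per axis, one zoom value covers the whole packet, and the server rescales. If the zoom changed mid-packet only the magnitude is slightly off; the angle is unaffected because both components are scaled by the same factor.

```cpp
#include <algorithm>
#include <cstdint>

constexpr int TARGET_BITS = 12;
constexpr int TARGET_MAX = (1 << (TARGET_BITS - 1)) - 1; // signed 12-bit range

struct SPackedTarget
{
	int16_t m_X; // unscaled cursor offset, clamped into 12 bits
	int16_t m_Y;
};

SPackedTarget PackTarget(int CursorX, int CursorY)
{
	return {(int16_t)std::clamp(CursorX, -TARGET_MAX, TARGET_MAX),
		(int16_t)std::clamp(CursorY, -TARGET_MAX, TARGET_MAX)};
}

// Server side: the single per-packet zoom is applied to every input in the packet.
void UnpackTarget(const SPackedTarget &Packed, float PacketZoom, int &TargetX, int &TargetY)
{
	TargetX = (int)(Packed.m_X * PacketZoom);
	TargetY = (int)(Packed.m_Y * PacketZoom);
}
```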
08:55 < bridge> i think SDL sends floats
08:55 < bridge> and it's only wrong if you drop packets and then change zoom while your network is dying and then a later input fills in the server buffer in that time
08:55 < bridge> yeah
08:55 < bridge> it's not
08:56 < bridge> it uses ints
08:56 < bridge> maybe ddnet does
08:56 < bridge> your mouse pos is rounded to an int every time it moves afaik
08:56 < bridge> that's why the aim bind works
08:56 < bridge> Btw scaling target pos with zoom is still broken af
08:57 < bridge> wdym
08:57 < bridge> You can have targetxy == 0 at some zooms
08:57 < bridge> And generally zooming in is broken with mouse angles
08:57 < bridge> Because everything is truncated to integers
08:57 < bridge> this might make hookline even less reliable lol
08:58 < bridge> It already is xd
08:58 < bridge> European servers are mostly served with high bandwidth like 100 to 1000 Mbit/s. But in Asia bandwidth is very expensive, esp. China
08:58 < bridge> targetxy == 0 is explicitly forbidden in the code
08:58 < bridge> Only mouseposxy == 0 is forbidden
08:58 < bridge> not true?
08:58 < bridge> targetxy does not care
08:59 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1247806874749763646/image.png?ex=66615e36&is=66600cb6&hm=318ba0289c66d5f5c6ef3e773c7d7124f2c985a5e4edf3819e062ffef3e971e7&
08:59 < bridge> it's removed on the client and server multiple times even
09:00 < bridge> the idea is just you send the targets before doing this calculation, then do it on the server
09:00 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1247807163624329326/image.png?ex=66615e7a&is=66600cfa&hm=133a86bfeac617cd0d4a85d97ffb2aee92f51a275956aeec5497e2488f0ce6c1&
09:00 < bridge> This can lead to targetxy being 0
09:00 < bridge> how
09:00 < bridge> ??
09:00 < bridge> ??
09:01 < bridge> it wouldn't even matter because it gets sanitized by the server as soon as it receives the input
09:01 < bridge> (int)(1 * 0.6) == 0
09:02 < bridge> But it messes up the hook line smh
09:02 < bridge> When I put my mouse in the centre the hook line disappears and I hook to the top
09:02 < bridge> When zoomed in
09:02 < bridge> I am not even planning to send hookline data in reliability inputs
09:02 < bridge> we do lots of corner cutting so you get more reliability
09:03 < bridge> these inputs will only get used if the server didn't receive your other ones
09:03 < bridge> just zooming in or out can make some edges possible or not
09:03 < bridge> That is bs
09:03 < bridge> that's fine
09:04 < bridge> it won't change that
09:04 < bridge> You are disadvantaged when playing with more zoom than 10
09:04 < bridge> this is a separate conversation
09:04 < bridge> ye xd
09:05 < bridge> I'm just ranting about the inputs scaling by zoom dw
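A tiny demo of the truncation problem being pointed out: scale a small target by a zoom factor below 1 (zoomed in) and cast to int, and distinct cursor positions collapse onto 0.

```cpp
#include <cstdio>

int main()
{
	const float Zoom = 0.6f; // zoomed in
	for(int TargetX = 0; TargetX <= 3; ++TargetX)
		printf("target %d -> scaled %d\n", TargetX, (int)(TargetX * Zoom));
	// target 1 -> scaled 0 is the "(int)(1 * 0.6) == 0" case above: the scaled
	// target can become 0 even though the raw cursor wasn't at the centre.
	return 0;
}
```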
09:05 < bridge> also you're not disadvantaged, you still only get as many positions as there are pixels on your screen. they just have slightly different angles than at different zoom levels.
09:05 < bridge> 
09:05 < bridge> you're actually disadvantaged if you use a low resolution monitor
09:06 < bridge> Why is that?
The positions stay the same not dependent on res
09:06 < bridge> hmm maybe that's true actually
09:06 < bridge> I guess the game assumes you use 1920 res
09:06 < bridge> If you zoom to zoom 15 you have literally fewer angles for your hook to be in since integer precision is limited
09:07 < bridge> I wonder if you can see the cursor jumping multiple pixels when you move it slightly on a higher res screen
09:07 < bridge> It will be the same distance on the monitor but yes
09:10 < bridge> Yes
09:11 < bridge> Imagine you have a max mouse distance of 400. When zooming in 2x you would have the same precision as someone playing with mouse max distance 200
09:11 < bridge> why's that
09:11 < bridge> oh yeah,
09:11 < bridge> hmm
09:11 < bridge> .
09:12 < bridge> not sure if you can do anything about that without changing physics a lot
09:12 < bridge> ???
09:12 < bridge> ??
09:12 < bridge> It was just added lmao
09:12 < bridge> what?
09:12 < bridge> when
09:12 < bridge> The zoom scaling
09:12 < bridge> I thought the game sends mouse positions as int forever
09:12 < bridge> Yes
09:12 < bridge> oh
09:12 < bridge> it was?
09:13 < bridge> Targetxy only gets scaled for the tp to mouse position cmd
09:13 < bridge> That's the only reason
09:13 < bridge> hmm
09:13 < bridge> You could just send your zoom to the server and the server can handle it.
09:14 < bridge> would need to put it somewhere
09:14 < bridge> it should be part of however the server knows your screen window
09:14 < bridge> for removing entities
09:15 < bridge> Oh that's true. So it could already be done?
09:15 < bridge> heinrich said it doesn't send the zoom value directly only the window position
09:15 < bridge> What's happening here?
09:15 < bridge> The client sends its cursor in zoomed state nowadays
09:16 < bridge> yeah
09:16 < bridge> It used to not do it
09:16 < bridge> yeah
09:16 < bridge> exactly
09:16 < bridge> I guess you can figure that out
09:16 < bridge> the idea is it could just send the non-zoomed value along with ``m_pClient->m_Camera.m_Zoom`` then the server can get the exact position by itself
09:17 < bridge> not that I'm suggesting it should do this
09:17 < bridge> just that it could
09:17 < bridge> Might be good, but if you plan on doing something in these regards, then please make sure dyncam will be supported properly, because it isn't. Neither with the way it was before nor now with the zoomed cursor being sent
09:18 < bridge> wdym
09:18 < bridge> You can't get the correct position of dyncam on the serverside, only if the server knows the client uses dyncam (e.g. via a command to enable)
09:18 < bridge> weird
09:19 < bridge> you could steal it from checksum with a lot of effort I think
09:19 < bridge> How long has this been a thing?
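Purely illustrative sketch of the idea floated above (this payload does not exist): send the raw, non-zoomed cursor together with ``m_pClient->m_Camera.m_Zoom`` and let the server reconstruct the position itself. The dyncam caveat is that without deadzone and follow factor the server still cannot reproduce the dynamic camera offset.

```cpp
struct SCursorInfo // hypothetical extra payload
{
	int m_RawTargetX; // cursor relative to the tee, before any zoom scaling
	int m_RawTargetY;
	float m_Zoom; // m_pClient->m_Camera.m_Zoom at the time of the input
};

void ReconstructOnServer(const SCursorInfo &Info, int &WorldTargetX, int &WorldTargetY)
{
	// Mirrors what the client currently does before sending: scale the raw
	// cursor by the zoom factor to get the world-space target.
	WorldTargetX = (int)(Info.m_RawTargetX * Info.m_Zoom);
	WorldTargetY = (int)(Info.m_RawTargetY * Info.m_Zoom);
	// A dyncam-aware version would additionally need the client's deadzone
	// and follow factor, which is the gap pointed out above.
}
```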
09:19 < bridge> only cursor + zoom is fucked iirc
09:19 < bridge> hi fokkonaut
09:19 < bridge> hi
09:20 < bridge> I expect you would be the biggest proponent of the mouse zoom with your drawing feature on your server
09:21 < bridge> https://github.com/fokkonaut/F-DDrace/blob/de3299d92ed85bd7c64c1f098ae7990d2b5155d6/src/game/server/gamecontext.cpp#L2937-L2945
09:21 < bridge> 
09:21 < bridge> https://github.com/fokkonaut/F-DDrace/blob/de3299d92ed85bd7c64c1f098ae7990d2b5155d6/src/game/server/player.cpp#L906C16-L906C26
09:21 < bridge> 
09:21 < bridge> https://github.com/fokkonaut/F-DDrace/blob/de3299d92ed85bd7c64c1f098ae7990d2b5155d6/src/game/server/entities/character.cpp#L3930-L3975
09:22 < bridge> nice 👍
09:22 < bridge> ah
09:23 < bridge> so you can get exact position without zoomed mouse?
09:23 < bridge> https://github.com/fokkonaut/F-DDrace/blob/de3299d92ed85bd7c64c1f098ae7990d2b5155d6/src/game/server/entities/character.cpp#L3969-L3973
09:23 < bridge> This is where it checks for the new clients
09:24 < bridge> if they are new, we don't multiply it with the zoomlevel because they send it correctly
09:24 < bridge> hmm
09:24 < bridge> rather annoying that ddnet did not do this
09:24 < bridge> For older clients (before that change) I manually calculated the zoom level
09:24 < bridge> morning from gym
09:24 < bridge> hi
09:25 < bridge> That seems to be the correct way. Physics should use a non-scaled version
09:25 < bridge> if you change it back then you need 3 ways to calculate mouse position
09:25 < bridge> Gm
09:25 < bridge> yeah I agree
09:25 < bridge> Yes, that's true. I was not really happy when I saw them merge this without issues
09:25 < bridge> remove dyncam 😭
09:26 < bridge> please
09:26 < bridge> what does dyncam have to do with this
09:26 < bridge> .
09:26 < bridge> it just needs more data sent to the server it can stay
09:26 < bridge> You need follow factor, deadzone etc
09:26 < bridge> otherwise u can't calculate that pos
09:27 < bridge> client should just send two
09:27 < bridge> one zoomed and one unzoomed cursor pos
09:27 < bridge> and unzoomed = physics
09:27 < bridge> zoomed = do whatever u want
09:27 < bridge> We just talked about how huge those two 32-bit ints for targetxy are that's why I was not suggesting this
09:27 < bridge> ah
09:28 < bridge> well, if u'd need to send follow factor, deadzone etc then it's not gonna be less
09:28 < bridge> that is also true I guess
09:28 < bridge> Could send it once, or only on change
09:28 < bridge> But you only need to send them on joining and on changing them
09:28 < bridge> but idk if that's worth it
09:28 < bridge> yea
09:29 < bridge> the zoom level changing edge hooks seems really bad actually?
09:29 < bridge> like that was a bad change
09:29 < bridge> yes
09:29 < bridge> yes
09:29 < bridge> yes
09:29 < bridge> do we have an issue for that?
09:29 < bridge> i always have to play around with different zoom to hit bad steep hooks
09:29 < bridge> yea :D
09:30 < bridge> well in that case it gives you an advantage lol
09:30 < bridge> before it was like you had zoom = 10 always
09:30 < bridge> Well no xd
09:30 < bridge> yes
09:30 < bridge> if you play on zoom 10 it's the same as before
09:30 < bridge> but if you line up an edge hook then zoom in/out and it goes away I feel that's not good
09:31 < bridge> I guess you can get lucky with the int truncation and then hit the edge hook lmao
09:32 < bridge> lots of assumptions that everyone uses zoom 10 lol
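A rough sketch of the compatibility handling described above (invented names, not the linked F-DDrace code): clients that already send the target scaled by zoom are used as-is, while for older clients the server multiplies by a zoom level it tracks itself.

```cpp
struct SClientTarget
{
	int m_TargetX;
	int m_TargetY;
	bool m_SendsScaledTarget; // e.g. derived from the client version
	float m_TrackedZoom; // server-side estimate, only needed for old clients
};

void NormalizeTarget(SClientTarget &Target)
{
	if(!Target.m_SendsScaledTarget)
	{
		// Old client: the cursor arrives unscaled, bring it into the same
		// zoom-scaled space the newer clients already send.
		Target.m_TargetX = (int)(Target.m_TargetX * Target.m_TrackedZoom);
		Target.m_TargetY = (int)(Target.m_TargetY * Target.m_TrackedZoom);
	}
}
```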
09:39 < bridge> Where would you store the scaled version? In CNetObj_PlayerInput?
Would that break compatibility?
09:39 < bridge> Yes, that would break it
09:39 < bridge> I think so
09:40 < bridge> i'd create a new Ex object i guess ? haven't really looked into all that in a while
09:40 < bridge> So a new netmsg just for camera related changes I guess?
09:40 < bridge> Before doing that we should probably wait on @heinrich5991 to approve smth in the issue
09:41 < bridge> I will always happily waste my time doing fun stuff
09:42 < bridge> :D
09:48 < bridge> @teero777 you have to be careful because some clients will still be sending the scaled mouse inputs
09:49 < bridge> that's why I am a little upset that this was ever allowed cause it's going to require permanent backwards compatibility code to be fixed
09:52 < bridge> That's the reason I do not understand why @heinrich5991 let that one slip.
09:53 < bridge> He himself said we have to discuss such things properly before.
10:04 < bridge> :justatest:
10:04 < bridge> gm
10:11 < bridge> Actually only 1 check for older clients that don't send scaled inputs.
10:11 < bridge> If the client doesn't send the camera netmsg we know it will be scaled unless the version is lower than x
10:12 < bridge> :brownbear:
10:12 < bridge> seems a little annoying to track if the client sends the camera netmsg
10:13 < bridge> you need extra state somewhere right?
10:14 < bridge> Ye Just some bool m_IsTargetScaled
10:14 < bridge> idk where the server puts such stuff
10:15 < bridge> And each client will have some Params such as deadzone, follow factor etc
10:15 < bridge> I think you need to add it to some sort of reset function for handling swaps
10:15 < bridge> idk it's not my concern :)
10:15 < bridge> in-between versions(?)
10:16 < bridge> wdym?
10:16 < bridge> scale thing wasn't there forever was it
10:17 < bridge> i mean even older clients can play this game :justatest:
10:17 < bridge> or does ddnet check it already
10:17 < bridge> latter i think xd
10:17 < bridge> it's the already scaled clients who you need to check for
10:18 < bridge> Older clients' inputs just don't get scaled by the server. Since they are not scaled by the client it's fine.
10:20 < bridge> so you propose to send additional data about camera parameters with every input for newer clients
10:20 < bridge> Only on camera settings change and on join.
10:22 < bridge> hm i wonder if there's something you can manipulate server with
10:23 < bridge> not on ddrace though
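What the "extra state somewhere" could look like, as a hedged sketch (all names invented, this netmsg does not exist): a per-client flag plus the dyncam parameters, filled on join and on change, and cleared by the reset path so swaps don't keep stale values.

```cpp
struct SClientCameraInfo
{
	bool m_ReceivedCameraMsg = false; // clients sending the new camera netmsg use unscaled targets
	float m_Deadzone = 0.0f;
	float m_FollowFactor = 0.0f;

	void Reset() { *this = SClientCameraInfo(); }
};

// Per the discussion above: no camera netmsg means the target is zoom-scaled,
// unless the client version predates the scaling change.
bool IsTargetScaled(const SClientCameraInfo &Info, int ClientVersion, int VersionTargetScaling /* placeholder threshold */)
{
	if(Info.m_ReceivedCameraMsg)
		return false;
	return ClientVersion >= VersionTargetScaling;
}
```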
10:45 < bridge> @heinrich5991 didn't you want to unsort #8448 before merging? The diff ended up being massive for no reason
10:45 < bridge> https://github.com/ddnet/ddnet/pull/8448
10:47 < bridge> It's also the only language file with a different order of strings
10:49 < bridge> I wanted to, but I forgot to do it yesterday and now deen has merged it
10:50 < bridge> no way to undo, I guess
10:54 < bridge> hmmm. we don't have a script to sort the translations, it seems?
10:54 < bridge> weird oversight
10:54 < bridge> @learath2
10:54 < bridge> I was thinking we could revert the commit and add it sorted again
10:54 < bridge> but I can't find the sort function
10:56 < bridge> I doubt there is a sort function, is it even sorted normally?
10:56 < bridge> yes, seems to be sorted in other language files
10:57 < bridge> Does anyone know if the automapper rules format is described anywhere? Or is this a "custom/yolo" format and I need to hope that ddnet gets it right?
10:57 < bridge> yolo format
10:57 < bridge> xD okay, an example of what may break is the length of the automapper rule **name**, I probably need to extract limits out of the client
10:58 < bridge> Sorted by what though? german.txt:L482 is Yes. L488 is Name Plate
10:58 < bridge> hmmmmmm
11:00 < bridge> Maybe sorted within the file it's grabbed from?
11:07 < bridge> i found a gigachad
11:07 < bridge> https://ageinghacker.net/
11:07 < bridge> can u imagine me having this blog when im old
11:08 < bridge> https://ageinghacker.net/talks/#jitter-gnu40-2023
11:12 < bridge> when phd thesis
11:12 < bridge> and when ego check
11:12 < bridge> the guy is old in that picture?
11:13 < bridge> damn gigachad look
11:14 < bridge> xd
11:15 < bridge> :pepeW:
11:15 < bridge> :owo:
11:15 < bridge> phd doesn't pay off in computer science, at least where I live :saddo:
11:16 < bridge> I can't imagine a PhD in anything but Chemistry paying 😛
11:16 < bridge> Biology, if you're lucky, making the next Covid vaxine
11:17 < bridge> Biology, if you're lucky, making the next Covid vaccine
11:17 < bridge> ty for spelling
11:17 < bridge> Biology, if you're lucky, making the next vaccine
11:18 < bridge> It looked so wrong, I couldn't even tell what was wrong
11:20 < bridge> I doubt any PhD that worked on any of the vaccines got anywhere near fairly paid
11:21 < bridge> probably true. In the end this is case-by-case, I also know a person who did a PhD in IT who now makes 6 figures, he is specialised in natural language processing with neural networks
11:22 < bridge> but only because he didn't stay in science and works in the industry
11:22 < bridge> I don't think you need a phd for that
11:23 < bridge> Machine learning people do get paid really well yeah. I wish I was more interested in that stuff
11:23 < bridge> medicine?
11:24 < bridge> Oh yeah duh
11:24 < bridge> Of course you don't need a phd for that, you just need a computer and brain power. But the topic he was working on was very very specific and he was an expert in this field and he keeps working on it at his job
11:24 < bridge> for me the only interesting part of ML are compilers related to optimizing that stuff
11:24 < bridge> MLIR
11:25 < bridge> not interested in the hardware optimizing this stuff? Special TPUs and architectures?
11:25 < bridge> https://mlir.llvm.org/docs/Dialects/MLProgramOps/
11:25 < bridge> there is a new dialect in MLIR
11:25 < bridge> https://mlir.llvm.org/docs/Dialects/IRDL/
11:25 < bridge> > IR Definition Language Dialect: IRDL is an SSA-based declarative representation of dynamic dialects. It allows the definition of dialects, operations, attributes, and types, with a declarative description of their verifiers. IRDL code is meant to be generated and not written by hand. As such, the design focuses on ease of generation/analysis instead of ease of writing/reading.
11:25 < bridge> >
11:25 < bridge> > Users can define a new dialect with irdl.dialect, operations with irdl.operation, types with irdl.type, and attributes with irdl.attribute.
11:26 < bridge> ```mlir
irdl.dialect @cmath {
  irdl.type @complex {
    %0 = irdl.is_type : f32
    %1 = irdl.is_type : f64
    %2 = irdl.any_of(%0, %1)
    irdl.parameters(%2)
  }

  irdl.operation @mul {
    %0 = irdl.is_type : f32
    %1 = irdl.is_type : f64
    %2 = irdl.any_of(%0, %1)
    %3 = irdl.parametric_type : "cmath.complex"<%2>
    irdl.operands(%3, %3)
    irdl.results(%3)
  }
}
```
11:27 < bridge> looks pog
11:27 < bridge> an IR to define IRs dynamically
11:27 < bridge> rly?
11:27 < bridge> i guess it depends what niche ur studying
11:27 < bridge> ah, I see 🙂
11:27 < bridge> like ml i was about to say xd
11:27 < bridge> https://mlir.llvm.org/docs/Dialects/PolynomialDialect/
11:27 < bridge> a dialect for polynomial operations
11:27 < bridge> Solving any problem with a neural network always feels like a cop out to me. It's like saying the best minds of humanity couldn't figure it out. Just make some fuzzy guesses at it
11:28 < bridge> ```llvm
// A constant polynomial in a ring with i32 coefficients and no polynomial modulus
#ring = #polynomial.ring
%a = polynomial.constant <1 + x**2 - 3x**3> : polynomial.polynomial<#ring>

// A constant polynomial in a ring with i32 coefficients, modulo (x^1024 + 1)
#modulus = #polynomial.int_polynomial<1 + x**1024>
#ring = #polynomial.ring
%a = polynomial.constant <1 + x**2 - 3x**3> : polynomial.polynomial<#ring>

// A constant polynomial in a ring with i32 coefficients, with a polynomial
// modulus of (x^1024 + 1) and a coefficient modulus of 17.
#modulus = #polynomial.int_polynomial<1 + x**1024>
#ring = #polynomial.ring
%a = polynomial.constant <1 + x**2 - 3x**3> : polynomial.polynomial<#ring>
```
11:28 < bridge> but it's working quite well ^^
11:29 < bridge> it would be weird to not use it just because we can't explain its inner workings
11:29 < bridge> maybe elon musk's ai to solve the universe is a cop out scheme xd but ml definitely has use cases
11:29 < bridge> Indeed. It just doesn't align well with how I think
11:30 < bridge> there are also only so many great minds but millions of gpus
11:31 < bridge> I'd guess there are at least billions of GPUs and at least millions of great minds ^^
12:03 < bridge> In my mind AI is, by design, an imperfect system. It's not 100% accurate or even mathematically correct, but just a very very good guessing machine, because we can't do better than guessing good. You can throw an AI at problems where it's known that no perfect solution exists
12:04 < bridge> That's also why chat AIs hallucinate: never learned the real answer? Make a best-effort guess (which might be totally bs)
12:05 < bridge> I guess you're talking about LLMs?
12:05 < bridge> "AI" is quite imprecise
12:05 < bridge> to be more specific, i am talking about neural networks
12:05 < bridge> e.g.
support vector machines are very explainable
12:06 < bridge> you can throw a support vector machine at a problem where it's known to not work so well. Same issue, it just makes the best guess
12:06 < bridge> because a SVM normally just does linear operations even when the problem is parabolic in nature
12:06 < bridge> Afaik it's a mathematical fact that a neural network can learn to perform any computable operation perfectly if it has enough neurons
12:07 < bridge> ye it can learn any function
12:07 < bridge> yes, and that's called overfitting
12:07 < bridge> no
12:07 < bridge> I disagree that SVM have the same issues as neural networks
12:07 < bridge> overfitting would be learning a different function
12:07 < bridge> SVM are explainable, you know what they can and cannot do
12:07 < bridge> NN are not explainable
12:07 < bridge> the difference of explainability is what makes NN so annoying
12:07 < bridge> It's such a shame that they don't learn in any logical manner 😄
12:08 < bridge> I am not talking about explainability. A small NN is nothing different than a multilayer perceptron, which is also explainable. I am talking about imperfect solutions
12:09 < bridge> (a multilayer perceptron is a small NN to be correct here)
12:09 < bridge> i think he means explainable in a human context, not as in you don't know how it works under the hood
12:09 < bridge> I mean explainable as in you know how it solves tasks, i.e. you know what failure modes to expect, etc.
12:10 < bridge> if you have a tiny NN, it might well be explainable, yes
12:10 < bridge> to be fair, i don't think that there is a 'correct' way to explain how to solve a task
12:10 < bridge> If your model has more parameters than the size of the problem, the model can simply learn every solution, like a map. You don't gain the generalization
12:10 < bridge> sooo heuristics are bad?
12:11 < bridge> heuristics are used in everyday CS, say, starting from A*
12:11 < bridge> or solving SAT problems
12:11 < bridge> yes, but a NN can still learn any function
12:11 < bridge> including the generalized function
12:11 < bridge> yet, I wouldn't call modern SAT solvers bad, even though they can never tackle the general problem
12:12 < bridge> they only tackle the practical problems
12:12 < bridge> no, but @learath2 said he couldn't wrap his brain around why the best brains of humanity couldn't figure it out, so we do the next best thing: guessing good
12:12 < bridge> same for SAT solvers ^^
12:12 < bridge> SAT solvers are also about guessing good
12:12 < bridge> It's not that I can't wrap my brain around it. It's that it feels like giving up on a problem and just throwing guesses at it instead
12:13 < bridge> I guess that feeling would also apply to SAT solvers then?
12:13 < bridge> you can kill a lot of problems with them
12:13 < bridge> I do understand these are incredibly hard problems that may even lack solutions
12:14 < bridge> Well, now I can go on a lil vacation with peace in mind
12:14 < bridge> The "gain" from NNs is that they predict in a linear manner (just matrix multiplications, at least in the easiest cases), while you can technically throw them at problems way worse by nature
12:14 < bridge> Yes, but at least these do something easy enough to understand guided by heuristics
12:15 < bridge> @heinrich5991 are NN SAT solvers even used in any important context
12:15 < bridge> SAT solvers do also guarantee you the correct solution if you use them correctly
12:15 < bridge> It's closer to A*
12:15 < bridge> or is it like a 'only false negative no false positive' thing
12:16 < bridge> NNs generating computer understandable proofs also give you "no false positives"
12:16 < bridge> A neural network is more like, you feed it input, you get output, there is some probability that the output is correct provided that your input is within some domain
12:17 < bridge> I'd put that under explainability
12:17 < bridge> since we have no idea how they work, we can't put error bounds on what they do
12:17 < bridge> for a SVM, we can try to do that
12:17 < bridge> This feels even more wrong 😄 Bolt on even more to tame the inherent fallibility of them. Idk they just feel wrong to me. It's a feeling, you can't really explain it away
12:18 < bridge> or a decision tree/forest
12:19 < bridge> I bet there are NN guided SAT solvers out there
12:20 < bridge> The randomness of it all just gives me a sense of general unease. Almost feels like in the future we won't be able to understand anything about computing at all. It'll just be throw data at the newest machine learning paradigm, if it doesn't work, oh well, nothing else we can do
12:22 < bridge> it may just be my opinion but i feel like the growth of new ML techniques will die down somewhat soon (10 years)
12:23 < bridge> there's only so many new things you can do with linear algebra, as of now it just seems like we're only increasing the training compute
12:23 < bridge> It would be such a shame if we'd never see things as beautiful as, say, Dijkstra's algorithm. There is no beauty in neural networks, they learn in a random manner, they improve with brute-force conditioning
12:24 < bridge> using ML for fields like mathematics or computing where you need accuracy is usually a losing game, i doubt it'll take over in that sense
12:24 < bridge> I do secretly hope that there is a mathematical great barrier somewhere along the line there. Like the halting problem for computing or the incompleteness theorem
12:26 < bridge> just wait until openai releases chatgpt6 with humanoid girlfriends, you'll change your mind
12:26 < bridge> I'll allow that one
12:27 < bridge> Anyway, these are all very subjective opinions of mine. In reality I do understand very well that some of these problems are way beyond our current level.
Perhaps even beyond a mathematical great barrier of their own, making them only solvable by an approach like this
12:58 < bridge> mtc guy dreams about this
14:50 < bridge> MTC guy DREAMS about this
15:22 < bridge> oh yeah our amazingly huge forum community lmao
15:22 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1247903495638618163/image.png?ex=6661b832&is=666066b2&hm=8b5717d3f9481bf9404a521fd5e89e33102dc7fab36e846ec10242bd62f4beb8&
15:24 < bridge> :kek:
15:24 < bridge> might want to update the random servermsgs
15:24 < bridge> ,/info should maybe display discord
15:53 < bridge> https://github.blog/2024-06-03-arm64-on-github-actions-powering-faster-more-efficient-build-systems/
15:53 < bridge> @heinrich5991 @learath2 when ddnet arm test in ci
15:54 < bridge> > GitHub is ecstatic to unveil ArmⓇ-based Linux and Windows runners for GitHub Actions. This new addition to our suite of hosted runners provides power, performance and sustainability improvements for all your GitHub Actions jobs.
15:54 < bridge> I don't even know how well ddnet runs on aarch64
15:54 < bridge> > ecstatic
15:54 < bridge> when less corporate language?
15:54 < bridge> We expect to begin offering Arm runners for open source projects by the end of the year
15:54 < bridge> oh rip
15:55 < bridge> runs nice on my m1
15:56 < bridge> > We expect to begin offering Arm runners for open source projects by the end of the year
15:57 < bridge> why not "happy"? ^^
15:58 < bridge> They are more than happy, elated even
16:10 < bridge> @robyt3 I'm not a terribly huge fan of your `std::function` use in #8395, they really don't optimize out very well usually and this is in the hot path, have you checked that this gets inlined properly? Perhaps a template `ConsumeEvents` would be more appropriate?
16:10 < bridge> https://github.com/ddnet/ddnet/pull/8395
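For context, the two shapes being compared (simplified sketch, not the actual #8395 code): a type-erased `std::function` callback versus a template parameter. With the template the callable's type is visible at the call site, so the per-event call can be inlined and a capturing lambda needs no type erasure or potential heap allocation.

```cpp
#include <functional>
#include <vector>

struct CEventConsumerDemo
{
	struct CEvent { int m_Key; };
	std::vector<CEvent> m_vEvents;

	// Option A: type-erased callback; adds an indirect call per event and may
	// heap-allocate when the lambda's captures don't fit the small buffer.
	void ConsumeEvents(const std::function<void(const CEvent &)> &Consumer) const
	{
		for(const CEvent &Event : m_vEvents)
			Consumer(Event);
	}

	// Option B: template callback; the lambda's operator() is known to the
	// compiler here and can be inlined into the loop.
	template<typename TConsumer>
	void ConsumeEventsT(TConsumer &&Consumer) const
	{
		for(const CEvent &Event : m_vEvents)
			Consumer(Event);
	}
};
```

Both are called the same way from the caller's side, e.g. `Demo.ConsumeEventsT([&](const auto &Event) { /* handle Event */ });`.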
18:29 < bridge> I am kinda annoyed typing my pw when calling sudo commands lmao 18:29 < bridge> it wont work trust me 18:29 < bridge> but the converter is called twmap 18:29 < bridge> I have passwordless sudo on servers 18:30 < bridge> https://gitlab.com/Patiga/twmap 18:30 < bridge> 18:30 < bridge> good luck xd 18:30 < bridge> the converter can only do the things the editor does 18:31 < bridge> you won't be able to get new features out of ddnet using the converter 18:31 < bridge> https://rosenzweig.io/blog/vk13-on-the-m1-in-1-month.html 18:33 < bridge> @jupeyy_keks huh, did you know https://www.collabora.com/news-and-blog/news-and-events/introducing-nvk.html ? 18:34 < bridge> yeah nice how all those vk drivers pop up. I hope they can match performance of the closed source drivers at any time. 18:35 < bridge> KoG's Authed servers were supposed to be filtered by that (even tho it has no effect right now because KoG is missing a flag :D) 18:38 < bridge> He probably means to convert the map to json to change all occurrences of a switch tile to smth else or just just switch up some numbers 18:38 < bridge> He probably means to convert the map to json to change all occurrences of a switch tile to smth else or just switch up some numbers 18:39 < bridge> I don't think so, see above 18:39 < bridge> Oh I didn't see that 18:39 < bridge> mb 18:40 < bridge> If you don't care about surroundings you can just use teleporters to teleport you to the position the quad is at 18:40 < bridge> It would be nice feature anyway if someone can do it 18:42 < bridge> Issue is freeze switch tiles does not give visual experience to player 18:42 < bridge> What? 18:42 < bridge> Ohh 18:43 < bridge> Bad lang lvl sry 18:43 < bridge> almost forgot to remind of \#8434 and \#8404 18:43 < bridge> You mean to allow a passage of a map with a switcher 18:44 < bridge> You can always use a bunch of doors I guess xdd 18:45 < bridge> Ye but if I want to use deepfrz or tele tiles instead? 18:45 < bridge> If player activates the switcher tele tiles not hiding 18:46 < bridge> If player activates the switcher deepfrz tiles not hiding 18:46 < bridge> There's still visible and just confusing 19:56 < bridge> I don't know how to check that with the actual binary. According to quick-bench there's only little difference with GCC O1/O2. With clang 12.0, the std::function variant is faster, but with clang 17.0 the one without is faster. https://quick-bench.com/q/6oE7qyBNqcaSV_Vio2VY-Z4yrPE 19:57 < bridge> That sounds absurd 😄 19:57 < bridge> yeah, seems like measurement error with this benchmark tool 19:58 < bridge> Or perhaps compilers just evolved beyond my understanding, that's also possible 19:58 < bridge> is it contructing std::functions on the fly? 19:58 < bridge> or is it just calling them? 19:58 < bridge> If the std functions are getting properly optimized out, I guess it can just be measurement error and all equal 20:00 < bridge> i assume calling 20:00 < bridge> then it might do a heap allocation 20:00 < bridge> i assume constructing 20:00 < bridge> calling it can be optimized to a certain degree.. 
with loop unrolling
20:09 < bridge> https://godbolt.org/z/5z5YW9WxT mh, the stdfun does generate an extra layer of indirection, but in the specific case we have in the code I think we capture, and capturing lambdas I think always entail a heap allocation (though idk if the compiler can see through that too)
20:10 < bridge> eh, idk, I'd be more comfortable with a template there, but it's not very important I guess, compilers do seem to be smarter with `std::function` nowadays
22:14 < bridge> only a heap allocation wrt. std::function, not in the template
22:24 < bridge> Those people will always exist xD
22:24 < bridge> https://github.com/Dealerik/F-DDrace/commit/9003e31a72251cf0a93348587bafbc4377e7fff6
22:28 < bridge> maybe you could relicense F-DDRace to a stricter license, the strictest would probably be AGPL
22:29 < bridge> Yes, for the template no mandatory heap allocation
22:30 < bridge> What would that mean?
22:30 < bridge> would it really help tho?
22:30 < bridge> 
22:30 < bridge> i guess this is kinda always legit to do, since his source is public and the license wasn't changed
22:30 < bridge> AGPL means that everyone who hosts the mod must publish their source code
22:31 < bridge> there are licenses that require attribution
22:31 < bridge> "F-DDrace is a modification of Teeworlds, developed by fokkonaut."
22:31 < bridge> on the github
22:31 < bridge> Usually not very specific about where the attribution is though
22:31 < bridge> Mh
22:32 < bridge> I thought there were some where attribution is required in the end product
22:32 < bridge> That is hard to put into legalese too. Is just a string in the binary enough?
22:33 < bridge> no. say it must be attribution to the people interacting with the software
22:34 < bridge> apparently you can add extra terms
22:34 < bridge> Does it have to be displayed or only if the user wants to see it? You could have it behind a weird command
22:34 < bridge> AGPL has an extra section for such things
22:34 < bridge> there u can require that the string is never removed or changed
22:35 < bridge> in the same way or more prominent than in the original
22:37 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1248012803936354304/message.txt?ex=66621dff&is=6660cc7f&hm=56330a66748e4abc9ba47cfdd663573e25d476f34bb8f88002286a2bbc27435f&
22:37 < bridge> https://youtu.be/IS5ycm7VfXg this is pretty cool
22:42 < bridge> influencers in 10 years:
22:42 < bridge> homemade rtx 5090
23:06 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1248020186611716196/bildo.png?ex=666224df&is=6660d35f&hm=a7a9e1bd00ed4373dcb471e98cac6027ed7d86735894a3c41baf49b1c4822fb8&
23:06 < bridge> well
23:07 < bridge> You just need to do the same size commits 7 times more
23:14 < bridge> you want me to make 7 more translations?!