04:24 < bridge> why are skins case sensitive
05:41 < bridge> https://wiki.gentoo.org/wiki/FAQ#Things_are_really_unstable_when_using_.27-O9_-ffast-math_-fomit-frame-pointer.27_optimizations._What_gives.3F
05:41 < bridge> lol
05:43 < bridge> hi
05:48 < bridge> "-O9" :sweating:
05:49 < bridge> at what O level does the compiler ask an LLM to optimize your code
05:49 < bridge> xd
05:49 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1450349202306498570/image.png?ex=69423674&is=6940e4f4&hm=c385c18dc4979255e2b381c18f3871ae75a4bf6bc5a67a0a350b8760a01d63db&
06:35 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1450360781228085258/Image_1765863345966.jpg?ex=6942413c&is=6940efbc&hm=c089661d100cce58489fc225f359543441ba0e2c2bd52e520b4a196f87288787&
08:05 < bridge> -O-1 πŸ«ƒ
08:23 < bridge> @totar make -O9 a reality
08:24 < bridge> compiler ships a new PC to your house
08:24 < bridge> requires credit card
08:24 < bridge> requires credit card flag
08:24 < bridge> i trust llvm with my card details
08:25 < bridge> @totar btw any ideas on how to improve language models?
08:25 < bridge> no?...
08:26 < bridge> no?
08:26 < bridge> I've never made an LLM
08:26 < bridge> why would I know anything
08:26 < bridge> transformer models are stupid
08:27 < bridge> idk they seem pretty good
08:27 < bridge> people seem to throw transformers at everything now
08:27 < bridge> they just wiggle your data so it kind of looks like what you inputted.
08:28 < bridge> We need some other algorithm that preserves all data.
08:28 < bridge> i don't think bigger transformer models are the future
08:28 < bridge> isn't the point that they learn something?
08:29 < bridge> yea but right now they just kinda clump their whole training data together no?
it's not like it knows exactly what it learned
08:30 < bridge> idk I am not qualified to explain how it works
08:30 < bridge> it's just trained to respond correctly
08:30 < bridge> they are much more complicated than regular MLPs
08:31 < bridge> I'm not either, that's probably why i think it's tremendously stupid right now.
08:33 < bridge> but you definitely can't extract all of the training data from the trained weights.
08:33 < bridge> it's so lossy
08:48 < bridge> ppl definitely agree current llm architecture isn't scaling
08:49 < bridge> you can't really losslessly represent the entire internet and all of humanity's knowledge
08:49 < bridge> at least not fast enough in a single model
08:50 < bridge> says who?
08:50 < bridge> how would u do that
08:50 < bridge> i mean it physically has to fit in memory
08:51 < bridge> it's like 100 PB of raw text at most
08:52 < bridge> probably less no, at least the information ppl care abt
08:52 < bridge> yea
08:54 < bridge> Idk im not a super huge ai knower but i'm pretty sure a lot of ppl in the field say transformers aren't scaling as they used to
08:54 < bridge> and some big research needs to be done for the next step in intelligence
09:10 < bridge> transformers are scaling with log(log(n)), meaning you need **lots** of data; for it to learn more you need even more data. it's scaling, but badly
09:10 < bridge> they scale logarithmically, which is still scaling, it's just slow
09:10 < bridge> also they ran out of text, so they can't scale it any more
09:13 < bridge> I like your funny words, magic man
11:08 < bridge> Did we have any other client bugs which could be used for advanced server-side effects?
11:09 < bridge> The only one I know is how the InfClass class chooser worked.
11:09 < bridge> you mean bug abuse by mods?
11:09 < bridge> Although, it has been fixed in 0.7
11:09 < bridge> you mean bug abuse by game modes?
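[editor's note on the transformer-scaling messages at 09:10 above: published neural scaling laws are usually fitted as power laws in data size rather than log(log(n)), but either way each doubling of data buys a smaller loss improvement than the last. A toy sketch with invented constants, not real fitted values:]

```python
# Toy illustration of diminishing returns when scaling training data.
# Chinchilla-style power law L(D) = E + B / D**beta with MADE-UP constants;
# real fitted values differ per model family.

def loss(tokens: float, E: float = 1.7, B: float = 410.0, beta: float = 0.28) -> float:
    """Predicted loss after training on `tokens` tokens (illustrative only)."""
    return E + B / tokens**beta

# How much does each doubling of the data improve the loss?
gains = []
d = 1e9  # start at 1B tokens
for _ in range(5):
    gains.append(loss(d) - loss(2 * d))
    d *= 2

# Each doubling improves loss by a constant factor (2**-beta) less than the
# previous one, so gains shrink geometrically: "scaling, but badly".
```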
11:09 < bridge> No
11:09 < bridge> Oh yeah
11:09 < bridge> sorry
11:10 < bridge> yeah it's hard to grasp - animations are synced to the client by faking the server time in some mods
11:10 < bridge> this is how moving blocks are implemented
11:10 < bridge> and this was a topic, because it was hitting a bug between synced/unsynced envelopes
11:12 < bridge> I still think we should add new features for mods to create the same effects, so that we can fix those bugs.
11:12 < bridge> Headshot's Teeware uses the same logic as what InfClass did.
11:12 < bridge> there is a full PR about envelope trigger tiles from me, moving tiles would be the next step
11:14 < bridge> And that logic could be replaced with CNetMsg_Sv_MapSoundGlobal.
11:14 < bridge> That's great.
11:14 < bridge> Another "bug" that one of my zombie mods abused - you can send sounds as the server
11:15 < bridge> but you have no control over its volume - so I just send the same sound 100 times and it's awfully loud
11:15 < bridge> :thonk:
11:15 < bridge> exploding zombies literally scare people, ask Teero πŸ’₯ πŸ˜„
11:15 < bridge> Yeah
11:17 < bridge> The first time I got exploded by it, I had to take off my headphones.
11:17 < bridge> xD
11:17 < bridge> :frozen:
11:17 < bridge> I don't know why I have so much fun with this - maybe I am just a sadist
11:23 < bridge> What?
11:25 < bridge> The friend list will be highlighted when you select the server that your friends are playing on.
11:25 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1450433595167604757/image.png?ex=6942850c&is=6941338c&hm=fbe2124e76b3c612a995df9e1d684be0704fdf8f431dae5bac5d12a0f9187367&
11:25 < bridge> I had never noticed that before.
11:32 < bridge> New feature
11:36 < bridge> Well the entire point of machine learning is to compress knowledge into an intermediate, lossy representation. The amazing thing about the lossy representation is that it can fill gaps, so it actually generalizes knowledge.
The lossy part is actually very essential and intended. Your idea of lossless things would not be machine learning, but classical algorithms afaict.
11:37 < bridge> yes, machine learning is unreliable
11:37 < bridge> that answer
11:39 < bridge> in a way the fact that it's forced to compress the training data is the only reason it works. Technically a human could learn from something without losing any of the original information.
11:40 < bridge> exactly, the lossy part is essential to the design
11:41 < bridge> I'm not sure if humans have that capability tbh, we also don't remember everything 100% but condense learnings into concepts in our head
11:42 < bridge> that's a very very hard question to answer, we don't know the storage capacity of a human - but since we live in a physical world there must be one
11:42 < bridge> well, compression and loss of data are different things. You could memorize an entire book word for word, and you'd still be able to learn and generalize the concepts in the book.
12:48 < bridge> I think with the current largest LLM models there might be some overfitting going on. Without external monitoring to stop them from infringing copyright, many of them have been shown to reproduce their training data exactly
12:52 < bridge> Problem is that what appears incompressible in the information-theory sense actually compresses quite well for something like an LLM. Assuming you write proper English, there are only so many words that make sense after a given set of words
12:52 < bridge>
12:52 < bridge> Similarly for programming languages, even more so actually, given most of them are mostly context-free.
12:52 < bridge>
12:52 < bridge> It's a weird compression algorithm of sorts
12:52 < bridge> <0xdeen> I still hope for LLMs to win this fight against copyright
12:53 < bridge> I hope the world's copyright laws change, because they're currently broken
12:53 < bridge> Really? I feel the opposite. They trained these things on material they do not own.
Did not compensate anyone. Are now making billions with it
12:53 < bridge> It just feels wrong
12:54 < bridge> It also feels wrong that I am not allowed to play music over the content I am creating
12:54 < bridge> The overall concept of copyright is something that can be discussed, but just allowing LLM vendors and LLM users to get a loophole sounds extra wrong to me
12:54 < bridge> like I don't care, monetize it and give them a share on youtube for example
13:25 < bridge> But also investing a lot of money into other companies, most probably Nvidia, where they start to hire more people to work on gpus for llm.
13:25 < chillerbot12> You can see donors and how to donate here https://ddnet.org/funding/
13:27 < bridge> So what? Do I get a pass to steal from you if I invest it into ddnet?
13:27 < bridge> Yes
13:32 < bridge> if you don't let them violate copyright then they will just move to a country where they can, because it's never been more useful to do so, so the laws will probably change to allow LLMs regardless of whether they're currently legal or not
13:40 < bridge> well, the laws will probably depend on who stands to benefit the most; if AI is going to eliminate jobs in certain countries and there are no AI companies there, then they will probably try to mitigate it
15:56 < bridge> thanks capitalism and globalism
16:40 < bridge> yoyo can a second maintainer give their opinion on this: #11409?
16:40 < bridge> https://github.com/ddnet/ddnet/issues/11409
16:40 < bridge> good bot
16:40 < bridge> :heartw:
17:11 < bridge> @swarfey: don't use teams in the first place
17:11 < bridge> Always bad
17:14 < bridge> i'd usually agree but 63 player teams cancel out the bad
17:15 < bridge> Not really
17:16 < bridge> If you want to do a run with the entire server just use t0
17:21 < bridge> i thought you were trolling.
if u use t0, half of the server will just reset on every death and u have the same experience you get on every t0 server
17:22 < bridge> i think it's a cool thing (for example for streamers) to be able to have big teams
17:23 < bridge> without being griefed right before finish. especially if the change doesn't interfere with the usual flow of teams
18:02 < bridge> any merges today?
18:02 < bridge> past week only roby's prs got merged
18:05 < bridge> Christmas time always has lower merge rates
18:08 < bridge> wdym?
18:14 < ws-client> **** i wonder if we can get any benefit from preallocating snapshot storage instead of reallocating it all the time https://github.com/ddnet-insta/ddnet-insta/issues/267#issuecomment-3661522840
18:15 < ws-client> **** the snapshot has maximum sizes anyways and yet we reallocate it multiple times a second
18:16 < ws-client> **** @kebscs i am planning to merge your remove-super pr after you fixed it :p
18:16 < bridge> sometimes there's little space
18:16 < ws-client> **** @heinrich5991 yea but i'd rather find out at server start if i have too little space
18:16 < ws-client> **** not OOM while there are players playing
18:17 < ws-client> **** nasa coding style
18:17 < ws-client> **** also allocations aren't cheap
18:17 < ws-client> **** after 1 week of running a ddnet based server this seems to be the only function that shows up in heaptrack
18:17 < ws-client> **** i find this worth looking into
18:19 < ws-client> **** @swarfey yes i agree the tendency to suicide in teeworlds is worrying but i don't think teams are the solution
18:19 < ws-client> **** if you want to race with the streamer just don't kill yourself. Ppl will learn that they get left behind if they teleport themselves to spawn all the time
18:21 < ws-client> **** we should just increase the respawn delay to 10 minutes
18:21 < ws-client> **** ok that was troll now ^
18:25 < bridge> weren't there a lot of issues with them?
i remember the demos made with tee-hee were often missing a bunch of stuff and were throwing a bunch of errors. maybe that was fixed, i haven't used the tool in quite some time.
18:42 < bridge> <01000111g> yes, it's not that great
18:43 < bridge> i like this image.
18:43 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1450543830028910732/RDT_20251123_225743722770846822415755.jpg?ex=6942ebb6&is=69419a36&hm=f82bbfec014723bb13df7f9f9df61deeff0609f6fdb022ce75e5639081f95914&
18:44 < bridge> who's the person on the left?
18:47 < bridge> https://lain.fandom.com/wiki/Lain_Iwakura
18:47 < bridge> go watch the anime
18:47 < bridge> yeah that's the kind of thing that sticks around on my list for a few years before I finally watch it :P
18:47 < bridge> already has
18:50 < bridge> > Lain is revealed to be the product of the human collective unconscious that took form in the Wired.
18:50 < bridge> ChillerDragon: was an answer to an earlier message
19:28 < bridge> ddnet developers try not to add/remove/update/change formatters/formats challenge, impossible difficulty
19:29 < bridge> the PR also currently enables **ALL** lints that ruff supports, by default
19:29 < bridge> and only blacklists afterwards
19:29 < bridge> not a good idea
19:32 < bridge> This is my practice^^
19:36 < bridge> > I thought the quibbles from Robyt would be the most unpleasant, but it turned out that they are much more unpleasant from heinrich5991 :/
19:36 < bridge>
19:36 < bridge> @byfox and now you know why I told you to only do it with language files first
19:39 < bridge> And thank you very much for that^^
20:27 < bridge> Can someone fix labels on this issue?
#11423
20:27 < bridge> https://github.com/ddnet/ddnet/issues/11423
20:30 < bridge> I am not sure yet if this is a bug - but it should be server and not client
20:34 < bridge> Well the fact that you get remuted is kinda okay, but all-chat being spammed is a problem.
20:37 < bridge> I'm not even sure if players should be muted for spamming those commands, maybe we should rate limit them?
20:54 < bridge> Rate limiting commands like `/team` seems more correct. Muting doesn't make sense to me because mute doesn't actually block the commands.
20:55 < bridge> But there should be some rate limiting so you can't generate chat messages too quickly
20:57 < bridge> Yeah it doesn't block them at all
21:04 < bridge> Surprisingly I don't mind most of the changes this ruff tool makes, except for the weird obsession with `Path.open`
22:06 < bridge> We are missing a linter badly here
22:29 < bridge> "badly" is very speculative. I'd argue everything worked just fine with the in-brain linter
22:29 < bridge> it's more a nice-to-have than an urgent need πŸ˜„
22:35 < bridge> is that terry davis
22:53 < bridge> The files currently follow no standard and are a pain to work with. I want to work on them but this stops me
23:37 < bridge> https://discord.gg/cQC8QxjTgA
23:54 < bridge> finally fixed my demo library
23:54 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1450622253598244987/image.png?ex=694334c0&is=6941e340&hm=135acdf36114853b63c37596cdaab3a49d2766c3f671b117cd1bfdafc55aa1ef&
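[editor's note on the rate-limiting discussion at 20:37-20:57 above: a token bucket is a common fit for throttling chat commands like `/team` - short bursts are allowed, sustained spam is dropped instead of triggering a mute. A minimal sketch, not ddnet's actual implementation; the class and parameter names are invented:]

```python
# Minimal token-bucket rate limiter for chat commands (illustrative sketch;
# NOT ddnet's actual code - names and parameters are invented for this example).

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity            # max burst size
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity              # start full
        self.last = 0.0                     # timestamp of last check

    def allow(self, now: float) -> bool:
        """Return True if one command may run at time `now` (seconds)."""
        # Refill proportionally to elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # throttled: drop the command instead of muting

# Allow a burst of 3 /team commands, then roughly one every 2 seconds.
bucket = TokenBucket(capacity=3, refill_per_sec=0.5)
results = [bucket.allow(t * 0.1) for t in range(6)]  # 6 commands within 0.5s
```

[dropped commands would generate no all-chat message at all, which also avoids the remute/spam loop mentioned above.]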