00:41 < bridge> !help
03:33 < bridge> Discord had some issue with germans
03:33 < bridge> A ton of germans got flagged and accounts got disabled, including mine
03:34 < bridge> https://www.reddit.com/r/discordapp/s/hkNuyeLil1
03:34 < bridge> :kek:
04:20 < bridge> how to force vote in f2? i don't understand the syntax
04:56 < bridge> Common discord L
07:06 < bridge> `vote yes` or `vote no`
07:54 < bridge> @tsfreddie are we still sure we'd want to use a player flag for something that might as well be a netmsg? Or is there a reason as to why this won't work
07:54 < bridge> ask heinrich actually
07:55 < bridge> i don't know either
07:55 < bridge> :justatest:
07:55 < bridge> Didn't hear from Heinrich in a while :NotLikeKogasa:
07:55 < bridge> https://github.com/ddnet/ddnet/pull/9357#issuecomment-2546832457
07:56 < bridge> oh! I see
07:56 < bridge> the compatibility thing i get, but i think adding more playerflags might be as easy as unclamping it in the protocol; it was never a byte, it is just a varint that gets clamped, as far as i can understand.
07:56 < bridge> haven't tried it tho
07:57 < bridge> i mean i can try it right now one sec
08:01 < bridge> seems fine to me
08:01 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1319560321437798454/image.png?ex=676667d0&is=67651650&hm=08621cca61099ff3df87ef2235b1783c4cf684cd772ad4abfc88b48edf4876a6&
08:02 < bridge> a version check to make sure you don't send out-of-bound flags to a server with a lower version and we are good to go
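A minimal sketch of the version-gating idea described above, for readers following the discussion. The flag bit, the version constant and the function name are hypothetical placeholders for illustration, not the code from PR 9357.

```cpp
#include <cstdio>

// Hypothetical flag bit outside the range old servers accept.
constexpr int PLAYERFLAG_NEW_FEATURE = 1 << 8;
// Hypothetical: first server version that accepts the unclamped varint.
constexpr int VERSION_EXTENDED_PLAYERFLAGS = 18000;

// Only set bits beyond the legacy range when the server announced a version
// that understands them, so older servers never see out-of-bound values.
int BuildPlayerFlags(int LegacyFlags, int ServerVersion)
{
	int Flags = LegacyFlags;
	if(ServerVersion >= VERSION_EXTENDED_PLAYERFLAGS)
		Flags |= PLAYERFLAG_NEW_FEATURE;
	return Flags;
}

int main()
{
	printf("old server: 0x%x\n", BuildPlayerFlags(0x1, 17000)); // 0x1
	printf("new server: 0x%x\n", BuildPlayerFlags(0x1, 18000)); // 0x101
}
```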
09:07 < bridge> my account got deactivated for "suspicious activity" and I got a mail from discord that "I got hacked". 2FA for the win
09:07 < bridge> also I know this has to be bullshit
09:36 < bridge> maybe discord got hacked and they won't acknowledge it
09:38 < bridge> no company does, but they have to by law. But first they need to collect info about what was hacked and to what degree
09:39 < bridge> this can range from the "secretary's iPad" to "all of our infrastructure can be thrown away"
09:53 < ws-client> https://github.com/ddnet/ddnet/blob/b52f1a41a5890fca6da3a877221f7dce69eec60c/src/engine/shared/network.cpp#L361
09:54 < ws-client> does someone know why we shift by 2 and not 4 here?
09:54 < ws-client> according to the libtw2 docs a vital chunk header consists of 3 bytes and the sequence number takes the full third byte and half (4 bits) of the second byte
09:54 < ws-client> https://github.com/heinrich5991/libtw2/blob/master/doc/packet.md
09:58 < ws-client> the mask is a bit hard to read in ddnet ``pData[1] |= (m_Sequence >> 2) & (~((1 << Split) - 1));``
09:58 < ws-client> in teeworlds 0.6 it's just ``pData[1] |= (m_Sequence>>2)&0xf0;`` and ``0xf0`` in binary is ``0b11110000`` so you can clearly see the 4 bit mask
09:59 < ws-client> so shouldn't it also be a 4 bit shift?
10:00 < ws-client> imo sequence numbers up to 255 should only be written to the last byte and only 256 and higher should start leaking into the 2nd byte, but from what i can tell the sequence number 64 already starts writing into the second byte
10:14 < ws-client> okay, looking at some traffic my ddnet client sent during a 0.6 connection it seems like it actually does not pollute the 2nd byte before it reaches 256, but how
10:14 < ws-client> https://zillyhuhn.com/cs/.1734685973.png
10:17 < ws-client> @heinrich5991 also in the dissector it says ``00.. .... 1111 1111`` for the sequence number. Shouldn't it be ``0000 .... 1111 1111``? First 4 then 8?
10:17 < ws-client> https://zillyhuhn.com/cs/.1734686251.png
10:25 < ws-client> oh no, that makes no sense: the size is not decimal 49 just because the second byte is 0x31, the size also leaked into the first byte because it is bigger than 4 bits (15)
10:26 < ws-client> the only flag bit that is set is vital, so the first byte of the header should be 0x40, but it is 0x43 (0b01000011), so the last 2 bits are size. okay, now i understand nothing anymore
10:30 < ws-client> that shit is more encryption than compression ngl
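A small standalone sketch comparing the two masks quoted above and showing where the sequence bits land. It assumes ``Split == 4`` for 0.6 connections and that the third header byte carries ``m_Sequence & 0xff``; both are inferred from the quoted 0.6 code and the libtw2 doc rather than verified here.

```cpp
#include <cassert>
#include <cstdio>

int main()
{
	const int Split = 4; // assumed value for 0.6 connections

	// With Split == 4 the DDNet mask and the teeworlds 0.6 mask are identical.
	for(int Seq = 0; Seq < (1 << 10); Seq++) // sequence numbers are 10 bits
	{
		unsigned char DDNetByte = ((Seq >> 2) & ~((1 << Split) - 1)) & 0xff; // (m_Sequence >> 2) & (~((1 << Split) - 1))
		unsigned char VanillaByte = (Seq >> 2) & 0xf0;                       // (m_Sequence >> 2) & 0xf0
		assert(DDNetByte == VanillaByte);
	}

	// Why >>2 and not >>4: the top nibble of byte 1 ends up holding sequence
	// bits 9..6 while byte 2 holds bits 7..0, so bits 7 and 6 are stored twice.
	// That is why sequence 64 (bit 6) already shows up in the second byte.
	int Seq = 64;
	printf("byte1 high nibble: 0x%02x, byte2: 0x%02x\n", (Seq >> 2) & 0xf0, Seq & 0xff);
}
```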
11:22 < bridge> the line here can be extremely blurry, I know a nintendo puzzle where they encrypted some data with bit operations and your task was to write a decoder
11:25 < bridge> More encoding than encryption, after all I'm not seeing very many large primes or elliptic curves
12:58 < bridge> quantum computer doesn't care about either :troll:
14:17 < bridge> what do you need this for?
14:18 < bridge> you can always printf whatever you want, but the fact that you don't know this and directly want it for a projectile is a bit sus to me
14:22 < bridge> oh i just found this file, my original goal was just to study the laser code, i thought maybe something cool could be made out of it, but before that i tried to change the HUD, it's pretty cool to do that
14:22 < bridge> There were no problems with the HUD, but with the laser I had my first problems understanding the code
14:22 < bridge> laser.cpp is really mostly the laser physics
14:22 < bridge> if you want to change the rendering, then the file is called items.cpp or smth like that
14:23 < bridge> Why change the rendering? Will something change?
14:23 < bridge> I mean what part of the laser do you want to change?
14:23 < bridge> what do you want in the first place?
14:24 < bridge> Generally speaking you probably won't have an easy time understanding the laser in full detail.
14:24 < bridge>
14:24 < bridge> The DoBounce method is the most interesting one
14:24 < bridge> just output information about the laser to understand the code: things like direction, energy, bounces, etc.
14:25 < bridge> then simply print what you need in the tick function
14:25 < bridge> I tried to understand it through chat gpt, and I was very surprised by how the laser works)
14:25 < bridge> but if you already know about direction, energy and bounces, the variables are most likely also named similarly to that
14:26 < bridge> yeah lasers are a bit hacky, because they need to stay alive while the client renders them, so they don't directly despawn when they are gone/finished.
14:26 < bridge>
14:26 < bridge> and if you look in the ddrace code, you see all the 2 trillion extra cases for laser doors, tele lasers etc
14:27 < bridge> which makes it even harder to understand the whole code
14:27 < bridge> maybe first try:
14:27 < bridge> https://github.com/teeworlds/teeworlds/blob/c56fa9e6a20cfc9d7d16502e18c7d7633acdf492/src/game/server/entities/laser.cpp
14:29 < bridge> doors are extra code btw, so luckily you don't have to deal with that 😂
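For the debug-printing suggestion above, something like the snippet below dropped into ``CLaser::DoBounce()`` (or the tick function) in src/game/server/entities/laser.cpp is usually enough. The member names are the ones used in the vanilla laser.cpp linked above; double-check them against the file you are actually editing.

```cpp
void CLaser::DoBounce()
{
	// ... existing bounce logic stays as it is ...

	// Debug output while studying the code: position, direction, remaining
	// energy and how many times the laser has bounced so far.
	dbg_msg("laser", "pos=(%.1f %.1f) dir=(%.2f %.2f) energy=%.1f bounces=%d",
		m_Pos.x, m_Pos.y, m_Dir.x, m_Dir.y, m_Energy, m_Bounces);
}
```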
14:42 < bridge> huhu
14:42 < bridge> hai
14:42 < bridge> <_voxeldoesart> ho ho ho
14:42 < bridge> zanta
14:46 < bridge> zaza
17:27 < bridge> hi
17:45 < bridge> @tsfreddie can you give me the static clang-format 10 link you mentioned heinrich sent? i can't find it and i want to add it to the readme, new contributors often forget
17:46 < bridge> <_qey> I thought he was Chinese
17:46 < bridge> and? :kek:
17:46 < bridge> he a silly billy
17:46 < bridge> <_qey> Chinese from Europe
17:47 < bridge> <_qey> Maybe we shouldn’t combat blockers on non-blocker servers, maybe we should create more blocker maps? I’m tired of banning those idiots
18:03 < bridge> @askll_star this could be helpful for you :P
18:03 < bridge>
18:03 < bridge> our fix_style script is a bully
18:10 < bridge> @ml1xx I've tried. It doesn't give me any warning.
19:42 < bridge> Now it's clear. This message is visible only to you (yourself). I'll change my nickname, okay? plz don't ban me...
23:08 < bridge> https://en.wikipedia.org/wiki/Brussels_effect
23:17 < bridge> New openai model has a 2700 codeforces rating... it's so over
23:17 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1319790862951121006/20241220_161728.jpg?ex=67673e86&is=6765ed06&hm=96e77f25a252a7dc4548b56c69974301610614b9364a27c623e0e641925dc7b4&
23:18 < bridge> I don't see how anyone will be writing code by hand in 8 years
23:19 < bridge> how big are the code tasks? 😄
23:19 < bridge> 2700 is top 0.01%
23:20 < bridge> Rank 150
23:20 < bridge> Hey, more bugs to fix for us who do write by hand :D
23:20 < bridge> :pepeW:
23:20 < bridge> but is that like codewars?
23:20 < bridge> It doesn't matter, it crushes every benchmark
23:20 < bridge> yeah but software engineering isn't about _small_ algorithms
23:21 < bridge> from what I've read about AI development it writes a ton of code that LGTM 👍 but actually has a bunch of bugs
23:21 < bridge> SWE-bench, the one to the left, is about big problems
23:22 < bridge> It went from 1900 rating to 2700 rating in 3 months bro
23:22 < bridge> It's over
23:22 < bridge> as long as AI can't code teeworlds on its own, it's not AI xd
23:23 < bridge> How does this rating work? Is it the number of problems solved correctly or does it take into account the performance of the solutions?
23:23 < bridge> I promise you're underestimating how much can fit in the context window of the new models
23:24 < bridge> This isn't the same thing as it was 6 months ago
23:24 < bridge> Can you get a high Elo by solving just a few problems or do you need to solve a lot of them? I'm willing to bet that the 99.9% of users below that rating haven't actually solved most of the problems on the platform simply because they don't want to put in the hours
23:24 < bridge> I could dedicate my life to it and still not achieve a 2700 rating
23:25 < bridge> An IOI gold medal is a 2400 rating
23:25 < bridge> When a metric becomes a target it ceases to be a good metric
23:25 < bridge> i am open to being replaced by an AI.
23:25 < bridge>
23:25 < bridge> But I am not really a too-good-to-be-true guy
23:25 < bridge> coding often means re-iterating over existing code
23:25 < bridge> and the current GPT models are not designed for such tasks
23:26 < bridge> You guys are missing the point, it's no longer about if it can do everything a software dev can, it's about when. The science is over, now it's just money and engineering.
23:26 < bridge> I am currently not sure if you are serious xD
23:27 < bridge> I am totally serious, 2700 cf is absurd
23:27 < bridge> Also a robot being able to solve well-established problems is not much more impressive than a robot rephrasing established literature. These things will still choke on novel problems because they lack the proper ability to reason.
23:28 < bridge> chatgpt would probably easily beat me in A-Level exams..
23:28 < bridge>
23:28 < bridge> I'd still say it's stupid af xD
23:28 < bridge> ok I'll trust you on that
23:28 < bridge> but getting an absurd score on a bunch of microbenchmarks is not super relevant...
23:28 < bridge> I'm not going to pretend to be the chess players in 1960 who said a computer can never beat them at chess because it cannot beat the top 100 players
23:28 < bridge> especially as it has probably read multiple articles about each of those problems
23:28 < bridge> I 100% believe AI is possible
23:28 < bridge> these things have basically the entire internet in their training data
23:29 < bridge> But the way the current models work isn't really good for logical tasks
23:29 < bridge> I absolutely believe proper intelligent AI is possible, but I do NOT believe we will reach anything like that by simply scaling up existing language models
23:29 < bridge> The current models you've used get 600 on codeforces or worse, probably
23:29 < bridge> I haven't used any models hehe
23:30 < bridge> How am I such a grumpy old man when it comes to computers? I've been like this basically since I've had a computer lol
23:30 < bridge> i guess i'm late since you already got the pip thing
23:30 < bridge> This is not a microbenchmark, codeforces was never intended to be an AI benchmark, it's competitive coding for humans
23:31 < bridge> It existed before chatgpt or LLMs
23:31 < bridge> no I mean competitive programming tasks are all microbenchmarks
23:31 < bridge> it's actually testing the speed of a single algorithm
23:32 < bridge> https://github.com/ddnet/ddnet/pull/7034#issuecomment-1700805235 but nonetheless
23:32 < bridge> At the 2700 level it's not about speed alone
23:32 < bridge> You've never read a 2700 question i promise
23:33 < bridge> Isn't that type of competitive programming *always* about the performance of the solution code?
23:33 < bridge> or, well, assuming you can produce code that solves the problem in the first place
23:33 < bridge> past a point it becomes about solving the problem at all
23:34 < bridge> If you're not good at competitive programming then the questions are about optimizing an easy algorithm
23:34 < bridge> I've competed at a national level but that's not really impressive considering the size of the nation lol
23:35 < bridge> but wouldn't it also be possible that they just have a better training dataset now? could still fall apart as soon as a new problem arises.
23:35 < bridge> but at least I can say that I was one of like five people there who *weren't* from a fancy maths-focused high school
23:35 < bridge> i've heard somewhere that AI benchmarks are basically just dataset cheating. forgot where i heard that tho.
23:35 < bridge> Codeforces rating can only be obtained in live competitions with new questions afaik
23:36 < bridge> yeah these things have read basically the entire internet like three times over
23:36 < bridge> (three hundred?)
23:36 < bridge> (thousand?)
23:36 < bridge> honestly no clue, given how ridiculous the scale of computation is at this point
23:37 < bridge> if there are solutions to these problems online these models have read them
23:37 < bridge> Hi @tsfreddie
23:38 < bridge> but like, how new tho. chinese olympiad trainees can also just do problems 24/7 for 6 years and get through a lot of problems without even understanding what they've learned or how the algorithms are actually used in real applications.
23:39 < bridge> There's nothing they can say that will convince you then, they could just write any number they want on the graph if they wanted. I promise you the researchers making $5 million a year are aware of dataset leakage
23:39 < bridge> prob not 2700 level but still
23:40 < bridge> Hi juppy :heartw:
23:40 < bridge> I mean yeah, it's probably really good at some things, and definitely superhuman at many, but being superhuman at a few development-related tasks doesn't really make you a software developer
23:41 < bridge> You won't get to experience these models first-hand for a couple of years because they are too expensive until improvements to hardware or algorithms are made, so you can either read the benchmarks and believe them or be surprised in 2 years when it turns out they weren't lying
23:41 < bridge> i knew actual people who just memorized tons of snippets and did problems with pattern recognition :justatest:
23:42 < bridge> This just sounds like solving Rubik's cubes
23:42 < bridge> that's how a lot of us can skip chinese college entry exams
23:42 < bridge> i was in the same training camp since middle school
23:43 < bridge> but i failed tho, so mom sent me to the US since i've only done "coding" without going to actual schools for like 5 years
23:43 < bridge> The benchmark next to it is called "software engineer benchmark"; chatgpt 4o gets 6%, o3 gets 71% :cat_tired:
23:44 < bridge> where was o2
23:44 < bridge> Skipped because there's a company called o2
23:44 < bridge> :justatest:
23:45 < bridge> I guess I should just become a carpenter
23:45 < bridge> (mom also tells me that from time to time lol)
23:45 < bridge> The guy who invented the algorithm for o1 is saying this as well
23:45 < bridge> https://x.com/polynoamial/status/1870172996650053653?t=xlrABqsLiH6yLdsyX1hFMw&s=19
23:46 < bridge> o1 having 1891 is pretty cool too
23:46 < bridge> Now the real question is whether it's breaking the established scaling laws or not
23:46 < bridge> too bad i'm still using supermaven, which is usually dumb af if you let it write a thing from scratch
23:47 < bridge> because that'll really tell you whether "this trajectory will continue" or not
23:47 < bridge> You can just make an ASIC and it becomes 1000-2000x faster
23:47 < bridge> do they still do papers for the o models
23:48 < bridge> no but there are scaling laws for quality of output too, everyone kinda expects a plateau
23:48 < bridge> they don't tell you much
23:48 < bridge> too bad
23:48 < bridge> (everyone except the VCs lol)
23:48 < bridge> 1900->2700 doesn't look like a plateau to me
23:49 < bridge> but again, i still believe if you've seen enough you can get good at codeforces without understanding a lot of it
23:49 < bridge> can someone who understands this please help me compile the source code of the game via MSVS? I downloaded it and changed the code that I need, BUT I DON'T UNDERSTAND HOW TO COMPILE?!?!?!?
23:49 < bridge> That's why I'm asking if it's breaking the established scaling laws or not
23:49 < bridge> because that's the deciding thing for me on how I'm viewing this 1900->2700 improvement
23:50 < bridge> Scaling laws only apply to pre-training
23:50 < bridge> This is not that
23:50 < bridge> Scaling laws say nothing about RL training either
23:50 < bridge> is o3 the diffusion-adjacent thing or do they not tell you at all
23:51 < bridge> It's a transformer
23:51 < bridge> As usual
23:51 < bridge> transformers are so blursed
23:51 < bridge> i know but there was a non-linear transformer thing iirc
23:51 < bridge> that doesn't do next-token only
23:52 < bridge> They're not going to tell you anything more detailed than "we trained it to reason"
23:52 < bridge> i guess so
23:53 < bridge> i can see generation quality getting better if the AI doesn't need to cope with already generated nonsense due to bad luck on temperature alone tho
23:53 < bridge> Can you get good at chess without understanding it if you just play thousands of games and memorize the patterns?
23:54 < bridge> oh o3 plays chess now?
23:54 < bridge> that's cool
23:54 < bridge> It has done that since gpt4 turbo
23:55 < bridge> but like is it codeforces level of good
23:55 < bridge> hi from europe
23:55 < bridge> i'm from europe
23:55 < bridge> hi
23:55 < bridge> We already have chess robots better than humans
23:55 < bridge> i'm from europe too
23:55 < bridge> Does it get its ass kicked by Stockfish?
23:55 < bridge> If it was better than humans at chess you would just say it's not impressive because it can't beat chess engines
23:56 < bridge> I literally just said that lol
23:56 < bridge> i doubt stockfish is a transformer
23:56 < bridge> Meanwhile it does your job because your job is not chess
23:56 < bridge> he works for openai btw
23:56 < bridge>
23:56 < bridge> so if what he says is true, and this is as impressive as it sounds, then gg
23:56 < bridge>
23:56 < bridge> else it was just marketing and you fell for it
23:56 < bridge>
23:56 < bridge> xd
23:56 < bridge> I know he works for openai, he is the lead researcher who designed o1
23:57 < bridge> It was his idea
23:57 < bridge> developers please help
23:57 < bridge> how to compile the game source?
23:57 < bridge> why
23:58 < bridge> but yeah I guess I do need to consider carpentry or something if I can't make it in the business without dealing with this garbage
23:58 < bridge> read cpp tutorials and you'll understand
23:58 < bridge> and read this https://github.com/ddnet/ddnet/blob/master/README.md
23:59 < bridge> i mean i'm pretty sure coders are losing jobs regardless of how good AI gets, cuz the business people are pretty much convinced already.
23:59 < bridge> I only posted his message because I trust him more than any of the other openai employees to be honest about his own algorithm