00:10 < bridge> Full-time work on Github is probably cheating?
00:10 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1179922434833535016/Screenshot_2023-12-01_at_00.09.47.png?ex=657b8b66&is=65691666&hm=08f941e5dccaf8caf431c01afce260f54d09ac7a5b354c04955680be3fb94127&
00:16 < ChillerDragon> Is it graph share time?
00:16 < ChillerDragon> https://zillyhuhn.com/cs/.1701386257.png
00:18 < bridge> what's in you guys' opinion the best way to parse json stuff in the client? never did it, wanna do it.
00:18 < ChillerDragon> in the ddnet client?
00:18 < ChillerDragon> there is a json parser already
00:18 < bridge> y
00:18 < ChillerDragon> look at the server browser code for how it's used
00:18 < bridge> aight - ty
02:42 < bridge> man it's already 1st December
04:20 < bridge> I'm implementing some commands, how do I keep them stored in settings_ddnet.cfg?
08:08 < bridge> 10+ / day on average is crazy
08:08 < bridge> very nice grid 😁
09:20 < bridge> FFR
09:35 < bridge> @jupeyy_keks do aoc
09:35 < bridge> https://adventofcode.com/
09:35 < bridge> i got such a beautiful answer for the first one in rust
09:35 < bridge> but damn doing this in asm will be hard
09:36 < bridge> i need a hashmap xddd
09:36 < bridge> it has so much text, i'm too lazy to read it all
09:41 < bridge> ok gpt solved the puzzle.
so not worth my time
09:41 < bridge> 😏
09:44 < bridge> boring
09:45 < bridge> well i need a C guy or girl to do it
09:45 < bridge> so i can beat it
09:46 < bridge> @jupeyy_keks for ffr u can read my blog
09:46 < bridge> 😬
09:46 < bridge> joking, ill look later, its gym time
10:07 < bridge> gm
10:45 < bridge> today i had a reverse deja vu again:
10:45 < bridge> I thought i'd done exactly the same before but came to a different conclusion in the end
11:02 < bridge> xd
11:54 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180099535930920970/image.png?ex=657c3056&is=6569bb56&hm=7291c83854ad470226e7683bc5fb5505416ae3d239852e52e86de07cf1a67cf4&
11:54 < bridge> @deen does ur work squash commits
11:54 < bridge> i dont remember if squashed count as 1 here
11:55 < bridge> chiller probs just does a lot of small commits xd
11:55 < bridge> mine squashes
11:57 < bridge> I can squash, but usually don't. Each commit should make sense in isolation
12:03 < bridge> ic
12:03 < bridge> i think i follow more the philosophy of: a pr itself should make sense in isolation, so inside it i make progress commits
12:03 < bridge> then they get squashed to the final clean version
12:03 < bridge> or rebased
12:04 < bridge> ofc some stuff is too complex
12:04 < bridge> omg a new firefox update
12:04 < bridge> ill do it tomorrow xd
12:04 < bridge> I follow "a repo should make sense in isolation"
12:04 < bridge> u mean a pr
12:05 < bridge> The full repo
12:05 < bridge> xddd
12:10 < bridge> My shower me was super clever today and noticed my current concept is really too overcomplicated and bloated.
Now I'm super demotivated to fix it
12:11 < bridge> @ryozuki you have to do some stuff for ffr xd
12:11 < bridge> I'm out of ideas
12:15 < bridge> xd
12:16 < bridge> https://without.boats/blog/three-problems-of-pinning/
12:17 < bridge> https://mcyoung.xyz/2023/11/27/simd-base64/
12:25 < bridge> https://blog.marcocantu.com/blog/2023-november-turbopascal40.html
12:25 < bridge> xd
12:25 < bridge> Turbo Pascal was introduced by Borland in November 1983. It's officially turning 40 years old this month.
12:27 < bridge> I mean this is kinda rust related considering rust took quite a few concepts from it 😉
12:29 < bridge> Gpt truly ruined this industry forever
12:30 < bridge> I'll switch professions and become a baker
12:31 < bridge> Just like the baker will be useless, the coder will be. So you are free to do what you like
12:33 < bridge> Chatgpt is awful at anything food related
12:33 < bridge> At least for now, maybe someone already has foodgpt in training 😄
12:34 < bridge> This is annoying with `git bisect`, have to use `--first-parent` for that
12:34 < bridge> Ask it what chemicals you need for making a mix that appeals to the human taste xd
12:35 < bridge> My main insight about AoC today was how to make a shebang for a SQL file 😄 https://github.com/def-/adventofcode-2023
12:35 < bridge> Well ai is ofc also trained for real motions at some point
12:35 < bridge> It fails miserably at trying to make anything balanced. It'll get the ingredients right but the ratios and portions will be all over the place
12:35 < bridge> oh lol
12:35 < bridge> 2kg of sugar lesgo
12:36 < bridge> General purpose robots taking over baking is the last thing I'd be worried about. Robotics is by far the jankiest of all tech
12:36 < bridge> true
12:37 < bridge> Software cooler :kek:
12:38 < bridge> But you cant flex to normies with it
12:38 < bridge> Used to be. Now everyone is just using gpt for everything
12:39 < bridge> I dunno. I wouldn't underestimate it.
Nature is basically nothing else than robotics + ai
12:40 < bridge> Nature's way of locomotion has nothing to do with the robotics we currently have access to. Robots have like 1 point in dex while humans have 100
12:41 < bridge> Special purpose robots will definitely take over most jobs but those don't need any ai sauce
12:41 < bridge> Yes but is that really a point against robotics?
12:41 < bridge> It doesn't have the disadvantage of evolution
12:42 < bridge> Yes, the strict limits on their motion are a huge issue. Muscles are far more flexible than servos
12:42 < bridge> That's not directly the point. A kitchen ai must still learn to act differently in different scenarios and normal programming is simply bad at this
12:43 < bridge> Making food in mass production ofc already happens
12:43 < bridge> The cutting edge general purpose robots have 1/10th of a human's output folding a shirt e.g.
12:44 < bridge> I mean that it's not happening tomorrow is clear. But let's say the next 10 years
12:44 < bridge> If you really want to shoehorn ai into it, ai orchestrating a bunch of special purpose robots would probably be the goal, unless a significant leap in robotics is upon us that I completely missed
12:46 < bridge> Sadly these hard coded robots are often not very flexible and thus only useful for very specifically designed tasks.. thats nothing bad, and often probably more efficient.
12:46 < bridge> But the robot that will do your housework will not be this i guess^^
12:47 < bridge> So i guess the interest in these general purpose designs will skyrocket this century
12:47 < bridge> I don't expect any robot to take over my housework completely for at least 2 decades.
Honestly they are faaar too clumsy and slow
12:47 < bridge> And decade
12:48 < bridge> Though yeah it only takes a handful of breakthroughs for that to change
12:48 < bridge> We still have 5 years
12:48 < bridge> Until Detroit: Become Human takes place
12:48 < bridge> Xdd
12:49 < bridge> Sag wait 15 years
12:49 < bridge> 2038
12:49 < bridge> Ah wait 15 years
12:49 < bridge> You seem far too excited for what will probably be a catastrophic event for the working class 😄
12:50 < bridge> It's the only solution against human slavery
12:50 < bridge> i don't see a better one
12:50 < bridge> But yeah. I'm team machine
12:50 < bridge> If you think the people with the money will let us live in some startrek utopia you will be very surprised
12:50 < bridge> Humans have the disadvantage of being a natural being
12:51 < bridge> Too much hate inside us
12:51 < bridge> Hard to predict
12:52 < bridge> Not extremely hard to extrapolate from how the elite has acted in the last 2 millennia
12:53 < bridge> That's true, but if the ai is more intelligent than a human, we don't know what happens anyway.. why should it not be able to rethink whatever it was trained with
12:54 < bridge> They'll live in supercities with their ai robots and we'll live in slums and be used as court jesters
12:54 < bridge> Possible
12:54 < bridge> Maybe you will be one of them
12:54 < bridge> And change it
12:55 < bridge> Not everyone is a psychopath like musk and consorts
12:55 < bridge> Probably. So I'd rather the ai revolution chill for a little bit so I don't have to wear one of those jester outfits
12:56 < bridge> Oh you meant I'd be one of the megarich? I'd just blast myself off to mars so I don't die in the inevitable civil war
12:56 < bridge> Yes why not..
just gotta marry a rich girl xdd
12:57 < bridge> I feel like it's more likely the poor ones will simply be sent to Mars 😂
12:57 < bridge> Who wants to be there xdd
12:57 < bridge> Ooor we can chill with all the AI and I can live a normal life, then you can make all the cool robots after I die
12:58 < bridge> That's actually viable. Just ship all the poors off to some mining camp on another planet. Not that we'd need the ore, just so they are out of the way and have work
12:59 < bridge> Without ai you will lose 50% of your remaining awake life time to your job
12:59 < bridge> @learath2 when aoc
12:59 < bridge> C'est la vie
13:00 < bridge> Well c'est la social contract under capitalism to be more exact
13:00 < bridge> Xd
13:00 < bridge> nice
13:00 < bridge> Oh maybe this year I can follow it to the end. I don't have anything to do
13:01 < bridge> go
13:01 < bridge> do it in C
13:01 < bridge> share it
13:01 < bridge> And I wanted to practice my algs a bit
13:01 < bridge> i bench
13:01 < bridge> I was thinking Python this year
13:01 < bridge> :c
13:01 < bridge> I can do C too sure
13:01 < bridge> does c have btrees in std
13:02 < bridge> the program should load the input from a file
13:02 < bridge> called input.txt
13:02 < bridge> :kekW:
13:02 < bridge> xd
13:03 < bridge> You want btree, you make btree
13:03 < bridge> Honestly, not much of a point benching what I'd make against properly optimized data structures
13:04 < bridge> true
13:04 < bridge> Most proper libraries use hand vectorized code. I can't compete with that every day on every task 😄
13:04 < bridge> i was going to also use an optimized allocator and hashing algo
13:04 < bridge> tryharding
13:04 < bridge> @learath2 how hard is it to use mimalloc in C
13:04 < bridge> well just thinking about linking and stuff already gives me pain
13:05 < bridge> It should just be linking it in and it works
13:05 < bridge> I bet you can even just LD_PRELOAD it like with jemalloc
13:07 < bridge> You don't need python.
Gpt is really good in it xddd
13:07 < bridge> Ok sry i quickly rq
13:07 < bridge> ❯ hyperfine -N -w 5 ./target/release/rust-aoc
13:07 < bridge> Benchmark 1: ./target/release/rust-aoc
13:07 < bridge> Time (mean ± σ): 1.2 ms ± 0.1 ms [User: 1.0 ms, System: 0.2 ms]
13:07 < bridge> Range (min … max): 1.1 ms … 1.7 ms 2654 runs
13:07 < bridge> hmm
13:08 < bridge> 1.1ms sounds acceptable for like anything 😄
13:08 < bridge> Nice, are we doing a competition about speed?
13:08 < bridge> yes
13:08 < bridge> I wonder if gpt can beat ryo
13:08 < bridge> this is not an optimized version
13:08 < bridge> its my first naive solution
13:08 < bridge> but i guess its p optimized
13:09 < bridge> im using a btreemap
13:09 < bridge> let me put everything in proper iterators
13:09 < bridge> i wanna see if it changes
13:09 < bridge> All the optimization juice is in here
13:09 < bridge> At least all the significant part
13:10 < bridge> ```py
def sum_calibration_values(calibration_document):
    total_sum = 0
    for line in calibration_document:
        digits = [char for char in line if char.isdigit()]
        if digits:
            first_digit = digits[0]
            last_digit = digits[-1]
            two_digit_number = int(first_digit + last_digit)
            total_sum += two_digit_number
    return total_sum

# Example usage:
calibration_document = [
    "1abc2",
    "pqr3stu8vwx",
    "a1b2c3d4e5f",
    "treb7uchet"
]

result = sum_calibration_values(calibration_document)
print("Total sum of calibration values:", result)
```
13:11 < bridge> this was from gpt
13:11 < bridge> pls make sure it loads a file called input.txt
13:11 < bridge> i dont even know if it works for non example input xD
13:11 < bridge> and only outputs the result number
13:12 < bridge> do i look like a pyson dev.
13:12 < bridge> gpt is down for me, cant do it
13:13 < bridge> I can't wait for the future where the company just sends everyone home when gpt goes down because no one can code without it anymore
13:13 < bridge> same
13:13 < bridge> and at home i do the real fun stuff
13:14 < bridge> what awesome times those will be
13:15 < bridge> Ur ofc not getting paid for those hours
13:15 < bridge> Imagine thinking capitalists will pay you for nonproductive hours :xDe:
13:16 < bridge> i dont think you understand how that works
13:16 < bridge> if you have a skill nobody has, you can do whatever you want, because you are basically their golden shovel
13:16 < bridge> without you they can't even dig
13:17 < bridge> if you have a job that is very common, then u the real slave
13:17 < bridge> that's where capitalism shows its evilness
13:17 < bridge> my iterator version is slower :o
13:17 < bridge> || ```rust
let result = input.lines().fold(0u32, |acc, line| {
    let indices: BTreeMap<_, _> = numbers
        .keys()
        .flat_map(|x| line.match_indices(*x))
        .collect();

    let first = numbers.get(indices.first_key_value().unwrap().1).unwrap();
    let last = numbers.get(indices.last_key_value().unwrap().1).unwrap();

    acc + first * 10 + last
});
``` ||
13:18 < bridge> A skill in prompting chatgpt is much more replaceable than actually being able to code
13:18 < bridge> || ```rust
let mut result = 0;
let mut indices = BTreeMap::new();
for line in input.lines() {
    for x in numbers.keys() {
        indices.extend(line.match_indices(x));
    }

    let first = numbers.get(indices.first_key_value().unwrap().1).unwrap();
    let last = numbers.get(indices.last_key_value().unwrap().1).unwrap();

    result += first * 10 + last;
    indices.clear();
}
``` ||
13:18 < bridge> this is faster
13:18 < bridge> 1.2ms vs 1.5ms
13:18 < bridge> unsafe on the unwrap changes nothing
13:19 < bridge> Have you run it a couple more times?
13:19 < bridge> correct, but the fact you still have to think is already smth most can't do 😂
13:19 < bridge> hyperfine does that
13:19 < bridge> it ran it 2700 times
13:19 < bridge> Benchmark 1: ./target/release/rust-aoc
13:19 < bridge> Time (mean ± σ): 1.5 ms ± 0.1 ms [User: 1.2 ms, System: 0.2 ms]
13:19 < bridge> Range (min … max): 1.3 ms … 1.9 ms 2245 runs
13:19 < bridge> iterator
13:19 < bridge> Benchmark 1: ./target/release/rust-aoc
13:19 < bridge> Time (mean ± σ): 1.2 ms ± 0.1 ms [User: 1.0 ms, System: 0.2 ms]
13:19 < bridge> Range (min … max): 1.1 ms … 2.6 ms 2706 runs
13:19 < bridge> for loop
13:20 < bridge> Curious, maybe check the assembly generated?
13:20 < bridge> can u send me the input file? without an account i cant seem to access it
13:21 < bridge> i make you the fastest ever
13:21 < bridge> but ur computer specs are different
13:21 < bridge> i give u the code
13:21 < bridge> https://gist.github.com/edg-l/5c65631ed4d9e736187058a955f97556
13:21 < bridge> The fastest ever is just puts of the result
13:21 < bridge> result is 54277
13:22 < bridge> (note it does not work for others cuz each gets their own input)
13:22 < bridge> `puts("54277")` beat this one nerd
13:22 < bridge> xd
13:22 < bridge> why call puts, use a syscall
13:23 < bridge> Portable 😄
13:23 < bridge> using ahash on the numbers hashmap doesnt help
13:24 < bridge> @ryozuki what is the assignment to numbers in your example?
13:24 < bridge> you didnt send full code xd
13:25 < bridge> ```
let mut numbers = HashMap::with_capacity(18);
numbers.insert("one", 1);
numbers.insert("two", 2);
numbers.insert("three", 3);
numbers.insert("four", 4);
numbers.insert("five", 5);
numbers.insert("six", 6);
numbers.insert("seven", 7);
numbers.insert("eight", 8);
numbers.insert("nine", 9);
numbers.insert("1", 1);
numbers.insert("2", 2);
numbers.insert("3", 3);
numbers.insert("4", 4);
numbers.insert("5", 5);
numbers.insert("6", 6);
numbers.insert("7", 7);
numbers.insert("8", 8);
numbers.insert("9", 9);
```
13:25 < bridge> xd
13:25 < bridge> || ```rust
use std::{
    collections::{BTreeMap, HashMap},
    error::Error,
};

use mimalloc::MiMalloc;

#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;

fn main() -> Result<(), Box<dyn Error>> {
    let input = std::fs::read_to_string("input.txt")?;

    let mut numbers = HashMap::with_capacity(18);
    numbers.insert("one", 1);
    numbers.insert("two", 2);
    numbers.insert("three", 3);
    numbers.insert("four", 4);
    numbers.insert("five", 5);
    numbers.insert("six", 6);
    numbers.insert("seven", 7);
    numbers.insert("eight", 8);
    numbers.insert("nine", 9);
    numbers.insert("1", 1);
    numbers.insert("2", 2);
    numbers.insert("3", 3);
    numbers.insert("4", 4);
    numbers.insert("5", 5);
    numbers.insert("6", 6);
    numbers.insert("7", 7);
``` ||
13:26 < bridge> full code
13:26 < bridge> lmao
13:26 < bridge> oh wait
13:26 < bridge> rayon
13:26 < bridge> its threads time
13:27 < bridge> Did you find out why it was slower?
13:28 < bridge> This task is so quick that I would think spinning up a thread and synchronization overhead might make it worse
13:29 < bridge> one thing i noted is
13:29 < bridge> im doing the sum in the fold body
13:29 < bridge> while i can just return the result
13:29 < bridge> and do .sum
13:29 < bridge> ```
let result: u32 = input.lines().par_bridge().fold(|| 0u32, |acc, line| {
    let indices: BTreeMap<_, _> = numbers
        .keys()
        .flat_map(|x| line.match_indices(*x))
        .collect();

    let first = numbers.get(indices.first_key_value().unwrap().1).unwrap();
    let last = numbers.get(indices.last_key_value().unwrap().1).unwrap();

    acc + first * 10 + last
}).sum();
```
13:29 < bridge> this is faster
13:29 < bridge> rayon
13:30 < bridge> 1.1ms
13:30 < bridge> wait
13:30 < bridge> i dont need fold at all
13:30 < bridge> ```rust
let result: u32 = input.lines().par_bridge().map(|line| {
    let indices: BTreeMap<_, _> = numbers
        .keys()
        .flat_map(|x| line.match_indices(*x))
        .collect();

    let first = numbers.get(indices.first_key_value().unwrap().1).unwrap();
    let last = numbers.get(indices.last_key_value().unwrap().1).unwrap();

    first * 10 + last
}).sum();
```
13:30 < bridge> same speed
13:30 < bridge> xd
13:30 < bridge> 1.1
13:30 < bridge> without rayon 1.5ms
13:31 < bridge> Wasn't your attempt without iterators also 1.1?
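The snippets being benchmarked above can be condensed into one self-contained sketch (no rayon, no mimalloc; the `numbers` map is inlined as a constant array and the input is hardcoded for illustration, whereas the chat versions read `input.txt`):

```rust
use std::collections::BTreeMap;

// Spelled-out words and literal digits, mirroring the chat's `numbers` map.
const NUMBERS: [(&str, u32); 18] = [
    ("one", 1), ("two", 2), ("three", 3), ("four", 4), ("five", 5),
    ("six", 6), ("seven", 7), ("eight", 8), ("nine", 9),
    ("1", 1), ("2", 2), ("3", 3), ("4", 4), ("5", 5),
    ("6", 6), ("7", 7), ("8", 8), ("9", 9),
];

fn calibration_sum(input: &str) -> u32 {
    input
        .lines()
        .map(|line| {
            // Index every match by byte offset; BTreeMap keeps offsets
            // sorted, so the first/last entries are the first/last match.
            let indices: BTreeMap<usize, u32> = NUMBERS
                .iter()
                .flat_map(|&(word, value)| {
                    line.match_indices(word).map(move |(i, _)| (i, value))
                })
                .collect();

            let first = indices.first_key_value().map(|(_, v)| *v).unwrap_or(0);
            let last = indices.last_key_value().map(|(_, v)| *v).unwrap_or(0);
            first * 10 + last
        })
        .sum()
}

fn main() {
    // Example lines from the AoC 2023 day 1 part 2 description.
    let input = "two1nine\neightwothree\nabcone2threexyz";
    println!("{}", calibration_sum(input)); // prints 125 (29 + 83 + 13)
}
```

The sorted-by-offset `BTreeMap` is the only invariant the `fold`, for-loop, and `map` variants above all rely on; everything else is interchangeable iterator plumbing.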
13:31 < bridge> 1.2
13:31 < bridge> lol was about to
14:09 < bridge> ok i took your solution and tried to remove the btree:
14:09 < bridge> ```rs
let result: i32 = input
    .par_lines()
    .map(|line| {
        let matches = numbers.keys().map(|x| line.match_indices(x));
        let m = matches.flatten().peekable();

        let (min, max) = match m.minmax_by_key(|x| x.0) {
            MinMaxResult::NoElements => todo!(),
            MinMaxResult::OneElement(min) => (min, min),
            MinMaxResult::MinMax(min, max) => (min, max),
        };

        let first = numbers.get(min.1).unwrap();
        let last = numbers.get(max.1).unwrap();

        first * 10 + last
    })
    .sum();
```
14:09 < bridge> i wonder if that makes any difference at all
14:09 < bridge> i also used itertools
14:09 < bridge> ah
14:10 < bridge> oh
14:10 < bridge> i used par_bridge
14:10 < bridge> not par_lines
14:10 < bridge> @jupeyy_keks 1.1ms
14:10 < bridge> same
14:10 < bridge> rip xdd
14:11 < bridge> maybe 2700 runs are not enough anyway
14:11 < bridge> too much noise
14:11 < bridge> or the input is too small 😄
14:11 < bridge> hyperfine accounts for that
14:11 < bridge> it does warmup too
14:11 < bridge> oh ok
14:11 < bridge> Benchmark 1: ./target/release/rust-aoc
14:11 < bridge> Time (mean ± σ): 1.1 ms ± 0.1 ms [User: 1.3 ms, System: 3.1 ms]
14:11 < bridge> Range (min … max): 1.0 ms … 2.2 ms 2495 runs
14:12 < bridge> well then heap allocations are simply too cheap when filled
14:12 < bridge> ❯ hyperfine -N -w 5 -r 10000 ./target/release/rust-aoc
14:12 < bridge> Benchmark 1: ./target/release/rust-aoc
14:12 < bridge> Time (mean ± σ): 1.1 ms ± 0.1 ms [User: 1.2 ms, System: 3.0 ms]
14:12 < bridge> Range (min … max): 1.0 ms … 4.0 ms 10000 runs
14:12 < bridge> i forced 10k
14:12 < bridge> tho filling the btree should have been slower 😮
14:13 < bridge> probs not significant enough, since we talk about 2-3 elements
14:13 < bridge> > Currently, our implementation simply performs naive linear search. This provides excellent performance on small nodes of elements which are cheap to compare. However in the future we would like to further explore choosing the optimal search strategy based on the choice of B, and possibly other factors. Using linear search, searching for a random element is expected to take B * log(n) comparisons, which is generally worse than a BST. In practice,
14:13 < bridge> omg
14:13 < bridge> mimalloc is actually slower
14:14 < bridge> he had that quote ready xdd
14:14 < bridge> Benchmark 1: ./target/release/rust-aoc
14:14 < bridge> Time (mean ± σ): 971.1 µs ± 110.5 µs [User: 1355.7 µs, System: 2347.9 µs]
14:14 < bridge> Range (min … max): 794.9 µs … 2960.6 µs 10000 runs
14:14 < bridge> my solution
14:14 < bridge> without mimalloc
14:14 < bridge> i need to try a bump allocator
14:14 < bridge> Benchmark 1: ./target/release/rust-aoc
14:14 < bridge> Time (mean ± σ): 927.1 µs ± 102.4 µs [User: 1284.5 µs, System: 2129.9 µs]
14:14 < bridge> Range (min … max): 774.1 µs … 3706.6 µs 10000 runs
14:14 < bridge> @jupeyy_keks ur solution
14:15 < bridge> using lto on both obviously
14:15 < bridge> codegen-units 1 too
14:15 < bridge> and native march
14:15 < bridge> lmao, so even slower
14:16 < bridge> ok lower min
15:17 < bridge> how can i create a popup in ddnet
15:17 < bridge> is there a tutorial or something?
15:17 < bridge> you mean a GUI popup?
15:17 < bridge> yes
15:17 < bridge> or like a notification?
15:18 < bridge> like the confirm popup
15:18 < bridge> i guess the easiest is to search for the string that was used in any confirm popup and copy that code 😄
15:19 < bridge> ye but the way to render it :.), maybe put the function in OnRender?
15:19 < bridge> yeah put it somewhere that is called from onrender or render
15:20 < bridge> if you want to have ingame popups thats harder tho
15:20 < bridge> dunno where you currently add this popup
15:21 < bridge> is it possible to add it in OnInit ? XDD
15:21 < bridge> i want like a welcome popup
15:21 < bridge> || welcome to my bot client 😬 || @jupeyy_keks
15:22 < bridge> funny
15:22 < bridge> 99% of the time its true 😬
15:23 < bridge> yea bro i will put an aimbot in that popup who knows
15:23 < bridge> creating popups will gimme the knowledge how to destroy the game hahahha
15:24 < bridge> :brownbear:
15:24 < bridge> thats not what i meant bro
15:24 < bridge> the popup is unrelated
15:24 < bridge> anyway its a joke
15:24 < bridge> cuz most random ppl coming asking for advice are always doing client side code and often making bots
15:25 < bridge> yup but i think not about popups right?
15:25 < bridge> u could add it to the welcome popup
15:56 < bridge> https://github.com/osimon8/CombinatorC
15:56 < bridge> @jupeyy_keks aoc
15:56 < bridge> ill try it
15:57 < bridge> Lool so cool
15:57 < bridge> https://github.com/misprit7/computerraria
15:58 < bridge> @chairn for ur class
16:06 < bridge> @jupeyy_keks 861.2 µs
16:06 < bridge> using PGO
16:13 < bridge> Does it run inside factorio or is it just for modding?
16:14 < bridge> ?
it makes a factorio blueprint
16:14 < bridge> which is an official game thing
16:14 < bridge> factorio is turing complete with the signals it has
16:14 < bridge> a blueprint allows u to paste structures
16:14 < bridge> but did you use it for the aoc program or what
16:14 < bridge> no
16:14 < bridge> wanted to try
16:14 < bridge> i am a bit confused why u pinged me and said aoc
16:14 < bridge> xd
16:14 < bridge> but idk how i would give it input
16:14 < bridge> xd
16:14 < bridge> i got motivated sry xd
16:15 < bridge> i bet u could get another 4ns by using a vec for the keys() iteration
16:15 < bridge> xd
16:16 < bridge> xd
16:19 < bridge> 883.6 µs without pgo
16:19 < bridge> so yes
16:19 < bridge> not a vec but an array
16:19 < bridge> ill try perfect hashing now
16:23 < bridge> seems the same
16:37 < bridge> @ryozuki what is your exact command line with hyperfine?
16:37 < bridge> hyperfine -N -w 5 -r 10000 ./target/release/rust-aoc
16:37 < bridge> ah
16:38 < bridge> no cargo integration? xd
16:38 < bridge> cargo would bloat
16:38 < bridge> i run the bin directly xd
16:38 < bridge> whoa
16:39 < bridge> this game still has things to develop?
16:39 < bridge> xd 16:39 < bridge> no it's 100% perfect and bug free 16:39 < bridge> https://github.com/ddnet/ddnet/issues 16:39 < bridge> Suppose I'm fairly new 16:39 < bridge> hi fairly new im dad 16:39 < bridge> sorry 16:40 < bridge> Tho I did play first in 2011 16:40 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180171567423041617/image.png?ex=657c736c&is=6569fe6c&hm=40ee99c57ebda043370d665815a95cd3b8d5adbd31bc28591515219d7f142c27& 16:40 < bridge> :NekoEvil: 16:40 < bridge> wtf your pc is so fast xD 16:40 < bridge> Benchmark 1: ./target/release/aoc 16:40 < bridge> Time (mean ± σ): 5.5 ms ± 0.4 ms [User: 4.9 ms, System: 29.8 ms] 16:40 < bridge> Range (min … max): 3.6 ms … 8.3 ms 10000 runs 16:40 < bridge> ``` 16:40 < bridge> [profile.release] 16:40 < bridge> lto = true 16:40 < bridge> codegen-units = 1 16:41 < bridge> ``` 16:41 < bridge> + mold 16:41 < bridge> @jupeyy_keks maybe it depends on the file load? 16:41 < bridge> well it should be cached 16:41 < bridge> with the warmup 16:41 < bridge> but im on gentoo 16:41 < bridge> everything natively compiled 16:41 < bridge> and also my ssd is really fast i think 16:41 < bridge> thats slow lol 16:41 < bridge> i use 16:41 < bridge> `hyperfine -N -w 5 -r 10000 ./target/release/aoc` 16:41 < bridge> same as you? 16:42 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180172049528934490/image.png?ex=657c73df&is=6569fedf&hm=db7d6fd57010c2a2e36c083790a6c2df842646e254d51c6858f82c24d4a1fb15& 16:42 < bridge> well i have 16 cores. 
maybe rayon is bad in this case 😄 16:42 < bridge> i try without parallel 16:42 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180172165799231498/image.png?ex=657c73fa&is=6569fefa&hm=fd9c2212ae693ebfd8c8e7a3a4374a5dd8aa84cf7d9d2e81cb3835e71055e596& 16:42 < bridge> i got 16 16:43 < bridge> xd 16:43 < bridge> well 16:43 < bridge> 16 threads 16:43 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180172242387226654/image.png?ex=657c740d&is=6569ff0d&hm=e680ae01c43c9be169d5f063b2834c0e5aac0f61ff3f544becc944e1a69d9ca7& 16:43 < bridge> ah it could be 16:43 < bridge> too much paralel 16:43 < bridge> u can limit rayon 16:43 < bridge> rayon::ThreadPoolBuilder::new().num_threads(4).build_global().unwrap(); 16:43 < bridge> ok single threaded not much better 16:43 < bridge> put 16 16:43 < bridge> just a bit better 16:43 < bridge> me without rayon its 1.2ms 16:43 < bridge> @jupeyy_keks i guess u found a argument to use gentoo 16:44 < bridge> native kernel 16:44 < bridge> xDD 16:44 < bridge> i doubt that this will help in this case 16:44 < bridge> but i am surprised my pc sucks so hard 16:44 < bridge> wait 16:44 < bridge> 3000 series ryzen are ofc slower 16:44 < bridge> u use 3.5ghz 16:44 < bridge> than 5000 series 16:44 < bridge> i have CPU: AMD Ryzen 7 5800X (16) @ 4.85 GHz 16:44 < bridge> 4.85ghz 16:44 < bridge> normal 16:44 < bridge> yeah, but benchmarks show ur cpu is like 20% faster 16:44 < bridge> not 200% faster xD 16:45 < bridge> xd 16:45 < bridge> @jupeyy_keks did u use march native 16:45 < bridge> no, but no matter which flag, it would not explain such a difference 16:45 < bridge> maybe my RAM XMP profile was discarded or smth 16:46 < bridge> oh wait 16:46 < bridge> iwasnt using march native 16:46 < bridge> ok same shit 16:48 < bridge> the funny thing is, my "System" time is even lower than yours 16:48 < bridge> 16:48 < bridge> but user time much higher 16:48 < bridge> ok but single threaded 16:48 < bridge> lmao 16:48 < bridge> 
multi threaded is sucks ass 16:49 < bridge> wdym 16:49 < bridge> urs is in ms 16:49 < bridge> mine is in us 16:49 < bridge> yes 16:49 < bridge> my sys time is lower 16:49 < bridge> i dunno 16:49 < bridge> all your benchmarks are like 3ms 16:49 < bridge> mine is: 16:49 < bridge> 16:49 < bridge> Benchmark 1: ./target/release/aoc 16:49 < bridge> Time (mean ± σ): 3.5 ms ± 0.4 ms [User: 2.8 ms, System: 0.6 ms] 16:49 < bridge> Range (min … max): 3.1 ms … 9.9 ms 10000 runs 16:49 < bridge> single threaded 16:49 < bridge> ahh 16:49 < bridge> u mean among my benches 16:49 < bridge> xd 16:50 < bridge> ah 16:50 < bridge> this is faster 16:50 < bridge> then nvm 16:50 < bridge> is sys time kernel time? 16:50 < bridge> i assume so 16:51 < bridge> ok whatever, my pc sucks apparently 16:51 < bridge> that also explains why i dont have 10k FPS in ddnet anymore 16:51 < bridge> f 16:51 < bridge> i bet some fixes for security vuln kicked in xD 16:51 < bridge> spectre 2000 16:51 < bridge> or whatever their names are 16:51 < bridge> https://gist.github.com/edg-l/c4bb02a5171a7190dcc72830ac4d5576 16:51 < bridge> just so u know t his is my code 16:53 < bridge> Benchmark 1: ./target/release/aoc 16:53 < bridge> Time (mean ± σ): 3.4 ms ± 0.2 ms [User: 6.3 ms, System: 11.8 ms] 16:53 < bridge> Range (min … max): 2.8 ms … 5.2 ms 10000 runs 16:53 < bridge> f 16:54 < bridge> Benchmark 1: ./target/release/rust-aoc 16:54 < bridge> Time (mean ± σ): 884.2 µs ± 91.8 µs [User: 1415.1 µs, System: 2001.2 µs] 16:54 < bridge> Range (min … max): 769.5 µs … 5621.4 µs 10000 runs 16:55 < bridge> @jupeyy_keks just to know 16:55 < bridge> what ur fstab 16:55 < bridge> do u have noatime 16:56 < bridge> ``` 16:56 < bridge> UUID=8E37-E91C /boot vfat defaults,noatime 0 2 16:56 < bridge> UUID=6739c2cc-4a15-4a30-b223-bafcdff6688f / ext4 defaults,noatime 0 1 16:56 < bridge> 16:56 < bridge> # sda1 is the 2tb ssd partition for linux 16:56 < bridge> UUID=c598fbd0-b87c-4e69-9fb6-8b2fd0624f24 /data1 ext4 defaults,noatime 0 2 
16:56 < bridge> 16:56 < bridge> UUID=f0146921-8313-4099-bb81-7c87675cbbfe /data2 ext4 defaults,noatime 0 2 16:56 < bridge> 16:56 < bridge> UUID=d2359c6c-99c0-4f4f-9225-5605bed37399 none swap sw 0 0 16:56 < bridge> 16:56 < bridge> /dev/cdrom /mnt/cdrom auto noauto,user 0 0 16:56 < bridge> 16:56 < bridge> tmpfs /tmp tmpfs rw,nosuid,noatime,nodev,size=16G,mode=1777 0 0 16:56 < bridge> tmpfs /var/tmp/portage tmpfs size=14G,uid=portage,gid=portage,mode=775,nosuid,noatime,nodev 0 0 16:56 < bridge> ``` 16:56 < bridge> this mine 16:56 < bridge> # /etc/fstab: static file system information. 16:56 < bridge> # 16:56 < bridge> # Use 'blkid' to print the universally unique identifier for a 16:56 < bridge> # device; this may be used with UUID= as a more robust way to name devices 16:56 < bridge> # that works even if disks are added and removed. See fstab(5). 16:56 < bridge> # 16:56 < bridge> # 16:56 < bridge> # / was on /dev/nvme0n1p2 during installation 16:56 < bridge> UUID=6c4f1ad8-7a33-4fc5-9725-6e2c0cc27c1f / ext4 errors=remount-ro 0 1 16:57 < bridge> # /boot/efi was on /dev/nvme0n1p1 during installation 16:57 < bridge> UUID=B5AB-A475 /boot/efi vfat umask=0077 0 1 16:57 < bridge> # swap was on /dev/nvme0n1p3 during installation 16:57 < bridge> #UUID=2d0dd124-e35f-41fe-8074-b1c98eb3c1b5 none swap sw 0 0 16:57 < bridge> /swapfile none swap sw 0 0 16:57 < bridge> UUID=1c6d88fc-0f21-492f-a17c-28d4c4ae33ef /media/jupeyy/SSD_NVME/ ext4 defaults 0 0 16:57 < bridge> #UUID=8dc275c9-a78f-44bf-8a4c-aa95bef3ce44 /media/jupeyy/SSD_SMALL ext4 defaults 0 0 16:57 < bridge> WTF 16:57 < bridge> lmao 16:57 < bridge> xD 16:57 < bridge> xzddd 16:57 < bridge> @jupeyy_keks bruv 16:57 < bridge> u need noatime 16:57 < bridge> atime = access time 16:57 < bridge> it slows down a lot 16:57 < bridge> https://opensource.com/article/20/6/linux-noatime 16:57 < bridge> i used include_str! now 16:57 < bridge> if u mean for reading files 16:57 < bridge> ye 16:58 < bridge> same? 
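For context on the noatime discussion above: atime is the per-file last-access timestamp, and the noatime mount option stops reads from rewriting it. A minimal std-only Rust sketch that reads the recorded access time (the file name is made up for the demo; whether reads actually refresh the value depends on the mount's atime/relatime/noatime option):

```rust
use std::fs::{self, File};
use std::io::{Read, Write};

// Return the last-access time of `path` as seconds since the Unix epoch.
// Whether reads refresh this value depends on the filesystem's mount
// options (atime vs. relatime vs. noatime).
fn atime_secs(path: &str) -> std::io::Result<u64> {
    let accessed = fs::metadata(path)?.accessed()?;
    Ok(accessed
        .duration_since(std::time::UNIX_EPOCH)
        .unwrap()
        .as_secs())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("atime_demo.txt");
    let path = path.to_str().unwrap().to_owned();
    File::create(&path)?.write_all(b"hello")?;
    let mut buf = String::new();
    File::open(&path)?.read_to_string(&mut buf)?; // a read that may bump atime
    println!("atime: {}", atime_secs(&path)?);
    fs::remove_file(&path)?;
    Ok(())
}
```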
16:58 < bridge> yeah it makes no diff 16:58 < bridge> f 16:58 < bridge> then yes 16:58 < bridge> cpu 16:58 < bridge> whats ur ram speed 16:58 < bridge> 3200 16:58 < bridge> same 16:58 < bridge> CL 16 16:58 < bridge> or was 3600 16:58 < bridge> i forgot 16:59 < bridge> oh 16:59 < bridge> CL14 even 16:59 < bridge> does the terminal matter? xdd 16:59 < bridge> i doubt 16:59 < bridge> cuz no prints 16:59 < bridge> i ran it inside vscode xD 16:59 < bridge> also hyperfine -N option disables shell 16:59 < bridge> for more real perf 17:00 < bridge> i ran it inside vs too 17:00 < bridge> @jupeyy_keks when add christmas on to ur vulkan pfp 17:00 < bridge> @murpi no christmas themed discord pic? 17:01 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180176747623350272/image.png?ex=657c783f&is=656a033f&hm=1cf99b8aa78e40547007f738aef5b5134c8df0b6161adc5eaba8514cf448c6bd& 17:01 < bridge> i try taskset --cpu-list 1 now 17:02 < bridge> maybe my CPU is simply bad at this specific task xD 17:02 < bridge> now i got: 17:02 < bridge> Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options. 17:02 < bridge> xd 17:03 < bridge> but it's useless to do it like this anyway. force mt program on 1 core xD 17:03 < bridge> Do you have one? 
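On the hyperfine runs above: hyperfine measures whole-process time, and -N only skips the intermediate shell. For a rough in-process number you can also wrap the hot code in std::time::Instant; this is an illustrative sketch only, not a replacement for hyperfine's statistics and outlier detection:

```rust
use std::time::Instant;

// Time `f` over `runs` iterations and return the mean in nanoseconds.
// In-process timing like this excludes process startup cost, which
// hyperfine's whole-process measurement includes.
fn mean_nanos<F: FnMut()>(runs: u32, mut f: F) -> u128 {
    let start = Instant::now();
    for _ in 0..runs {
        f();
    }
    start.elapsed().as_nanos() / runs as u128
}

fn main() {
    let data: Vec<u64> = (0..10_000).collect();
    let mut sink = 0u64;
    let mean = mean_nanos(100, || {
        // `sink` keeps the optimizer from deleting the work entirely.
        sink = sink.wrapping_add(data.iter().sum::<u64>());
    });
    println!("mean: {mean} ns (sink = {sink})");
}
```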
17:04 < bridge> i bet insanity has one 17:13 < bridge> @ryozuki i found reason 17:13 < bridge> but don't laugh 17:13 < bridge> i pasted the input twice 17:14 < bridge> interestingly the multi threaded version is still around as fast 17:15 < bridge> there comes the 16 core power 17:15 < bridge> 😏 17:18 < bridge> so when i force it to only use 4 cores: 17:18 < bridge> Benchmark 1: ./target/release/aoc 17:18 < bridge> Time (mean ± σ): 1.4 ms ± 0.1 ms [User: 2.2 ms, System: 0.6 ms] 17:18 < bridge> Range (min … max): 1.2 ms … 1.8 ms 1000 runs 17:18 < bridge> 16 cores is simply too much overkill in this case 17:19 < bridge> or wait 17:19 < bridge> does rayon not use 32threads default? 17:20 < bridge> yeah nvm: 17:20 < bridge> 4 threads > 16 threads >>> 32 threads 17:20 < bridge> the CXX is simply to unefficient then 17:20 < bridge> murpi knew that already 17:20 < bridge> the CCX is simply to unefficient then 17:38 < bridge> @jupeyy_keks 1 hour and im free of work 17:39 < bridge> for a entire week 17:39 < bridge> :NekoEvil: 17:47 < bridge> <_voxeldoesart> i wonder what ryo will do for the week off 17:58 < bridge> @ryozuki I did my very best to make it as disgusting as possible for you, I got 0.5ms 17:58 < bridge> https://paste.pr0.tips/UjA?c behold 17:59 < bridge> how much time did u spend 18:00 < bridge> oh u used a match xDDD 18:00 < bridge> 20 minutes handcrafting the state machine, another 20 debugging it 😄 18:00 < bridge> lmao 18:00 < bridge> goto 18:00 < bridge> Someone smarter than me would generate the state machine too 18:01 < bridge> I did my very best to sprinkle more of them around but sadly wasn't able to get more in there 18:01 < bridge> However, observe the ugly found label right under the switch, it's the very best 18:02 < bridge> xd 18:02 < bridge> i will just say 18:02 < bridge> very readable code 18:02 < bridge> congrats 18:02 < bridge> Albeit it's not extremely useful here, it's Aho-Corasick algorithm 18:03 < bridge> nice to know 18:03 < bridge> for 
now u win 18:03 < bridge> i dont think i have the will rn to do smth like that xdd 18:03 < bridge> I'm sure there is a rust crate to pregenerate and run a Aho-Corasick FSM 18:04 < bridge> Or maybe you can abuse one of those parser lexer generators, they might internally implement it 18:04 < bridge> wait 18:04 < bridge> 0.5ms is 500us right 18:05 < bridge> https://crates.io/crates/rust-fsm 18:05 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180192910583533588/image.png?ex=657c874c&is=656a124c&hm=1ed9d076452c101018bcf79104e2be3cd73f33cba58e033ac17d3bb59fa69b84& 18:05 < bridge> wtf itexists 18:05 < bridge> https://crates.io/crates/aho-corasick 18:05 < bridge> Aha, that would beat mine probably, it has simd magic 18:06 < bridge> Here is my state machine 😄 18:06 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180193138317463703/IMG_0752.png?ex=657c8783&is=656a1283&hm=94c2870f8b4a88b1fe61d0211ba788bad3eb4f3366f58ca18822f809aefdc227& 18:06 < bridge> LOL so simple to use aswell 18:07 < bridge> But does it build the FSM at compile time? I'd guess it does since that's kinda the point 18:11 < bridge> Oh actually, I ran it on my input on my pc, maybe it is worse than your rayon version 18:12 < bridge> oh 18:12 < bridge> let me test it 18:12 < bridge> @learath2 how to have raw copy of ur 18:12 < bridge> it copies nums 18:12 < bridge> oh 18:12 < bridge> Just remove the ?c at the end of the link 18:12 < bridge> it doesnt 18:13 < bridge> ok it does but at start 18:13 < bridge> @learath2 tell me how u want me to compile it for max perf 18:13 < bridge> Just go for a `cc -O3` I didn't think much about it 18:14 < bridge> ok result is correct 18:14 < bridge> (I'm out of context, I just clicked your pastebin out of curiosity and I noticed that) your `get1()` function doesn't actually check for buffer overflow: it checks for `bufsz > 0` but bufsz doesn't get updated. 
I'd change the while-loop line to `while ((c = getc(f)) != EOF && c != '\n' && bufsz-- > 1)` 18:14 < bridge> Benchmark 1: ./target/release/rust-aoc 18:14 < bridge> Time (mean ± σ): 887.7 µs ± 70.8 µs [User: 1408.4 µs, System: 2021.9 µs] 18:14 < bridge> Range (min … max): 773.8 µs … 2301.4 µs 10000 runs 18:14 < bridge> 18:14 < bridge> Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options. 18:14 < bridge> 18:14 < bridge> Benchmark 2: ./a.out 18:14 < bridge> Time (mean ± σ): 425.8 µs ± 41.1 µs [User: 311.5 µs, System: 68.1 µs] 18:14 < bridge> Range (min … max): 352.6 µs … 993.3 µs 10000 runs 18:14 < bridge> 18:14 < bridge> Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options. 18:14 < bridge> 18:14 < bridge> Summary 18:14 < bridge> ./a.out ran 18:14 < bridge> 2.08 ± 0.26 times faster than ./target/release/rust-aoc 18:14 < bridge> @learath2 urs is faster rn 18:16 < bridge> Oh yep, good catch, I just assumed all input is sane at some point forgot to remove bufsz 18:16 < bridge> Oh you can compare 2 with hyperfine, wow that's a cool feature 18:17 < bridge> thats the point of hyperfine 18:17 < bridge> https://github.com/sharkdp/hyperfine 18:18 < bridge> 👍 anyways good luck 🙂 18:19 < bridge> so that's the time to beat 18:19 < bridge> i got it bro 18:19 < bridge> for m,e 18:19 < bridge> Benchmark 1: ./target/release/rust-aoc 18:19 < bridge> Time (mean ± σ): 563.1 µs ± 57.9 µs [User: 382.8 µs, System: 129.7 µs] 18:19 < bridge> Range (min … max): 461.9 µs … 1623.0 µs 10000 runs 18:19 < bridge> rayon doesnt help here 18:20 < bridge> ui, close already 18:21 < bridge> ill remove a hashmap 18:21 < bridge> with a match 18:22 < bridge> <_voxeldoesart> what r u guys doing 18:22 < bridge> 
fighting 18:22 < bridge> <_voxeldoesart> ok 18:23 < bridge> It should probably just match or beat mine just with that change tbf, the compiler is probably smart enough to generate a jumptable just without the blue edges in mine which there aren't enough to matter in this problem 18:23 < bridge> ❯ hyperfine -N -w 500 -r 10000 ./target/release/rust-aoc 18:23 < bridge> Benchmark 1: ./target/release/rust-aoc 18:23 < bridge> Time (mean ± σ): 526.5 µs ± 52.1 µs [User: 353.5 µs, System: 123.5 µs] 18:23 < bridge> Range (min … max): 436.9 µs … 2036.4 µs 10000 runs 18:23 < bridge> without unsafe 18:24 < bridge> http://paste.pr0.tips/eX4?rust 18:25 < bridge> same speed with unsafe xd 18:25 < bridge> The default configuration optimizes for less space usage, but at the expense of longer search times. To change the configuration, use AhoCorasickBuilder. 18:25 < bridge> ok wait 18:31 < bridge> ok now let me try rust 18:31 < bridge> My turn on the rust machine 18:31 < bridge> ok yeah 526us 18:31 < bridge> the crate is built by burntsushi 18:31 < bridge> the author of the regex crate 18:31 < bridge> he is a pro 18:31 < bridge> among pros 18:31 < bridge> Which crate? 18:31 < bridge> https://github.com/BurntSushi/aho-corasick 18:32 < bridge> Oh, were you using that for your 526us result? 18:32 < bridge> burntsushi is like a 2nd dtolnay 18:32 < bridge> yes 18:33 < bridge> im 2 lazy to make it myself 18:33 < bridge> xd 18:33 < bridge> did u even check learaths solution for correctness? xd 18:33 < bridge> ye 18:33 < bridge> doubter can't believe I outperformed the machine 18:33 < bridge> @learath2 did u know about this from uni? 
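The "remove a hashmap with a match" idea above can be sketched like this, assuming the AoC 2023 day 1 part 2 style task of decoding spelled-out digits (an illustrative reconstruction, not anyone's actual solution). A match on the first byte lets the compiler emit a jump table, much like the hand-drawn FSM:

```rust
// Decode a digit (literal or spelled out) at byte offset `i` of `line`.
// The outer `match` on the first byte typically compiles to a jump table,
// avoiding a HashMap lookup entirely. Assumes ASCII input.
fn digit_at(line: &str, i: usize) -> Option<u32> {
    let s = &line[i..];
    let b = *s.as_bytes().first()?;
    match b {
        b'0'..=b'9' => Some((b - b'0') as u32),
        b'o' if s.starts_with("one") => Some(1),
        b't' if s.starts_with("two") => Some(2),
        b't' if s.starts_with("three") => Some(3),
        b'f' if s.starts_with("four") => Some(4),
        b'f' if s.starts_with("five") => Some(5),
        b's' if s.starts_with("six") => Some(6),
        b's' if s.starts_with("seven") => Some(7),
        b'e' if s.starts_with("eight") => Some(8),
        b'n' if s.starts_with("nine") => Some(9),
        _ => None,
    }
}

// Combine the first and last digit of a line, as in AoC 2023 day 1 part 2.
// Scanning every offset means overlapping words ("eightwo") still count.
fn calibration(line: &str) -> u32 {
    let digits: Vec<u32> = (0..line.len()).filter_map(|i| digit_at(line, i)).collect();
    digits.first().copied().unwrap_or(0) * 10 + digits.last().copied().unwrap_or(0)
}

fn main() {
    println!("{}", calibration("eightwothree")); // → 83
}
```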
18:33 < bridge> Though it is probably just some rust overhead ngl 18:33 < bridge> It's unlikely my hand drawn FSM is better than a generated one 18:34 < bridge> @learath2 my guess is maybe cuz utf8 18:34 < bridge> no 18:34 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180200323869061220/image.png?ex=657c8e34&is=656a1934&hm=96ebea2d1843e1c9e6da45b7d25acae8f46a8692f5392d3d4aa34c81e3c85513& 18:34 < bridge> nice 18:34 < bridge> Nooo, I knew there existed an optimal algorithm to find a set of small strings in a larger string 18:34 < bridge> i used clang instead of cc 18:35 < bridge> So clang made it worse? 18:35 < bridge> oh true 18:35 < bridge> looks like clangg is slower 18:35 < bridge> it is indeed 18:35 < bridge> im trying gcc 18:35 < bridge> esp with lto 18:36 < bridge> 421 gcc 18:36 < bridge> but im only using -o· 18:36 < bridge> o3 18:36 < bridge> no lto 18:36 < bridge> What would even be LTO'd? I guess the printf 18:37 < bridge> now native march 18:37 < bridge> no change 18:37 < bridge> xd 18:37 < bridge> lto is 2us slower 18:38 < bridge> noise 18:38 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180201296653996052/flamegraph.svg?ex=657c8f1c&is=656a1a1c&hm=c1bcee50b7dc1c07cbfc809a054e9132dcf31ac9939ce9f5d5c8a826bfc52c9a& 18:38 < bridge> rust flamegraph 18:38 < bridge> doesn't this imply that they both spend almost the same time in userspace? so they mainly differ in time spent by system calls? 18:39 < bridge> ok not useful xd 18:39 < bridge> tbh idk, but what u say makes sense 18:39 < bridge> So it does? 
Because Learath version's doesn't read the whole file at once 18:40 < bridge> idk xD 18:40 < bridge> (lol "Learath verison's" -> "Learath's version") 18:40 < bridge> time man page 18:40 < bridge> These statistics consist of (i) the elapsed real time between invocation and termination, (ii) the user CPU time (the sum of the tms_utime and tms_cutime values in a 18:40 < bridge> struct tms as returned by times(2)), and (iii) the system CPU time (the sum of the tms_stime and tms_cstime values in a struct tms as returned by times(2)). 18:40 < bridge> %S Total number of CPU-seconds that the process spent in kernel mode. 18:40 < bridge> 18:40 < bridge> %U Total number of CPU-seconds that the process spent in user mode. 18:40 < bridge> I find it hard to believe that my naive character by character read of the file is more optimal than the newfangled space age methods rust uses 18:41 < bridge> dude 18:42 < bridge> have u ever looked at your solution 18:42 < bridge> it's completely unreadable. ofc it might be slightly faster 18:42 < bridge> let me try to buffer 18:42 < bridge> the file read 18:43 < bridge> Isn't it that generally complicated algorithms become fast for big enough inputs? 18:46 < bridge> buffering doesnt help 18:47 < bridge> hmm, a bit surprising I'd say 18:47 < bridge> well i guess the way i did it 18:47 < bridge> i should bypass strings totally 18:47 < bridge> xd 18:48 < bridge> I mean, because besides reading the file, system/OS isn't called in another way, right? 
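On "buffering doesnt help" / "i should bypass strings totally": a std-only sketch of the two read strategies being compared, whole-file raw bytes versus a line-by-line BufReader (the demo file path is made up):

```rust
use std::fs;
use std::io::{self, BufRead, BufReader};

// Read the whole file as raw bytes: one allocation, no UTF-8 validation,
// which is what "bypassing strings" amounts to.
fn read_bytes(path: &str) -> io::Result<Vec<u8>> {
    fs::read(path)
}

// Line-by-line through a BufReader: lower peak memory, but one String
// allocation and a UTF-8 check per line.
fn sum_line_lengths(path: &str) -> io::Result<usize> {
    let reader = BufReader::new(fs::File::open(path)?);
    let mut total = 0;
    for line in reader.lines() {
        total += line?.len();
    }
    Ok(total)
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("aoc_input_demo.txt");
    fs::write(&path, "one\ntwo\nthree\n")?;
    let p = path.to_str().unwrap();
    println!("{} bytes, {} line chars", read_bytes(p)?.len(), sum_line_lengths(p)?);
    fs::remove_file(&path)?;
    Ok(())
}
```

For a sub-millisecond workload the read strategy rarely dominates; the per-byte processing does.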
18:51 < bridge> @learath2 ah 18:51 < bridge> the state machine is not at compile time 18:52 < bridge> its built once at runtime 18:52 < bridge> Ah that might be the extra 100 18:53 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180204921681875004/image.png?ex=657c927c&is=656a1d7c&hm=a9822533427b55fcc6ef7d931f9e57503e4b2d944aa0d47cfffb28d222574bde& 18:53 < bridge> Though the similar times spent in user code suggests to me that the rust generated one is far faster, but rust reads the file wrong 18:53 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180205041299247205/image.png?ex=657c9298&is=656a1d98&hm=d98a9a415093221c24eb332813065883c2fa563803e483a2d82eeb4ee15a9614& 18:53 < bridge> i guess 18:53 < bridge> i could do it like C but im lazy rn 18:54 < bridge> did u rewrite it in rust or what? 18:54 < bridge> Anyway, it wouldn't surprise me, I'm not a machine, I can't generate a perfect FSM 😄 18:54 < bridge> let me try panic = abort 18:54 < bridge> what? 18:54 < bridge> i used the aho-corasick crate 18:55 < bridge> didnt help xd 18:55 < bridge> what is that? 😄 18:55 < bridge> u missed t he entire convo? XD 18:55 < bridge> @jupeyy_keks its how lea got 420 18:55 < bridge> and i reduced down to 520 18:56 < bridge> a algo 19:01 < bridge> @learath2 https://en.wikipedia.org/wiki/Commentz-Walter_algorithm 19:01 < bridge> is this faster? 19:02 < bridge> > Comparing the Aho-Corasick to the Commentz-Walter Algorithm yields results with the idea of time complexity. Aho-Corasick is considered linear O(m+n+k) where k is the number of matches. Commentz-Walter may be considered quadratic O(mn). 
The reason for this lies in the fact that Commentz-Walter was developed by adding the shifts within the Boyer–Moore string-search algorithm to the Aho-Corasick, thus moving its complexity from linear to quadratic 19:02 < bridge> It's far more complicated for me to do by hand so I didn't even look into it 19:03 < bridge> It technically has quadratic complexity but it does perform better in most cases 19:48 < bridge> @ryozuki what was the new way of compiling regexes once? 19:49 < bridge> i think u can compile regex in a static context in rust now 19:49 < bridge> let me see 19:49 < bridge> ```rust 19:49 < bridge> fn some_helper_function(haystack: &str) -> bool { 19:49 < bridge> static RE: Lazy<Regex> = Lazy::new(|| Regex::new(r"...").unwrap()); 19:49 < bridge> RE.is_match(haystack) 19:49 < bridge> } 19:49 < bridge> ``` 19:50 < bridge> ```rust 19:50 < bridge> use { 19:50 < bridge> once_cell::sync::Lazy, 19:50 < bridge> regex::Regex, 19:50 < bridge> }; 19:50 < bridge> ``` 19:50 < bridge> TIL this is a way to do imports! 19:51 < bridge> @jupeyy_keks my FFR contribution 19:51 < bridge> also it works with cfg 19:53 < bridge> https://paste.pr0.tips/xWq?rust behold my abomination 19:53 < bridge> It's god awful but I had the urge to make it for some reason 19:54 < bridge> Ignore the unused imports, it has seen some stuff 20:16 < bridge> hm, I wonder what is blocking 20:16 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180225962248515694/image.png?ex=657ca614&is=656a3114&hm=1587d1318a29878b57ebcc971826fe6e29f439209cf8e76c9222a3a61a9f479e& 20:16 < bridge> I thought rusts compile time facilities were top notch 20:18 < bridge> where u her 20:18 < bridge> heard 20:18 < bridge> its known rust compile time is not up there to c++ 20:19 < bridge> yet 20:23 < bridge> Can someone retry #7552 ? 20:23 < bridge> https://github.com/ddnet/ddnet/pull/7552 20:31 < bridge> but procmacros have access to the entire AST no?
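A side note on the once_cell pattern quoted earlier: std now has an equivalent, std::sync::OnceLock, stable since Rust 1.70. Since regex is an external crate, this sketch stores a plain HashSet where the original stored a compiled Regex; the lazy-init shape is the same:

```rust
use std::collections::HashSet;
use std::sync::OnceLock;

// Build the expensive value once, on first use, then reuse it. This is the
// std equivalent of `once_cell::sync::Lazy`; with the regex crate you would
// store a compiled Regex here instead of a HashSet.
fn keywords() -> &'static HashSet<&'static str> {
    static KEYWORDS: OnceLock<HashSet<&'static str>> = OnceLock::new();
    KEYWORDS.get_or_init(|| ["one", "two", "three"].into_iter().collect())
}

fn is_keyword(word: &str) -> bool {
    keywords().contains(word)
}

fn main() {
    println!("{} {}", is_keyword("two"), is_keyword("ten")); // true false
}
```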
20:32 < bridge> and can't you like run actual rust code at compile time too? 20:33 < bridge> yes 20:33 < bridge> yes xd 20:33 < bridge> i guess ppl didnt do that yet 20:33 < bridge> @learath2 someone did this for maps 20:33 < bridge> anmd perfect hashes 20:33 < bridge> https://github.com/rust-phf/rust-phf 20:34 < bridge> I should read the dragon book sometime. I noticed I code automata really weirdly, maybe there is some insight there as to a "standard" way to write one 20:34 < bridge> i want to own the dragon book 20:34 < bridge> the book of compilers 20:34 < bridge> BUT 20:34 < bridge> IT COSTS 150€ 20:34 < bridge> THANKS ACADEMIA! 20:35 < bridge> Thanks Publishers* 20:35 < bridge> @learath2 let me ask a friend who likes a lot automatas 20:35 < bridge> if he knows a nic book 20:35 < bridge> coworker* 20:40 < bridge> @learath2 got my answer 20:40 < bridge> https://www.amazon.com/Automata-Theory-Algorithmic-Javier-Esparza/dp/0262048639/ref=sr_1_1?keywords=Automata+Theory%3A+An+Algorithmic+Approach&qid=1701459608&sr=8-1 20:40 < bridge> https://www7.in.tum.de/~esparza/autoskript.pdf 20:40 < bridge> free read 20:40 < bridge> xd 20:40 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1180232063123734528/image.png?ex=657cabc3&is=656a36c3&hm=9b8e5e1f5e4fd9f15579b30351158641be558b7671cdc94a8f71242c618a15dd& 20:40 < bridge> and this one 20:41 < bridge> http://aleteya.cs.buap.mx/~jlavalle/papers/distribuidos/algorithms-on-strings.9780521848992.33360.pdf 20:41 < bridge> @learath2 this one has string algorithms like the one u used today 20:44 < bridge> this coworker of mine is always reading books 20:44 < bridge> he read so many xd 20:46 < bridge> Bookworm 20:48 < bridge> Tries are key to pattern matching automata 20:52 < bridge> https://125-problems.univ-mlv.fr/ 20:54 < bridge> Might be good to do a couple 21:22 < bridge> https://blog.cloudflare.com/cloudflare-gen-12-server-bigger-better-cooler-in-a-2u1n-form-factor/ 22:53 < bridge> has anyone ever used or 
knows cling ? 22:54 < bridge> https://github.com/root-project/cling 23:13 < ChillerDragon> interesting chairn but seems bloat 23:13 < ChillerDragon> i did not find cling in apt and rage quitted build from source after 2 minutes since i just cba 23:14 < ChillerDragon> i wrote my own C repl in bash and it gets the job done for what i need a C repl which is mostly testing oneliners and maybe reusing one variable 23:22 < bridge> i tried to build it, but it just segfaults 23:22 < bridge> given it's from CERN, i guess it's not maintained anymore, but there should be a working version somewhere 23:23 < bridge> i found an apt repo with it, but i don't trust the repo