07:27 < bridge> There was a discussion about 0.7 lately… why was it even made?
09:00 < bridge> Good catch pro! Yea sure I will :)
09:00 < bridge> (@Robyt3)
11:14 < bridge> https://www.stableattribution.com/
11:15 < bridge> idk how good it is tho
11:46 < bridge> Bro if these images are really from ai it's completely insane
11:49 < bridge> My question would be if these are human guided. AI with a human filter can actually generate incredible art
12:49 < bridge> After the last ddnet pull I am receiving during cmake build
12:49 < bridge>
12:49 < bridge> ```
12:49 < bridge> /server/src/game/server/gamecontext.cpp: In member function ‘OnClientConnected’:
12:49 < bridge> /server/src/game/server/gamecontext.cpp:1544:100: warning: ‘operator delete’ called on unallocated object ‘gs_PoolDataCPlayer’ [-Wfree-nonheap-object]
12:49 < bridge> 1544 | m_apPlayers[ClientID] = new(ClientID) CPlayer(this, NextUniqueClientID, ClientID, StartTeam);
12:49 < bridge> | ^
12:49 < bridge> /server/src/game/server/player.cpp:18:1: note: declared here
12:49 < bridge> 18 | MACRO_ALLOC_POOL_ID_IMPL(CPlayer, MAX_CLIENTS)
12:49 < bridge> | ^
12:49 < bridge> ```
12:49 < bridge>
12:49 < bridge> any idea how to fix?
13:10 < bridge> Sounds like it fails to see the overloaded delete operator. Maybe a false positive
13:11 < bridge> What is your GCC version?
13:53 < bridge> gcc version 11.3.0
15:01 < bridge> Mh Debian seems to only have 10 or 12
15:07 < bridge> It's compiled on Ubuntu
15:07 < bridge> ```
15:07 < bridge> Distributor ID: Ubuntu
15:07 < bridge> Description: Ubuntu 22.04.1 LTS
15:07 < bridge> Release: 22.04
15:07 < bridge> Codename: jammy
15:07 < bridge> ```
15:14 < bridge> @Jupeyy_Keks I want to continue with twgpu (once I stitched together a demo reader with libtw2) and some things are still a mystery to me
15:14 < bridge> in particular: how do you work around not having array texture bindings in the web? Array textures just seem so damn comfy. I would like to use them for tilemaps and in the future skins of tees. do you do a draw call for each skin/tee? Currently I hack around using array textures for tilemaps by using those 3D textures, but that will fall apart once I introduce mipmaps, since 3D/cube textures also collapse on the z axis in mipmaps
15:16 < bridge> do I need to create texture atlases? they seem annoying to work with
15:17 < bridge> -> texture atlases also sound like they will be really annoying to create mipmaps for, if that is even a thing, I'll have a look
15:18 < bridge> yeah okay it's possible with spacing but it still sounds weird to work with
15:34 < bridge> There are no texture arrays? Use them for tile rendering.. Emscripten is gles3 compatible and they managed to make it work (as e.g. the webclient works)
15:34 < bridge> Is that a wgpu limitation or an actual webgl limitation?
15:34 < bridge>
15:34 < bridge> For skins I split them on the cpu bcs gl1.x
15:39 < bridge> Generally I have to say I am not really a fan of texture atlases. They don't improve performance, they might make files a bit smaller and reduce io a bit. But they also make customization much harder
15:39 < bridge> I don't think u can easily mix them with skins anyway. Body is bigger than feet etc. I'd suggest for skin rendering split them or use integer sampling in the shader. Or do you need to render many tees at once on the GPU and want to put skins as a whole in a texture atlas? else I don't see a real use case?
15:39 < bridge> Also different sized skins make this harder again.
15:39 < bridge> Updating the texture can be costly as it is one chunk of memory.
15:39 < bridge>
15:39 < bridge> For tile maps they are a perfect fit tho^^
15:40 < bridge> ah, would you suggest a draw call for each tee? the transformations of the body parts are probably better calculated on the cpu, right?
15:42 < bridge> Well if cpu isn't a hard limit that is probably easier. If™️ we have skins similar to 0.7 u might have to do it anyway. But since u target gles3 u can defs simply use integer sampling
15:43 < bridge> In glsl
15:44 < bridge> do you mean with integer sampling that I sample for an integer type instead of a float?
15:45 < bridge> Yeah basically u sample the texels itself
15:45 < bridge> isn't float better to get the interpolation?
15:49 < bridge> Yeah it's harder. But floats have a disadvantage as u don't know the mipmap and even if u do, u won't get clamped edges
15:49 < bridge> Then u have texture bleeding
15:49 < bridge> hm true one draw call for each tee probably results in easier code. one of my problems is that I still have zero clue which operations are expensive to which degree. I assumed that one draw call for all tees would just be much much more efficient and cleaner. for 0.7 skins it would be possible to create a texture array for each body part I suppose
15:50 < bridge> I thought glsl offers good functions. I can't find them rn tho.
15:50 < bridge> okay I guess I'll just not even start with texture atlases as they will probably cause more harm than good
15:50 < bridge> so far all my shaders are in wgsl, although wgpu also has optional support for spirv and glsl
15:52 < bridge> Problem is, u somehow have to tell the GPU which tee uses which skins. And this requires either building up some buffer/cmd every frame. A draw call might carry extra dependencies about which sampler to use. But in the end it's probs not too much difference
15:52 < bridge> Smaller textures also might be more cache friendly
15:53 < bridge> I'd say if u don't create too many calls a draw call is very cheap
15:53 < bridge> A draw call without state change is basically free
15:53 < bridge> oh interesting
15:54 < bridge> "lifehack: get more fps by always using the same skin as your race partner"
15:55 < bridge> you should add that to the gfx troubleshooting guide xd
15:55 < bridge> Yeah. Interesting would be if there were already finished shaders for such stuff. Custom sampling. But i guess not many ppl want this
15:56 < bridge> Yeah. It already uses the same vertices tho. Depending on how clever the driver is, the GPU keeps them in a fast cache
15:56 < bridge> Esp. since it's only very few vertices
15:57 < bridge> if you mean clamp modes and sampling type, those are all properties of the sampler, only partly defined in the shader?
15:58 < bridge> I really need to figure out if texture arrays are a thing in the web, I'll have a look
15:58 < bridge> In the end a sampler is also just a higher level component to help u out with this stuff
15:58 < bridge> U could use a uniform buffer and do all the stuff on your own
15:59 < bridge> oh wow, so the sampler maps pretty directly to shader code? I had no idea
16:01 < bridge> It could be possible GPUs have extra instructions to do it faster
16:01 < bridge> But considering how fast cuda cores are for many things, u could probs do stuff like this on your own too
16:02 < bridge> They are
16:02 < bridge> The webgl2 limits mention max texture array layers
16:06 < bridge> hrmpf yes also found it
16:06 < bridge> I guess I'll open a wgpu issue, if it's not too complicated I might try to implement it myself
16:07 < bridge> Only problem is probably that they have many backends, don't they?^^
16:07 < bridge> wdym?
16:07 < bridge> Webgl shading language follows gles3. So you could fetch single texels
16:08 < bridge> Don't they support dx11 on breakup
16:08 < bridge> Desktop
16:08 < bridge> but wgpu won't allow me to create an array texture sampler on webgl2
16:08 < bridge> I mean since u said, if it's easy to implement^^
16:10 < bridge> https://github.com/gfx-rs/wgpu#supported-platforms
16:11 < bridge> they do group the backends into different support levels
16:11 < bridge> In their dev docs they mention texture arrays
16:11 < bridge> the texture binding as well
16:12 < bridge> I mean in ogl u bind a sampler to a texture slot
16:13 < bridge> So only the texture has to support it
16:13 < bridge> The sampler only does things like interpolation
16:13 < bridge> Ddnet VK code only uses like 9 samplers total
16:13 < bridge> They are used for all textures
16:14 < bridge> In the shader u call texture with a vec3 iirc
16:14 < bridge> wait I am confused.
16:16 < bridge> in wgpu, a bind group consists of binding resources (https://docs.rs/wgpu/latest/wgpu/enum.BindingResource.html). only TextureViewArray requires the feature enabled. but since textures can be arrays by themselves, you only need a `BindingResource::Texture`
16:16 < bridge> aaaaaaaah
16:16 < bridge> I think I always missed that difference
16:16 < bridge> so it is already supported by wgpu
16:17 < bridge> :sendhelp:
16:18 < bridge> Yeah sounds reasonable
16:18 < bridge> I guess TextureViewArray would be easier to work with, since then we would have individual allocations and don't have to update the entire thing
16:19 < bridge> but it also just sounds fine with Textures that are arrays themselves
16:19 < bridge> Mh does it work like that?
16:20 < bridge> is there such a concept in graphics as a growable buffer/texture abstraction? ^^ like double or smth the size of the array whenever we reach the limit and leave the unused stuff uninitialised
16:20 < bridge> Spec only guarantees like 16 texture bindings
16:21 < bridge> the way I understand it is that it is one binding consisting of references to the different textures
16:22 < bridge> Not really. GPU copying is probably not the fastest. The memory is very specialized on fast reading of large chunks
16:22 < bridge> That would be surprising
16:23 < bridge> From the name it sounds like it's the texture view for a texture array
16:23 < bridge> Every texture has a texture view
16:23 < bridge> In case the internal image is compressed or stuff like that
16:23 < bridge> that is what threw me off every time
16:23 < bridge> From the VK point of view at least
16:24 < bridge> `TextureViewArray(&'a [&'a TextureView]),` indicates however that we can use texture views from different textures
16:24 < bridge> Yeah true
16:25 < bridge> but it's so very confusing, you might be right and my previous understanding might be correct
16:25 < bridge> Why not call it an array of texture views then xd
16:26 < bridge> It's confusing. And to me the only logical explanation is that they wanted to abstract away multi texture bindings
16:26 < bridge> But it has a completely different max limit per shader stage
16:26 < bridge> Better be careful with it
16:26 < bridge> https://docs.rs/wgpu/latest/wgpu/enum.BindingType.html
16:26 < bridge> when we define the layout, there are no array variants, the texture is described by https://docs.rs/wgpu/latest/wgpu/enum.TextureViewDimension.html
16:26 < bridge> this does indicate that we need `BindingResource::ViewArray`
16:26 < bridge> aaaarg
16:27 < bridge> Usually there is simply a depth attribute
16:28 < bridge> Is there none for texture views
16:28 < bridge> Array layer referring to webgpu
16:30 < bridge> Are they simply missing access functionality?
16:30 < bridge> there is https://docs.rs/wgpu/latest/wgpu/struct.TextureViewDescriptor.html#structfield.array_layer_count
16:30 < bridge> okay I'll open up an issue directly coz this is getting too weird
16:30 < bridge> But what exactly is the problem
16:31 < bridge> U have textures that have a layer count and a texture view that allows to say what layers you want to view
16:31 < bridge> That's all you need
16:32 < bridge> One texture 2d array has one texture view
16:32 < bridge> The texture view is simply a pre defined memory pointer
16:33 < bridge> hm
16:33 < bridge> so a texture_view_array is simply an entirely different thing where you can throw together any kinds of texture views from different textures
16:33 < bridge> Yes
16:34 < bridge> I think this is simply multi texturing
16:34 < bridge> The name is just bad
16:34 < bridge> `#![deny(clippy::missing_const_for_fn)]` add this on ur main module and rust will tell u to add const to functions that can have it, pretty pog
16:35 < bridge> but then how do you define the layout, when the `BindingType` doesn't differentiate between texture2darray and textureviewarray
16:35 < bridge> U use a single texture view
16:35 < bridge> There is nothing like a viewarray
16:36 < bridge> A texture view is the same for all possible textures
16:36 < bridge> It already defines what of the texture u want to view
16:36 < bridge> The view array they have in the enum is just a variant for multi texturing
16:37 < bridge> It has nothing to do with the texture type
16:37 < bridge> ah you are right, the arrays are one layer above the `BindingType`
16:37 < bridge> https://docs.rs/wgpu/latest/wgpu/struct.BindGroupLayoutEntry.html#structfield.count
16:37 < bridge> okay now everything seems to make sense, thank you :)
16:38 < bridge> I guess the only problem with using a texture2darray is that every layer must have the same size
16:39 < bridge> but I heard you have a strong opinion on skin size maximums, so I guess I can just scale them to the same size and it'll be just fine
16:40 < bridge> ah that is if I use texture2darrays for the skins, but it does sound better to me rn
16:40 < bridge> I'll see when I get there
16:43 < bridge> If you want to do it in one draw call, how do u solve the transparency problem
16:44 < bridge> which one would that be? getting the limbs of the tees in the correct drawing order?
16:45 < bridge> Yeah drawing order and blending
16:48 < bridge> hmm
16:52 < bridge> ```
16:52 < bridge> #![forbid(unsafe_code)]
16:52 < bridge> #![deny(warnings)]
16:52 < bridge> #![deny(clippy::missing_const_for_fn)]
16:52 < bridge> #![deny(clippy::nursery)]
16:52 < bridge> #![deny(clippy::pedantic)]
16:52 < bridge> ```
16:52 < bridge> i added this on a project and fixed everything
16:52 < bridge> im a madman
16:52 < bridge> embrace idiomatic
16:52 < bridge> :BASEDDEPT:
16:54 < bridge> @Jupeyy_Keks do vertex buffers loop around if the index exceeds the buffer length?
16:54 < bridge> No unsafe is hard
16:54 < bridge> depends on what u do
16:55 < bridge> I doubt xd
16:55 < bridge> Vulkan
16:55 < bridge> What else
16:55 < bridge> well then yes
16:55 < bridge> my project doesnt interact with ffi or drivers
16:55 < bridge> xd
16:55 < bridge> Ez
16:55 < bridge> redox os should become popular
16:56 < bridge> since the kernel is rust
16:56 < bridge> rustlib is rust
16:56 < bridge> Nah
16:56 < bridge> Micro kernels suck
16:56 < bridge> https://github.com/redox-os/relibc
16:56 < bridge> https://gitlab.redox-os.org/redox-os/relibc
16:57 < bridge> @Jupeyy_Keks using unsafe is not something inherently bad
16:57 < bridge> if u gotta use it u gotta use it
16:57 < bridge> just add a safety comment
16:57 < bridge> actually there is a lint to require u adding a safety comment
16:57 < bridge> Depends
16:58 < bridge> Can the version of opengl affect the quality of the game?
16:58 < bridge> Yes
16:58 < bridge> Gl1 ugly
16:58 < bridge> after all unsafe is there cuz the borrow checker is not perfect
16:58 < bridge> but better than nothing
16:58 < bridge> open gl 1.x?
16:58 < bridge> or 1.0
16:58 < bridge> Yes and 2.0
16:58 < bridge> and 3
16:58 < bridge> 1.x
16:58 < bridge> just use vulkan
16:59 < bridge> 3.0 is the first good one
16:59 < bridge> directx is ugly too
16:59 < bridge> like microsoft
16:59 < bridge> I've mixed feelings about this
17:00 < bridge> well its also cuz the borrow checker cant ensure safety of ffi
17:00 < bridge> and native
17:00 < bridge> If I had vulkan would the pixels be less visible when zoomed in?
17:00 < bridge> but the good thing about unsafe is safety encapsulation
17:00 < bridge> Sure u know where the bad memory access is. But if that results in having to redesign every higher level thing, then this unsafe was a design problem and not only unsafe.. kinda extreme case. But yeah
17:01 < bridge> ? Xd
17:01 < bridge> Is your monitor broken?
17:02 < bridge> vulkan does not provide higher-quality textures, those come from the maps/skins
17:02 < bridge> no
17:02 < bridge> quality strange
17:03 < bridge> ok
17:03 < bridge> Show screenshots
17:03 < bridge> And show what is wrong
17:03 < bridge> 1 sec
17:05 < bridge> @Jupeyy_Keks it looked better in my head, one draw call for each tee does look cleaner to implement, off the top of my head, I'd say
17:05 < bridge> - a texture2darray with all skin pngs
17:05 < bridge> - a constant vertex array with the vertices for the different limbs, including position, uv
17:05 < bridge> - a tee-specific vertex array with the step size { vertices of each limb }, which has the transformation matrices for each limb, and maybe we also include the skin index here
17:06 < bridge> Besides ash requires unsafe for every VK call anyway. I indeed only had one use case yet for unsafe
17:06 < bridge> And that was a performance reason
17:06 < bridge> ye i think ash is the low level
17:06 < bridge> what about vulkano
17:06 < bridge> I don't want to use it
17:06 < bridge> thats a valid reason, provided u know its safe
17:06 < bridge> Learning yet another high level graphics api sucks xd
17:06 < bridge> xd
17:07 < bridge> but ye ash is all unsafe cuz it has to be
17:07 < bridge> its unsafe to call any vulkan api
17:07 < bridge> I created a ray tracer with vulkano for uni once, it was pleasant to work with iirc
17:07 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1072186851915481158/image.png
17:07 < bridge> not with the ray-tracing extension tho
17:08 < bridge> i need to get into vulkan and graphics again
17:08 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1072186983323029686/image.png
17:08 < bridge> its always kinda fun
17:09 < bridge> why do i see these pixels?
17:09 < bridge> 2nd screenshot? And which renderer there?
17:09 < bridge> 1.5.0
17:09 < bridge> opengl
17:10 < bridge> Are u that one guy again tw?
17:10 < bridge> Btw
17:10 < bridge> Armenia xd
17:10 < bridge> yes
17:10 < bridge> Delete this message
17:11 < bridge> It resizes the body
17:12 < bridge> If I had a modern graphics card with Vulkan would those pixels be less visible?
17:12 < bridge> bcs gl1 does not require support for non power of two textures
17:12 < bridge> Yes
17:12 < bridge> a. ok
17:12 < bridge> U can also try a higher resolution skin
17:13 < bridge> My maximum resolution is 1366x768
17:14 < bridge> Do u have a github account?
17:14 < bridge> yes
17:15 < bridge> nick eeetadam
17:15 < bridge> https://github.com/ddnet/ddnet-data-svg/actions/runs/3610226025
17:16 < bridge> There u can download high res vanilla skins
17:16 < bridge> ok
17:19 < bridge> I downloaded it, what next?
17:19 < bridge> Unpack and put the skins in the skins directory
17:20 < bridge> U can open it by clicking it in the tee settings
17:43 < bridge> @Jupeyy_Keks i found a meme
17:43 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1072195907363287101/6hefag1ufjga1.webp
17:50 < bridge> where will i check when the player hooks in the character entity?
17:51 < bridge> no, do it in gamecore
17:52 < bridge> but you can add a field to the character core
17:53 < bridge> @heinrich5991 how to check if player is authed in the gamecore?
17:54 < bridge> add a field to the character core
17:54 < bridge> bool m_Authed
17:54 < bridge> then set that flag from outside
17:54 < bridge> hmm
17:55 < bridge> from player e.g., or character.cpp
17:55 < bridge> i will try
18:00 < bridge> its sad
18:03 < bridge> @heinrich5991 it worked!! thanks, but i'm setting it on tick, is there a way to set it only when the authed state changes?
18:05 < bridge> I don't know how
18:06 < bridge> setting it each tick is okayish
18:06 < bridge> yes, thank you
19:35 < bridge> after i've turned on vsync on my laptop by mistake i got a blackscreen.. any help
19:39 < bridge> ```
19:39 < bridge> "gfx_vsync 0"
19:39 < bridge> ```
19:39 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1072224936967548948/image.png
19:39 < bridge> and also update your GPU driver
19:39 < bridge> vsync shouldnt create black screens
19:39 < bridge> already did
19:39 < bridge> still getting black screen
19:39 < bridge> u did what i posted in the screenshot?
19:40 < bridge> not using steam
19:40 < bridge> all drivers fine
19:40 < bridge> then add it to the end of ddnet_settings.cfg
19:40 < bridge> sec
19:40 < bridge> yeah totally fine
19:41 < bridge> but without `"` in the cfg file
19:41 < bridge> ik
19:42 < bridge> well still blackscreen
19:42 < bridge> whats ur hardware and operating system
19:42 < bridge> i3 4gb ram
19:43 < bridge> i mean it was working, then i pressed on it by mistake and it was already black xd
19:43 < bridge> try
19:43 < bridge>
19:43 < bridge> ```
19:43 < bridge> gfx_gl_major 2
19:43 < bridge> gfx_gl_minor 0
19:43 < bridge> gfx_backend opengl
19:43 < bridge> ```
19:44 < bridge> in ddnet_settings.cfg?
19:44 < bridge> yes
19:44 < bridge> also try to click "restart" in the windows shutdown context menu
19:44 < bridge> that makes a clean restart without caches
19:44 < bridge> im using linux
19:44 < bridge> mh ok
19:45 < bridge> that makes stuff easier
19:45 < bridge> veri good!
19:45 < bridge> 🙂
19:45 < bridge> Hello, does anyone know how to put a lib on ddnet?
19:45 < bridge> open a terminal in the ddnet directory and type `./DDNet "gfx_backend opengl; gfx_gl_major 1"`
19:46 < bridge> its too hard to explain, best is you first try to understand the c++ ecosystem and cmake
19:46 < bridge> when i save it and start the client from console the settings get reset
19:46 < bridge> lemme try
19:46 < bridge> Could you give me an example?
19:46 < bridge> see the cmake directory
19:47 < bridge> there u see how SDL2 was added
19:47 < bridge> or opus
19:48 < bridge> well that worked, also one question too. im only getting about 30fps ingame.. any idea what i could do?
19:48 < bridge> @Jupeyy_Keks so many things
19:48 < bridge> you mean this?
19:48 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1072227318933758083/image.png
19:48 < bridge> well you only said i3
19:48 < bridge>
19:48 < bridge> what i3? there are like 20 versions of i3s
19:48 < bridge> i just want to add D++
19:48 < bridge> no
19:49 < bridge> on what operating system are u?
19:49 < bridge> linux
19:49 < bridge> this?
19:49 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1072227597586550784/image.png
19:49 < bridge> i3 M 330 2.13GHz x 2
19:50 < bridge> uff thats old
19:50 < bridge> i guess u have to live with 30fps
19:50 < bridge>
19:50 < bridge> turn on entities, lower resolution
19:50 < bridge> search in the arch wiki
19:51 < bridge> they often give help for old hardware
19:51 < bridge> i guess if i get 60 im fine
19:51 < bridge> xd
19:51 < bridge> im playing on my pc
19:51 < bridge> but its also nice when i can chill in bed and play there
19:51 < bridge> can u give me a link
19:51 < bridge> google can give u a link
19:52 < bridge> i hate google
19:52 < bridge> thats your problem
19:52 < bridge> and google me
19:59 < bridge> @Jupeyy_Keks wow, only switched from vulkan to opengl and now i have over 300 fps
19:59 < bridge> XD
20:00 < bridge> ah then u probably have a software renderer installed
20:00 < bridge> probably lavapipe
20:00 < bridge> but that also means u didnt do all these commands right
20:01 < bridge> bcs as u see they clearly reset to opengl
20:01 < bridge> because the config resets when i start ddnet
20:06 < bridge> well im fine, played a bit with the settings.. got about 800 fps now
20:06 < bridge> epic gamer moment
20:24 < bridge> pls, help
20:38 < bridge> https://github.com/cleverca22/not-os
20:38 < bridge> @Jupeyy_Keks lel
20:47 < bridge> generate tw os out of it
20:47 < bridge> i installed nix on my debian vps
20:47 < bridge> ill tinker with it
20:47 < bridge> one of the great things about nix is it doesnt require nixos
20:47 < bridge> and nix-shell is awesome
21:48 < bridge> Can someone help?
21:54 < bridge> it's probably too hard to explain for me
21:56 < bridge> I just wanted to add DPP (D++ for discord web hooks) to DDNet
22:00 < bridge> do you only want to send web hooks?
22:00 < bridge> webhooks are simple https requests
22:00 < bridge> you can simply do them with the built-in https client
22:00 < bridge> can you give me a simple example? @heinrich5991
22:01 < bridge> what OS are you on?
22:02 < bridge> linux
22:03 < bridge> quick tldr, but dont ping or dm me later to help u (i wont answer): you need to create a FindLibraryName.cmake under cmake/ and then look at how another lib is added, like miniupnp, and u should figure it out easily
22:03 < bridge> u need to know cmake
22:03 < bridge> "easily"
22:04 < bridge> I can't do it "easily" either
22:04 < bridge> well its easy for me
22:04 < bridge> xd
22:04 < bridge> https://discord.com/developers/docs/resources/webhook#execute-webhook
22:04 < bridge> depends on the library
22:04 < bridge> if ur findcmake doesnt work usually u need to add .so to the hint name
22:04 < bridge> i spend 30 mins sometimes with that
22:04 < bridge> do you have a test discord webhook @newlesstee?
22:04 < bridge> i have created a web hook
22:05 < bridge> but i want to use it, i found D++ which can help me
22:05 < bridge> but i dont know how to add it in CMakeLists
22:05 < bridge> can you share the URL, but replace the secrets with something else
22:06 < bridge>
22:06 < bridge> this is how i added libsodium
22:06 < bridge> https://discord.com/api/webhooks/10722288278472198/WJ_ttmFzObJUevzn9L-QxUMVeFt8Fp24wZIba
22:06 < bridge> @heinrich5991
22:07 < bridge> now type something like this in a terminal `curl https://discord.com/api/webhooks/10722288278472198/WJ_ttmFzObJUevzn9L-QxUMVeFt8Fp24wZIba --data '{"content":"test"}' -H 'Content-Type: application/json'`
22:08 < bridge> i want to do this in the DDNet server
22:08 < bridge> u should be able to
22:08 < bridge> ddnet has http
22:08 < bridge> yes, I understand. do you understand the `curl` request?
22:08 < bridge> yes 22:08 < bridge> I'm trying to show you that it's simple HTTPS requests, we can do DDNet server next 22:08 < bridge> I see 22:08 < bridge> hm 22:10 < bridge> try calling this function: https://github.com/ddnet/ddnet/blob/911bd0e69a8fee9b02f0cea26d50ecc2f000bc3c/src/engine/shared/http.h#L187 22:11 < bridge> and then pass that to `Engine()->AddJob()` 22:11 < bridge> @heinrich5991 how to pass the json as char? 22:12 < bridge> yes 22:12 < bridge> `"{\"content\":\"test\"}"` 22:34 < bridge> @heinrich5991 how to pass the type of std::unique_ptr in AddJob? 22:36 < bridge> you might be able to just pass it 22:36 < bridge> While I don't find this way of butting your head against C++ to learn it very efficient, you are probably looking for something along the lines of `make_unique` 22:36 < bridge> We have many examples of it in the code 22:37 < bridge> they already have `std::unique_ptr` 22:37 < bridge> Or maybe just a `HttpGet` 22:37 < bridge> learning c++ the ddnet way 22:37 < bridge> new book by learath 22:37 < bridge> but need `std::shared_ptr` 22:37 < bridge> but `CHttpRequest` is derived from `IJob` 22:37 < bridge> `error: cannot convert ‘std::unique_ptr’ to ‘std::shared_ptr’` 22:37 < bridge> try `std::move(your_unique_ptr)` 22:38 < bridge> try `Engine()->AddJob(std::move(your_unique_ptr))` 22:39 < bridge> I would never suggest anyone in a million years to take this approach, C/C++ just aren't very suited to this. 
Though I know we disagree on this 😄 22:39 < bridge> xD 22:40 < bridge> this is a opportunity for a shameless plug 22:40 < bridge> @newlesstee https://edgarluque.com/blog/intro-to-ddnet/ 22:40 < bridge> maybe this helps 22:40 < bridge> its not related to http tho 22:41 < bridge> I know I'm going the wrong way, but eventually I'll learn 22:41 < bridge> Difficult things make me motivated 22:41 < bridge> good 22:41 < bridge> we got a future dev here 22:41 < bridge> > This version of rustup fixes a warning incorrectly saying that signature verification failed for Rust releases. The warning was due to a dependency of Rustup including a time-based check preventing the use of SHA-1 from February 1st, 2023 onwards. 22:41 < bridge> > 22:41 < bridge> > Unfortunately Rust's release signing key uses SHA-1 to sign its subkeys, which resulted in all signatures being marked as invalid. Rustup 1.25.2 temporarily fixes the problem by allowing again the use of SHA-1. 22:41 < bridge> using SHA1… 22:42 < bridge> in 2015 many ppl used sha1 22:42 < bridge> Where is the Engine()? 
i only see in client side 22:42 < bridge> but ye shame 22:43 < bridge> ignore the Engine part, just call `AddJob` as before 22:43 < bridge> Like in an ideal world just the signatures `inline std::unique_ptr HttpGet(...)` and `virtual void AddJob(std::shared_ptr)` should just take you on one detour to `class CHttpRequest : public IJob` at that point the whole api should be quite clear 22:43 < bridge> learath writing a bible 22:44 < bridge> https://blog.torproject.org/arti_111_released/ 22:44 < bridge> tor being rewritten in rust, something I can get behind 🙂 22:44 < bridge> That is only possible if you know the different smart pointer types 22:44 < bridge> i sent this some time ago 22:44 < bridge> https://discord.com/channels/252358080522747904/293493549758939136/1070772474041618472 22:44 < bridge> 22:44 < bridge> ryo always faster 22:44 < bridge> on the rust hype train 22:44 < bridge> im the internet observer 22:45 < bridge> So, how will the proper fix be I wonder? Maybe splitting the upgrade line in 2? 22:45 < bridge> AddJob isnt declared 22:45 < bridge> how did you get this error? 22:45 < bridge> what code did you write? 22:46 < bridge> do a git diff and show us xd 22:46 < bridge> ```c++ 22:46 < bridge> std::unique_ptr pWebhook = HttpPostJson("https://discord.com/api/webhooks/10723566778798/WJ_ttmFbJUevzn9L-QxUMVeFt8FpwraxpfFSHlvJsXMwYba", "{\"content\": \"New player joined!\"}"); 22:46 < bridge> AddJob(std::move(pWebhook)); 22:46 < bridge> ``` 22:48 < bridge> in which function is this? 
22:49 < bridge> DDRace.cpp (i dont know if i can change this file but i'm only testing)
22:50 < bridge> `void CGameControllerDDRace::OnPlayerConnect(CPlayer *pPlayer)`
22:50 < bridge> @heinrich5991
22:50 < bridge> try `GameServer()->Engine()->AddJob(…)`
22:50 < bridge> I'm guessing you meant a different reply 😄
22:51 < bridge> i love you xD
22:51 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1072273306104897566/image.png
22:51 < bridge> thanks ❤️
22:52 < bridge> what is "upgrade line"? depending on the signing situation it's a bit hard and you need to maintain two signing paths
22:52 < bridge> but maybe it's easy and you can just upgrade everything to sha256
22:52 < bridge> another question is why someone chose sha1 after 2015
22:52 < bridge> it was known-broken well before that
22:53 < bridge> As in what we have with update2.ddnet.tw, update3.ddnet.tw etc. Split the line into two, one where you can upgrade all the way up to the last version with a sha1 signature, have a version there that supports both signatures and then future versions only support sha256
22:56 < bridge> Can't I change src/engine code? I tried to change some things and it didn't compile
22:57 < bridge> you can change src/engine code as well
23:07 < bridge> what function of str replaces a character?
23:08 < bridge> do you want to escape something for JSON?
23:08 < bridge> we got `JsonEscape` for that
23:10 < bridge> @heinrich5991 i want to replace this char: `
23:10 < bridge> to \\`
23:11 < bridge> you mean `"` to `\"`?
23:11 < bridge> `
23:11 < bridge> why?
23:11 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1072278455770693652/image.png
23:11 < bridge> https://cdn.discordapp.com/attachments/293493549758939136/1072278456018153472/image.png
23:12 < bridge> ah
23:12 < bridge> `\``
23:12 < bridge> yes
23:12 < bridge> but ` \` `
23:12 < bridge> but " is good too
23:12 < bridge> escaping ` doesn't seem to work in discord
23:12 < bridge> hm
23:13 < bridge> ` just a test\` `
23:13 < bridge> dont work 😳
23:19 < bridge> https://www.acepace.net/integerQuiz/
23:19 < bridge> mandatory test for C/C++ developers
23:19 < bridge> got three mistakes ^^
23:21 < bridge> how to remove it from the char?
23:22 < bridge> manually, go through the string
23:22 < bridge> and copy it over to a new buffer
23:23 < bridge> but skip the ``` ` ```
23:23 < bridge> omg
23:23 < bridge> is there no method for this? can I create one and send it in a PR?
23:25 < bridge> yes, I think that would be fine
23:28 < bridge> hello anyone know how to fix fps drops ingame + lag?
23:28 < bridge> laptop i7 12gen 16gb ram but i lag too much, ty
23:34 < bridge> all correct
23:34 < bridge>
23:34 < bridge> but SCHAR_MAX == CHAR_MAX was luck xD
23:34 < bridge> never heard of schar_max
23:35 < bridge> guess signed char
23:35 < bridge> yup
23:35 < bridge> I went into the quiz and thought I should be able to get all correct
23:35 < bridge> ok one false, the last
23:35 < bridge> xD
23:36 < bridge> but in some questions I forgot some rule of C ^^
23:36 < bridge> like int promotion
23:36 < bridge> i mean like 90% of the questions are about that xD
23:36 < bridge> ```
23:36 < bridge> unsigned short multiply(unsigned short a, unsigned short b) {
23:36 < bridge> return a * b;
23:36 < bridge> }
23:36 < bridge> ```
23:37 < bridge> yeah tricky
23:37 < bridge> we even had a bug like that in a pr in ddnet
23:37 < bridge> ah ^^
23:38 < bridge> ok deen has 1k prs xD
23:38 < bridge> not going to find it
23:38 < bridge> but it was smth in system.cpp
23:50 < bridge> did anyone suggest the ability to set a SRV record to get the port?
23:52 < bridge> discord's markdown parser is a bit of a doozy