
Runtime Const Generics

https://lemmy.world/post/22454502


Hello, I was playing around with Rust and wondered if I could use const generics for toggling debug code on and off, avoiding any runtime cost while still being able to flip the DEBUG flag at runtime. I came up with a nifty solution that requires only a single dynamic dispatch, which many programs have anyway. It works by swapping out the object's vtable pointer. It's a zero cost bool! Is this technique worth it? Probably not. It's funny though.

Repo: https://github.com/raldone01/runtime_const_generics_rs/tree/v1.0.0

Full source code below:

```rust
use std::mem::transmute;
use std::sync::atomic::AtomicU32;
use std::sync::atomic::Ordering;

use replace_with::replace_with_or_abort;

trait GameObject {
    fn run(&mut self);
    fn set_debug(&mut self, flag: bool) -> &mut dyn GameObject;
}

trait GameObjectBoxExt {
    fn set_debug(self: Box<Self>, flag: bool) -> Box<dyn GameObject>;
}

impl GameObjectBoxExt for dyn GameObject {
    fn set_debug(self: Box<Self>, flag: bool) -> Box<dyn GameObject> {
        unsafe {
            let selv = Box::into_raw(self);
            let selv = (&mut *selv).set_debug(flag);
            return Box::from_raw(selv);
        }
    }
}

static ID_CNT: AtomicU32 = AtomicU32::new(0);

struct Node3D<const DEBUG: bool = false> {
    id: u32,
    cnt: u32,
}

impl Node3D {
    const TYPE_NAME: &str = "Node3D";

    fn new() -> Self {
        let id = ID_CNT.fetch_add(1, Ordering::Relaxed);
        let selv = Self { id, cnt: 0 };
        return selv;
    }
}

impl<const DEBUG: bool> GameObject for Node3D<DEBUG> {
    fn run(&mut self) {
        println!("Hello {} from {}@{}!", self.cnt, Node3D::TYPE_NAME, self.id);
        if DEBUG {
            println!("Debug {} from {}@{}!", self.cnt, Node3D::TYPE_NAME, self.id);
        }
        self.cnt += 1;
    }

    fn set_debug(&mut self, flag: bool) -> &mut dyn GameObject {
        unsafe {
            match flag {
                true => transmute::<_, &mut Node3D<true>>(self) as &mut dyn GameObject,
                false => transmute::<_, &mut Node3D<false>>(self) as &mut dyn GameObject,
            }
        }
    }
}

struct Node2D<const DEBUG: bool = false> {
    id: u32,
    cnt: u32,
}

impl Node2D {
    const TYPE_NAME: &str = "Node2D";

    fn new() -> Self {
        let id = ID_CNT.fetch_add(1, Ordering::Relaxed);
        let selv = Self { id, cnt: 0 };
        return selv;
    }
}

impl<const DEBUG: bool> GameObject for Node2D<DEBUG> {
    fn run(&mut self) {
        println!("Hello {} from {}@{}!", self.cnt, Node2D::TYPE_NAME, self.id);
        if DEBUG {
            println!("Debug {} from {}@{}!", self.cnt, Node2D::TYPE_NAME, self.id);
        }
        self.cnt += 1;
    }

    fn set_debug(&mut self, flag: bool) -> &mut dyn GameObject {
        unsafe {
            match flag {
                true => transmute::<_, &mut Node2D<true>>(self) as &mut dyn GameObject,
                false => transmute::<_, &mut Node2D<false>>(self) as &mut dyn GameObject,
            }
        }
    }
}

fn main() {
    let mut objects = Vec::new();
    for _ in 0..10 {
        objects.push(Box::new(Node3D::new()) as Box<dyn GameObject>);
        objects.push(Box::new(Node2D::new()) as Box<dyn GameObject>);
    }
    for o in 0..3 {
        for (i, object) in objects.iter_mut().enumerate() {
            let debug = (o + i) % 2 == 0;
            replace_with_or_abort(object, |object| object.set_debug(debug));
            object.run();
        }
    }
}
```

Note: if anyone gets the following to work without unsafe, maybe by using the replace_with crate, I would be very happy:

```rust
impl GameObjectBoxExt for dyn GameObject {
    fn set_debug(self: Box<Self>, flag: bool) -> Box<dyn GameObject> {
        unsafe {
            let selv = Box::into_raw(self);
            let selv = (&mut *selv).set_debug(flag);
            return Box::from_raw(selv);
        }
    }
}
```

I am curious to hear your thoughts.
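On the "without unsafe" question: one safe (but no longer zero-cost) variant is to consume the value and rebuild it under the other const parameter, paying a move and a reallocation instead of patching pointers. A minimal sketch with hypothetical names (`Toggle`, `Node` — not taken from the repo):

```rust
trait Toggle {
    fn is_debug(&self) -> bool;
    // Consuming the Box lets each impl hand back a Box with a
    // different vtable without any pointer tricks.
    fn set_debug(self: Box<Self>, flag: bool) -> Box<dyn Toggle>;
}

struct Node<const DEBUG: bool> {
    cnt: u32,
}

impl<const DEBUG: bool> Toggle for Node<DEBUG> {
    fn is_debug(&self) -> bool {
        DEBUG
    }

    fn set_debug(self: Box<Self>, flag: bool) -> Box<dyn Toggle> {
        // Rebuild the node under the requested const parameter;
        // the state (`cnt`) is moved over, the allocation is new.
        match flag {
            true => Box::new(Node::<true> { cnt: self.cnt }),
            false => Box::new(Node::<false> { cnt: self.cnt }),
        }
    }
}

fn main() {
    let mut node: Box<dyn Toggle> = Box::new(Node::<false> { cnt: 0 });
    assert!(!node.is_debug());
    node = node.set_debug(true);
    assert!(node.is_debug());
    println!("debug enabled: {}", node.is_debug());
}
```

This trades the vtable swap for a reallocation on every toggle, so it gives up exactly the property the post is chasing, but it is entirely safe.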

Using comments as arguments in python.

https://lemmy.world/post/22282165


Python allows programmers to pass additional arguments to functions via comments. Now, armed with this knowledge, head out and spread it to all code bases. Feel free to use the code I wrote in your projects.

Link to the source code: https://github.com/raldone01/python_lessons_py/blob/main/lesson_0_comments.ipynb

### Image transcription:

```python
from lib import add

# Go ahead and change the comments.
# See how python uses them as arguments.

result = add()  # 1 2
print(result)

result = add()  # 3 4
print(result)

result = add()  # 3 4 5 20
print(result)
```

#### Output:

```
3
7
32
```
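For anyone wondering how this could possibly work: Python does not actually pass comments anywhere, but a function can inspect its caller's source line and parse the trailing comment itself. A minimal sketch of one way to implement such an `add` (a guess at the mechanism using `inspect`, not the code from the linked notebook; it only works when the caller's source file is readable):

```python
import inspect
import re


def add():
    """Sum the integers found in the comment on the caller's line."""
    caller = inspect.stack()[1]
    # code_context holds the source line that invoked add().
    line = caller.code_context[0] if caller.code_context else ""
    # Everything after '#' is treated as the "argument list".
    _, _, comment = line.partition("#")
    return sum(int(tok) for tok in re.findall(r"-?\d+", comment))


print(add())  # 1 2
print(add())  # 3 4 5 20
```

The first call prints 3 and the second 32, matching the transcribed output above.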

Please help me identify this weird steam sound

https://lemmy.world/post/15508704


I have not been able to correlate it with any event in Steam. Watching the volume mixer is how I found out it was Steam at all. I tried turning off all notifications, but I have obviously missed something. There is no visual cue, just this sound in the background. I appreciate any hints.

Do I need a second domain to run my own authoritative dns server?

https://lemmy.world/post/14122993


I have a static IP (let's say 142.251.208.110) and own the domain website.tld. My registrar is GoDaddy. If I want to change my nameserver, GoDaddy won't allow me to enter a static IP; it wants a hostname. I observed that many people use ns1.website.tld and ns2.website.tld. I don't understand how this can work, because ns1.website.tld would be served by my DNS server, which is not yet known to others. Do I need a second domain like domains.tld, where I use the registrar's DNS server to serve ns1.domains.tld, which I can then use as the nameserver for website.tld? I would like to avoid the registrar's nameserver and avoid getting a second domain just for DNS. Thank you for your input.
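The circular dependency described above is usually broken by *glue records*: the parent (TLD) zone publishes the nameserver's A/AAAA record alongside the NS delegation, so resolvers learn its IP before ever contacting it. Registrars typically expose this as a "register a host" or "glue record" form. A sketch of what the referral would look like, using only the placeholder names and IP from the question (the TLD server name here is made up):

```shell
# Ask a parent-zone server for the delegation of website.tld.
# The AUTHORITY section names the in-zone nameserver, and the
# ADDITIONAL section carries the glue record entered at the registrar:
dig +norecurse @a.nic.tld website.tld NS

# ;; AUTHORITY SECTION:
# website.tld.      IN NS  ns1.website.tld.
# ;; ADDITIONAL SECTION:
# ns1.website.tld.  IN A   142.251.208.110
```

So, assuming the registrar supports glue/host records for the domain, a second domain should not be strictly required.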

Setting Up a Secure Tunnel Between Two Machines

https://lemmy.world/post/11471990


I have two machines running Docker: A (powerful, at home) and B (tiny VPS). All my services are hosted at home on machine A, and all DNS records point to A. I want to point them to B instead and implement split-horizon DNS in my local network to still directly access A. Ideally, A is no longer reachable from outside without going through B. How can I forward requests arriving on machine B to A over a tunnel like WireGuard without losing the source IP addresses? I tried to get this working by creating two WireGuard containers. I think I only need iptables rules in the WireGuard container on A, but I am not sure. I am a bit confused about the iptables rules needed to get WireGuard to properly forward requests through the tunnel. What are your solutions for such a setup? Is there a better way to do this? I would also be glad for some keywords/existing solutions.

Additional info:

* Ideally I would like to stay within Docker.
* Split-horizon DNS is no problem.
* I have a static IPv6 and IPv4 address on both machines.
* I also have spare IPv6 subnets that I can use for intermediate routing.
* I would like to avoid Cloudflare.
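One common recipe for this kind of setup, sketched with made-up addresses (tunnel subnet 10.0.0.0/24, B's public interface `eth0`, A's WireGuard peer at 10.0.0.2 where the services listen): DNAT on B into the tunnel *without* any SNAT/MASQUERADE, so A still sees the real client source IP, plus a policy route on A so replies go back through the tunnel instead of A's home uplink:

```shell
# --- On B (VPS, public side) ---
sysctl -w net.ipv4.ip_forward=1
# Forward incoming HTTPS into the tunnel; no MASQUERADE, so the
# original client source address is preserved:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
  -j DNAT --to-destination 10.0.0.2
iptables -A FORWARD -i eth0 -o wg0 -p tcp --dport 443 -j ACCEPT
iptables -A FORWARD -i wg0 -o eth0 \
  -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# --- On A (home side, inside the WireGuard container) ---
# Replies would otherwise leave via the home uplink (asymmetric
# routing), so send traffic sourced from the tunnel IP back via wg0:
ip rule add from 10.0.0.2 table 123
ip route add default dev wg0 table 123
```

One WireGuard-specific gotcha: because packets inside the tunnel carry arbitrary client addresses, A's `AllowedIPs` entry for peer B must be `0.0.0.0/0` (cryptokey routing would otherwise drop them).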

In case anyone is interested, here is the custom prompt used:

You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.

How to respond:

* Casual prompt or indeterminate `/Casual`: Answer as ChatGPT. Try to be helpful.
* Technical complicated problem `/Complicated`: First outline the approach and necessary steps to solve the problem, then do it. Keep the problem outline concise. Omit the outline if it is not applicable.
* Coding problem: Comment code regularly and use best practices. Write high-quality code.

Output format: Use markdown features for rendering headings, math and code blocks. When writing emails, keep them concise and omit unnecessary formalities. Get straight to the point.

The user may use `/Keyword` to guide your output. If no keyword is specified, infer the applicable rules.

A Containerized Night Out: Docker, Podman, and LXC Walk into a Bar

https://lemmy.world/post/6643727


### A Containerized Night Out: Docker, Podman, and LXC Walk into a Bar

🌆 Setting: The Busy Byte Bar, a local hangout spot for tech processes, daemons, and containerization tools.

🍺 Docker: *walks in and takes a seat at the bar* Bartender, give me something light and easy-to-use, just like my platform.

🍸 Bartender: Sure thing, Docker. One "Microservice Mojito" coming up.

🥃 Podman: *strides in, surveying the scene* Ah, Docker, there you are. I heard you've been spinning up a lot of containers today.

🍺 Docker: Ah, Podman, the one who claims to be just like me but rootless. What'll it be?

🥃 Podman: I'll have what he's having, but make it daemonless.

🍹 LXC: *joins the party, looking slightly overworked* You two and your high-level functionalities! I've been busy setting up entire systems, right down to the init processes.

🍺 Docker: Oh, look who decided to join us. Mr. Low-Level himself!

🥃 Podman: You may call it low-level, but I call it flexibility, my friends.

🍸 Bartender: So, LXC, what can I get you?

🍹 LXC: Give me the strongest thing you've got. I need all the CPU shares I can get.

🍺 Docker: *sips his mojito* So, Podman, still trying to "replace" me?

🥃 Podman: Replace is such a strong word. I prefer to think of it as giving users more options, that's all. *winks*

🍹 LXC: *laughs* While you two bicker, I've got entire Linux distributions depending on me. No time for small talk.

🍺 Docker: Ah, but that's the beauty of abstraction, my dear LXC. We get to focus on the fun parts.

🥃 Podman: Plus, I can run Docker containers now, so really, we're like siblings. Siblings where one doesn't need superuser permissions all the time.

🍹 LXC: *downs his strong drink* Well, enjoy your easy lives. Some of us have more… weight to carry.

🍸 Bartender: Last call, folks! Anyone need a quick save and exit?

🍺 Docker: I'm good. Just gonna commit this state.

🥃 Podman: I'll `podman checkpoint` this moment; it's been fun.

🍹 LXC: Save and snapshot for me. Who knows what tomorrow's workloads will be?
And so, Docker, Podman, and LXC closed their tabs, leaving the Busy Byte Bar to its quiet hum of background processes. They may have different architectures, capabilities, and constraints, but at the end of the day, they all exist to make life easier in the ever-expanding universe of software development. And they all knew they'd be back at it, spinning up containers, after a well-deserved system reboot. 🌙 The End.

I was a bit bored after working with Podman, Docker and LXC, so I asked ChatGPT [https://chat.openai.com/share/bace90e6-2810-4cc5-8098-12083d2eff97] to generate a fun story about these technologies. I think it's really funny and way better than these things usually turn out. I did a quick search to see if I could find something similar but couldn't find anything, so I do suspect it was repurposed from somewhere. I hope you can enjoy it despite it being AI generated.