0 Followers
0 Following
6 Posts
Enterprise misery - Lemmy.World


I think Deno made a huge mistake. (Node compatibility)

https://lemmy.world/post/24147152

I think Deno made a huge mistake. (Node compatibility) - Lemmy.World

# I think Deno made a huge mistake.

Deno was intended to be the redo of ‘JavaScript outside the browser’, making it simpler while getting rid of the legacy. When Deno 1.0 was released in 2020, Deno was its own thing. Deno bet hard on ESM, reused web APIs and conventions wherever possible, pushed for URL imports instead of node_modules, supported executing TypeScript files without tsx or tsconfig.json, and so on. However, since 2022, Deno has been imitating Node more and more, and this is destroying Deno’s ecosystem.

# Users’ Perspective

> “If Deno implemented Node APIs and tried to imitate Node and NPM ways of doing things, existing libraries and frameworks written for Node would automatically work in Deno, and thus adopting Deno would be easier.”

I don’t know who said this; someone must have. What has happened instead is that Deno’s imitation of Node has disincentivized the formation of any practical Deno ecosystem, while the existing libraries and frameworks remain unreliable when used with Deno.

I tried using Next.js via Deno some time back, and the Next.js dev server crashed when Turbopack was enabled. There is a workaround [https://github.com/denoland/deno/issues/26584], so for the time being that issue is solved. But today there is another issue: type checking (and the LSP) for JSX is broken.

This is my experience with using Node libraries with Deno: every hour of work is accompanied by another hour (sometimes more) of troubleshooting the libraries themselves. I think this is the consequence of trying to imitate something you are not. Deno is trying to be compatible with Node, but there are gaps in that compatibility. I think achieving compatibility with Node is hard, and the gaps will stay for a long time. For example, at the time of writing, FileHandle.readLines is not implemented in Deno.
```ts
import fs from 'node:fs/promises';

const hd = await fs.open(Deno.args[0]);
for await (const line of hd.readLines()) {
  console.log("Line: ", line);
}
```

The above script crashes despite having no issues with TypeScript.

```
$ deno check test.ts
Check file://path/to/test.ts
$ deno run -R test.ts input.txt
error: Uncaught (in promise) TypeError: hd.readLines(...) is not a function or its return value is not async iterable
for await (const line of hd.readLines()) {
                            ^
    at file://path/to/test.ts:4:29
$
```

Using NPM libraries is also typically accompanied by a complete disregard for Deno’s security features: you just end up running deno with `-A` all the time.

# Library devs’ Perspective

Deno 1.0 was released, and library devs were excited to join the ecosystem. Projects like drollup [https://github.com/cmorten/deno-rollup], denodb [https://github.com/eveningkid/denodb], and drizzle-deno [https://github.com/shreyascodes-tech/drizzle-deno] were started. But then Deno announced Node and NPM compatibility, and all that momentum was gone. Now it seems like Deno’s practical ecosystem is limited to first-party libraries like @std and Fresh, libraries on JSR, and a small subset of libraries on NPM that happen to work on Deno.

If you look at the situation from a library or framework dev’s perspective, it all seems reasonable. Most of them are not new to JavaScript; they are much more familiar with Node than with Deno. When Deno was announced, some of them might have wanted to contribute to Deno’s ecosystem. But then Deno announced Node and NPM compatibility, and now there is not enough incentive to develop software for Deno. It doesn’t matter that Node compatibility is spotty, because they’d rather just go back to using Node like they’re used to. Supporting multiple runtimes is painful; if you want to understand the pain, ask anyone who has tried to ship a cross-platform application written in C or C++.

# Deno should have promoted its own API

If the competition is trying to be more like Node, Node is the winner.
There is a lesson to be learned here: if you are trying to replace a legacy system, don’t re-implement the same legacy system. Instead, put the burden of backwards compatibility on the legacy system.

Deno aimed to uncomplicate JavaScript. (Deno’s homepage literally says that.) By trying to mimic Node, Deno has unintentionally put Node’s complexity problem at center stage, and now it cannot be removed. Instead of being a brand-new thing, Deno ended up being a less reliable variant of Node.

Deno should have supported its own API on top of Node instead. Since Deno controls its API, supporting its own API on Node would be simpler than supporting Node’s APIs. For library and framework developers, libraries made for Deno would then work on Node, and there would be no need to support multiple runtimes. This would have resulted in a much larger ecosystem of software made for Deno, one that is more reliable and free of Node’s legacy.
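To make the “Deno API on top of Node” idea concrete, here is a hypothetical sketch, not anything Deno actually ships: `Deno.readTextFile` is a real Deno API, but this Node-side polyfill and its module shape are my own illustration.

```typescript
// Hypothetical sketch: implementing a Deno-style readTextFile on Node.
// Code written against this Deno-flavored API would then run on both
// runtimes, with the compatibility burden on the Node side.
import { readFile, writeFile } from "node:fs/promises";

export async function readTextFile(path: string | URL): Promise<string> {
  // Deno.readTextFile always decodes as UTF-8; mirror that here.
  return readFile(path, { encoding: "utf8" });
}

// Usage: a caller written for the Deno-style API, running on Node.
await writeFile("/tmp/hello.txt", "hello from the shim\n");
console.log(await readTextFile("/tmp/hello.txt"));
```

Since the shim only has to track an API the Deno team itself controls, it could move in lockstep with Deno releases instead of chasing Node's behavior.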

I made a library similar to Testcontainers, but works differently.

https://lemmy.world/post/21332159

I made a library similar to Testcontainers, but works differently. - Lemmy.World

Testcontainers is a library that starts your test dependencies in a container and stops them after you are done using them. Testcontainers needs access to the Docker socket so it can mount it into its reaper container, so I made a (for now minimal) alternative library that does not need Docker socket access. It also works with daemonless Podman.

Don't use any clicking scripts.

https://lemmy.world/post/18660904

Don't use any clicking scripts. - Lemmy.World

Don't use any clicking scripts.

https://lemmy.world/post/18660900

Don't use any clicking scripts. - Lemmy.World

Self terminating container images for unit testing

https://lemmy.world/post/18453076

Self terminating container images for unit testing - Lemmy.World

I got average monthly ratings for games on Wine AppDB, and seems like something happened in 2016.

https://lemmy.world/post/18029616

I got average monthly ratings for games on Wine AppDB, and seems like something happened in 2016. - Lemmy.World

I took each rating for games on the Wine Application Database, mapped the ratings to numbers (Garbage -> 1, Bronze -> 2, Silver -> 3, Gold -> 4, Platinum -> 5), and plotted a monthly average.
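The mapping and averaging step is simple; here is a sketch of it. The rating scale is the one described above, but the sample entries and data shape are made up for illustration.

```typescript
// Illustrative sketch of the rating-to-number mapping and monthly
// averaging described above. The sample entries are made up.
const score: Record<string, number> = {
  Garbage: 1, Bronze: 2, Silver: 3, Gold: 4, Platinum: 5,
};

// (month, rating) pairs, as they might come out of an AppDB export.
const ratings: Array<[string, string]> = [
  ["2016-01", "Gold"], ["2016-01", "Silver"], ["2016-02", "Platinum"],
];

// Accumulate per-month totals, then divide for the average.
const sums = new Map<string, { total: number; count: number }>();
for (const [month, rating] of ratings) {
  const e = sums.get(month) ?? { total: 0, count: 0 };
  e.total += score[rating];
  e.count += 1;
  sums.set(month, e);
}

for (const [month, { total, count }] of sums) {
  console.log(month, (total / count).toFixed(2)); // e.g. "2016-01 3.50"
}
```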

Give your JS codebase what it deserves!

https://lemmy.world/post/17809299

Give your JS codebase what it deserves! - Lemmy.World

Testing a routing protocol using network namespaces

https://lemmy.world/post/16174809

Testing a routing protocol using network namespaces - Lemmy.World

How exactly does linux use prefix length assigned to network interface?

https://lemmy.world/post/11910769

How exactly does linux use prefix length assigned to network interface? - Lemmy.World

I was exploring direct links between machines, and basically failed to break something. I assigned the IP address 192.168.0.1/24 to eth0 in two ways.

A: Adding 192.168.0.1/24 as usual

```
# ip addr add 192.168.0.1/24 dev eth0
# ping -c 1 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.051 ms

--- 192.168.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
#
```

B: Adding 192.168.0.1/32 and adding a /24 route

```
# ip addr add 192.168.0.1/32 dev eth0
#
# 192.168.0.2 should not be reachable.
# ping -c 1 192.168.0.2
ping: connect: Network is unreachable
#
# But after adding a route, it is.
# ip route add 192.168.0.0/24 dev eth0
# ping -c 1 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.053 ms

--- 192.168.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
#
```

Does this mean that adding an IP address with a prefix is just shorthand for adding the IP address with a /32 prefix and then adding a route? That is, does the prefix length have no meaning of its own, with the real work done by the route entries? Or is there any functional difference between the two methods?

Here is another case: these two nodes can reach each other via a direct connection (no router in between) but don’t share a subnet.

Node 1:

```
# ip addr add 192.168.0.1/24 dev eth0
# ip route add 192.168.1.0/24 dev eth0
#
# Finish the config on Node 2
# nc 192.168.1.1 8080 <<< "Message from 192.168.0.1"
Response from 192.168.1.1
```

Node 2:

```
# ip addr add 192.168.1.1/24 dev eth0
# ip route add 192.168.0.0/24 dev eth0
#
# Finish the config on Node 1
# nc -l 0.0.0.0 8080 <<< "Response from 192.168.1.1"
Message from 192.168.0.1
```
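The reachability decisions above come down to prefix matching against the route table. A tiny sketch of that matching itself (illustrative and IPv4-only; the kernel's actual route lookup uses longest-prefix match over a trie, not a linear check like this):

```typescript
// Illustrative sketch: the prefix match behind "is 192.168.0.2
// reachable via this route?". IPv4 only; made-up helper names.
function ipToInt(ip: string): number {
  // "192.168.0.2" -> 0xC0A80002 as an unsigned 32-bit number.
  return ip.split(".").reduce((acc, o) => (acc << 8) | parseInt(o, 10), 0) >>> 0;
}

function matchesPrefix(ip: string, prefix: string): boolean {
  const [net, lenStr] = prefix.split("/");
  const len = parseInt(lenStr, 10);
  // Build the netmask, e.g. /24 -> 0xFFFFFF00.
  const mask = len === 0 ? 0 : (0xffffffff << (32 - len)) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(net) & mask);
}

// The /32 address alone covers only itself, so 192.168.0.2 is
// unreachable until the /24 route is added.
console.log(matchesPrefix("192.168.0.2", "192.168.0.1/32")); // false
console.log(matchesPrefix("192.168.0.2", "192.168.0.0/24")); // true
```

This is consistent with case B above: the /32 address by itself matches nothing else on the link, and it is the /24 route entry that makes the rest of the subnet reachable.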