Yet another reason to stay away from Google Chrome, among many others; a new one just came up: Google Chrome silently installed a 4 GB #AI #model for a user without asking.
https://www.thatprivacyguy.com/blog/chrome-silent-nano-install/
The #Linux #Kernel now has a #Guide for installing and using new #Filesystems:
https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git/commit/?h=vfs-7.2.misc&id=b34d597faae60a4c89235205478497b975e86bc5
A project in the Netherlands: workplaces running #Linux and #FOSS (source in Dutch):
https://vng.nl/artikelen/digitale-werkplek-geen-memo-maar-demo-werkplekken-in-open-source

SSD death ☠️💀

I am busy rebuilding the data from an SSD that died suddenly, from one moment to the next, without giving any warning whatsoever.

I always monitor the SMART output of my SSDs and mechanical spinners (HDDs).

I never saw any SMART output indicating imminent death.

This drive acted like your girlfriend when she's just not in a good mood and, without explanation, says nothing to you for hours in the morning.

This hard crash means that the S.M.A.R.T. monitoring logic no longer had proper communication with the integrated circuits on the SSD.

The drive was powered up many times a month and never left without power for more than a week or so, so lack of power was not a contributing factor in the SSD's catastrophic failure.

The drive itself is fairly small.

The data on this dead SSD is backed up on remote drives, connected in JBOD configuration to machines I run remotely.

No ZFS on my backup machines

  • I want to run ZFS natively
  • that means running a BSD OS on those machines
  • that also means I would need to back up / restore all the data on those drives, meaning
  • I would need 200% of the used HDD / SSD space on those machines
  • I need patience for that backup
  • I need expensive extra HDDs for that project
  • I won't pay USD 300 for a USD 120 HDD

NO ZFS at these global SSD / HDD market prices

Sources:

  • Moi
  • man ls(1)
  • man lsd(1)
  • man cp(1)
  • man smartctl(8)
  • man zfs(8)

#HDD #SSD #crash #no #warning #on #TV #filesystems #remote #backup #network #JBOD #SMART #programming #bacula

@RandamuMaki @alicemcalicepants
I think we have the basis of an age verification system we can all get behind, here.

#fileSystems #ageVerification #socialMedia

@Thorsted @chronodm @archivist_Liz @beet_keeper Interesting article! So that's how Apple did it "back then".

I like their idea, and believe it should be revived, but differently: RDF-based in the end.

I claim that with unlimited-xattr-capable #filesystems, and with "related annotated objects" taken seriously as the default/common storage paradigm, things would evolve stably across the Internet, similar to, but better than, the DOS-8.3-to-UTF+emoji evolution of filenames. ❤️ ⭐

Let's chat! You got my email?

New article: Inside ZFS 🔥

A walk through the three layers (SPA, DMU, DSL), the 128-byte block pointer that makes the whole pool a Merkle tree, the uberblock ring, and why snapshots are O(1).

👉 https://internals-for-interns.com/posts/zfs-filesystem/

#ZFS #Filesystems

ZFS | Internals for Interns

In the previous article , we explored Btrfs—a copy-on-write filesystem built around a single kind of B-tree, where every file, extent, checksum and chunk mapping lives as a tagged item in some tree, and snapshots fall out of the reference-counted extent design. Btrfs took a lot of inspiration from an older system that pioneered most of these ideas: ZFS. ZFS started life at Sun Microsystems in the mid-2000s and now lives on as OpenZFS, ported to Linux, FreeBSD, illumos, and macOS. From the outside it solves the same problems as Btrfs—pooled storage, copy-on-write, snapshots, checksums, integrated RAID—but the shape underneath is genuinely different. Where Btrfs leaned on one universal B-tree node format and a single key shape, ZFS leans on something else entirely: a 128-byte block pointer that fully describes the block it points to, and a strict three-layer architecture stacked on top of it.

Internals for Interns

Spent the weekend deep in ZFS internals 🤓

Next up in the filesystems series: how a 128-byte block pointer turns the entire pool into a Merkle tree, and why snapshots are basically free (just copy a pointer + stamp a TXG).

SPA ↔ DMU ↔ DSL, uberblocks, vdevs, ARC, ZIL.

Drops Monday

#ZFS #Filesystems

@jamesh

Of course it does not. Passing the root directory and the working directory as file descriptors takes two descriptors, and openat2() only has one descriptor parameter.

The idea is that application mode code explicitly passes in all of the things that would normally be internally referenced from fields in the process structure, such as the root directory, the working directory, and the user credential set. And everything then just proceeds per the #Unix namei of old.

We've been frustratingly close to this for decades, and no-one has quite invented it.

With it, @swick's privileged server program opens the root directory, opens the working directory, opens/receives a credentials descriptor, and then just calls the syscall with the client-supplied paths. All of the TOCTOU problems with path normalization vanish. All of the multi-client parallel sete[ug]id and chdir synchronization problems vanish.

#filesystems

@swick

I've long thought that there's a hole that needs filling, that does what the original #Unix namei does but allows application mode code to supply everything necessary as (opaque) open descriptors: the root directory, the working directory, and the security credentials.

Frustratingly, Unix openat(), Windows NT's NtCreateFile(), and #Hurd's dir_lookup() all come close but all miss a final piece of the puzzle in different ways. openat() misses, for example, a descriptor for the root directory and something like NT's process token handles for security processing. NT has odd ideas about current directories.

This way, server processes could simply make use of the kernel's own already existing logic to handle not traversing '..' over a changed root, following symbolic links, and checking security using client credentials.

There's so much reinvention of this wheel that would have been resolved decades ago if it had only been exposed as a system call.

#filesystems

#AWS introduced #S3Files, letting users mount an Amazon S3 bucket and access data through a standard file system interface.

Applications can read and write files with standard file operations, while the system translates them into S3 requests - allowing compute services to work directly with S3-stored data.

Find out more: https://bit.ly/4mGE3Vy

#InfoQ #CloudComputing #FileSystems #DataStorage

Kent Overstreet released Bcachefs 1.38 on Saturday via DKMS (an out-of-tree kernel module path), the second release since the filesystem's removal from mainline. The on-mount allocator deadlock that had stuck users through three releases is finally fixed, journal pipelining moves from a 16-entry cap to 256 entries, and an accidental quadratic snapshot-table grow path is gone. Mainline status is still a matter for committee meetings; in the meantime, the 1.38 changelog is what moves a fleet off the write-blocking deadlock today.

#Linux #FOSS #Filesystems #OpenSource