Message-ID: <20250112052743.GH1323402@mit.edu>
Date: Sun, 12 Jan 2025 00:27:43 -0500
From: "Theodore Ts'o" <tytso@....edu>
To: "Artem S. Tashkinov" <aros@....com>
Cc: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: Spooling large metadata updates / Proposal for a new API/feature
 in the Linux Kernel (VFS/Filesystems):

On Sat, Jan 11, 2025 at 09:17:49AM +0000, Artem S. Tashkinov wrote:
> Hello,
> 
> I had this idea on 2021-11-07, then I thought it was wrong/stupid, now
> I've asked AI and it said it was actually not bad, so I'm bringing it
> forward now:
> 
> Imagine the following scenarios:
> 
>  * You need to delete tens of thousands of files.
>  * You need to change the permissions, ownership, or security context
> (chmod, chown, chcon) for tens of thousands of files.
>  * You need to update timestamps for tens of thousands of files.
> 
> All these operations are currently relatively slow because they are
> executed sequentially, generating significant I/O overhead.
> 
> What if these operations could be spooled and performed as a single
> transaction? By bundling metadata updates into one atomic operation,
> such tasks could become near-instant or significantly faster. This would
> also reduce the number of writes, leading to less wear and tear on
> storage devices.

As Amir has stated, pretty much all journalled file systems will
combine a large number of file system operations into a single
transaction, unless there is an explicit request via an fsync(2)
system call.  For example, ext4 in general only closes a journal
transaction every five seconds, or when there isn't enough space in
the journal (although in practice this isn't an issue if you are
using a reasonably modern mkfs.ext4, since we've increased the
default size of the journal).
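
For example, you can see how large the journal is and tune how often
transactions are committed.  The device name and mount point below
are just placeholders, and the exact fields printed by dumpe2fs vary
with the e2fsprogs version:

   # show the journal fields from the ext4 superblock
   dumpe2fs -h /dev/sdb1 | grep -i journal

   # commit the journal every 30 seconds instead of the default 5
   mount -o remount,commit=30 /mnt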

The reason why deleting a large number of files, or changing the
permissions, ownership, timestamps, etc., of a large number of files
is slow is that you need to read the directory blocks to find the
inodes that you need to modify, read a large number of inodes, update
a large number of inodes, and, if you are deleting the inodes, also
update the block allocation metadata (bitmaps or btrees) so that
those blocks are marked as no longer in use.  Some of the directory
entries might be cached in the dentry cache, and some of the inodes
might be cached in the inode cache, but that's not always the case.
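
You can get a rough sense of how much of that metadata is currently
cached by looking at the slab caches; the ext4_inode_cache name below
assumes ext4, and other file systems use their own inode caches:

   sudo grep -E '^(dentry|ext4_inode_cache)' /proc/slabinfo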

If all of the metadata blocks that you need to read in order to
accomplish the operation are already cached in memory, then what you
propose is something that pretty much all journaled file systems
already do today.  That is, the modifications that need to be made to
the metadata will be written to the journal first, and only after the
journal transaction has been committed will the actual metadata
blocks be written to the storage device, and this will be done
asynchronously.
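
On ext4 you can see this batching in the jbd2 statistics, which show
(among other things) how many handles, i.e. individual metadata
updates, ended up in each committed transaction.  The "sdb1-8"
directory name is just an example; it will match your device and
journal inode:

   cat /proc/fs/jbd2/sdb1-8/info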

In practice, the actual delay in doing one of these large operations
is the need to read the metadata blocks into memory, and this must be
done synchronously.  For example, if you are deleting 100,000 files,
you first need to know which inodes belong to those 100,000 files by
reading the directory blocks; you then need to know which blocks will
be freed by deleting each of those 100,000 files, which means you
will need to read 100,000 inodes and their extent tree blocks; and
then you need to update the block allocation information, which
requires that you read the block allocation bitmaps so they can be
updated.
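
You can watch this with iostat (from the sysstat package) while
running one of these bulk operations on a cold cache; the device will
be doing mostly reads, with the writes largely deferred until the
journal commits.  "sdb" is a placeholder device name:

   iostat -x 1 sdb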

> Does this idea make sense? If it already exists, or if there’s a reason
> it wouldn’t work, please let me know.

So yes, it basically exists, although in practice it doesn't work as
well as you might think, because of the need to read a potentially
large number of metadata blocks.  But for example, if you make sure
that all of the inode information is already cached, e.g.:

   ls -lR /path/to/large/tree > /dev/null

Then the operation to do a bulk update will be fast:

   time chown -R root:root /path/to/large/tree

This demonstrates that the bottleneck tends to be *reading* the
metadata blocks, not *writing* the metadata blocks.
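
A quick way to see this for yourself is to time the same bulk
operation with a cold cache and then with a warm cache (dropping the
caches is harmless, but it will temporarily slow down everything else
on the system):

   # cold cache: the metadata has to be read from the storage device
   sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
   time chown -R root:root /path/to/large/tree

   # warm cache: the metadata is already in memory
   ls -lR /path/to/large/tree > /dev/null
   time chown -R root:root /path/to/large/tree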

Cheers,

				- Ted
