Message-ID: <8a395f69-ce4a-418a-b4a9-30ed83e0fbef@gmx.com>
Date: Mon, 13 Jan 2025 07:41:53 +0000
From: "Artem S. Tashkinov" <aros@....com>
To: Theodore Ts'o <tytso@....edu>
Cc: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: Spooling large metadata updates / Proposal for a new API/feature
in the Linux Kernel (VFS/Filesystems):
My use case wasn't about the journal per se; I'm not using one on my
ext4 partitions.
Correct me if I'm wrong, but I was thinking about this:
== Issue number one ==
Let's say you chmod two files sequentially:
What's happening:
1) the kernel looks up the file's inode
2) the kernel then updates the metadata (at the very least one logical
block is written, i.e. 4KB)
Ditto for the second file.
Now let's say we are updating 10000 files.
Does this mean that at least 40MB of data (10000 x 4KB) will be
written, when probably less than 500KB actually needs to change on disk?
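Roughly the pattern I have in mind, sketched in C purely for
illustration (the directory name, file names and count are made up):

    /* Illustration only: 10000 sequential chmod() calls. Each call can
     * dirty at least one 4KB metadata block of its own, which is where
     * the 10000 x 4KB = ~40MB figure above comes from, even though the
     * bytes that actually change add up to well under 500KB. */
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        char path[64];

        for (int i = 0; i < 10000; i++) {
            snprintf(path, sizeof(path), "data/file-%05d", i);
            if (chmod(path, 0644) != 0)
                perror(path);
        }
        return 0;
    }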
== Issue number two ==
At least when you write file data, the kernel doesn't flush it to disk
immediately, and your system remains responsive thanks to the use of
dirty buffers.
For operations involving metadata updates, the kernel may not have this
luxury, because the system must be in a consistent state even if it's
accidentally or intentionally powered off.
So, metadata updates must be carried out immediately, and flushing the
40MB from the example above, as opposed to the roughly 500KB that
actually changes on disk, can bring the system to a halt.
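For contrast, here is the data-write side sketched in C (the file name
and sizes are made up): write() just dirties pages in the page cache
and returns; nothing has to reach the disk until background writeback
runs or fsync() is called.

    /* Data writes are deferred: write() dirties page cache pages and
     * returns immediately; the kernel flushes them later in the
     * background. Only an explicit fsync() would force them out now. */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        int fd = open("scratch.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0)
            return 1;
        memset(buf, 'x', sizeof(buf));

        for (int i = 0; i < 10000; i++)
            write(fd, buf, sizeof(buf));  /* ~40MB buffered, returns fast */

        /* fsync(fd); */                  /* only this would force a flush */
        close(fd);
        return 0;
    }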
So the feature I'm looking for would be a way to tell the kernel: hey,
I'm about to batch 10000 operations, please be considerate, do your
thing in one fell swoop while optimizing intermediate operations or
writes to the disk, and there's no rush, so you may as well postpone
the whole thing for as long as you want.
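Purely to illustrate the shape of it, nothing more: fs_batch_begin()
and fs_batch_end() below are invented names, no such calls exist today.

    /* Hypothetical sketch -- fs_batch_begin()/fs_batch_end() are made-up
     * calls that only show the usage pattern I'm describing. */
    #include <sys/stat.h>

    int fs_batch_begin(void);  /* "many metadata ops coming, coalesce them" */
    int fs_batch_end(void);    /* "done, flush whenever it's convenient"    */

    void relax_permissions(const char *paths[], int n)
    {
        fs_batch_begin();
        for (int i = 0; i < n; i++)
            chmod(paths[i], 0644);
        fs_batch_end();
    }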
Best regards,
Artem
On 1/12/25 5:27 AM, Theodore Ts'o wrote:
> On Sat, Jan 11, 2025 at 09:17:49AM +0000, Artem S. Tashkinov wrote:
>> Hello,
>>
>> I had this idea on 2021-11-07, then I thought it was wrong/stupid, now
>> I've asked AI and it said it was actually not bad, so I'm bringing it
>> forward now:
>>
>> Imagine the following scenarios:
>>
>> * You need to delete tens of thousands of files.
>> * You need to change the permissions, ownership, or security context
>> (chmod, chown, chcon) for tens of thousands of files.
>> * You need to update timestamps for tens of thousands of files.
>>
>> All these operations are currently relatively slow because they are
>> executed sequentially, generating significant I/O overhead.
>>
>> What if these operations could be spooled and performed as a single
>> transaction? By bundling metadata updates into one atomic operation,
>> such tasks could become near-instant or significantly faster. This would
>> also reduce the number of writes, leading to less wear and tear on
>> storage devices.
>
> As Amir has stated, pretty much all journalled file systems will
> combine a large number of file system operations into a single
> transaction, unless there is an explicit request via an fsync(2) system
> call. For example, ext4 in general only closes a journal transaction
> every five seconds, or sooner if there isn't enough space in the journal
> (although in practice this isn't an issue if you are using a reasonably
> modern mkfs.ext4, since we've increased the default size of the
> journal).
>
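[Not part of Ted's message: the five-second figure above is ext4's
default journal commit interval. As far as I know it can be stretched
with the "commit=" mount option; a minimal C illustration follows, with
the device and mount point as placeholders.]

    /* Illustration only: remount an already-mounted ext4 filesystem with
     * a 60-second journal commit interval instead of the default 5
     * seconds (ext4's "commit=nrsec" mount option). The device and mount
     * point below are placeholders. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        if (mount("/dev/sdXN", "/mnt", "ext4", MS_REMOUNT, "commit=60") != 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }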
> The reason why deleting a large number of files, or changing the
> permissions, ownership, timestamps, etc., of a large number of files,
> is slow is that you need to read the directory blocks to find the inodes
> that you need to modify, read a large number of inodes, update a large
> number of inodes, and if you are deleting the inodes, also update the
> block allocation metadata (bitmaps, or btrees) so that those blocks
> are marked as no longer in use. Some of the directory entries might
> be cached in the dentry cache, and some of the inodes might be cached
> in the inode cache, but that's not always the case.
>
> If all of the metadata blocks that you need to read in order to
> accomplish the operation are already cached in memory, then what you
> propose is something that pretty much all journaled file systems will
> do already, today. That is, the modifications that need to be made to
> the metadata will be written to the journal first, and only
> after the journal transaction has been committed, will the actual
> metadata blocks be written to the storage device, and this will be
> done asynchronously.
>
> In practice, the actual delay in doing one of these large operations is
> the need to read the metadata blocks into memory, and this must be
> done synchronously, since (for example), if you are deleting 100,000
> files, you first need to know which inodes belong to those 100,000 files by
> reading the directory blocks; you then need to know which blocks will
> be freed by deleting each of those 100,000 files, which means you will
> need to read 100,000 inodes and their extent tree blocks, and then you
> need to update the block allocation information, and that will require
> that you read the block allocation bitmaps so they can be updated.
>
>> Does this idea make sense? If it already exists, or if there’s a reason
>> it wouldn’t work, please let me know.
>
> So yes, it basically exists, although in practice, it doesn't work as
> well as you might think, because of the need to read potentially a
> large number of the metadata blocks. But for example, if you make sure
> that all of the inode information is already cached, e.g.:
>
> ls -lR /path/to/large/tree > /dev/null
>
> Then the operation to do a bulk update will be fast:
>
> time chown -R root:root /path/to/large/tree
>
> This demonstrates that the bottleneck tends to be *reading* the
> metadata blocks, not *writing* the metadata blocks.
>
> Cheers,
>
> - Ted
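[A C equivalent of the cache-warming step Ted shows above with ls -lR,
added for illustration: walk the tree and let every entry be stat()ed
first, so a subsequent bulk chown/chmod mostly finds the metadata it
needs already cached. The path is the same placeholder used above.]

    /* Same idea as "ls -lR /path/to/large/tree > /dev/null": nftw()
     * lstat()s every entry, pulling directory blocks and inodes into the
     * caches before the actual bulk metadata update runs. */
    #define _XOPEN_SOURCE 500
    #include <ftw.h>
    #include <stdio.h>
    #include <sys/stat.h>

    static int touch_inode(const char *path, const struct stat *sb,
                           int typeflag, struct FTW *ftwbuf)
    {
        (void)path; (void)sb; (void)typeflag; (void)ftwbuf;
        return 0;   /* the lstat() done by nftw() is all we need */
    }

    int main(void)
    {
        /* FTW_PHYS: do not follow symbolic links while walking */
        if (nftw("/path/to/large/tree", touch_inode, 64, FTW_PHYS) != 0)
            perror("nftw");
        return 0;
    }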