Date:   Wed, 21 Apr 2021 12:57:39 -0400
From:   "Theodore Ts'o" <tytso@....edu>
To:     Jan Kara <jack@...e.cz>
Cc:     Christoph Hellwig <hch@...radead.org>,
        Zhang Yi <yi.zhang@...wei.com>, linux-ext4@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, adilger.kernel@...ger.ca,
        yukuai3@...wei.com
Subject: Re: [RFC PATCH v2 7/7] ext4: fix race between blkdev_releasepage()
 and ext4_put_super()

On Wed, Apr 21, 2021 at 03:46:34PM +0200, Jan Kara wrote:
> 
> Indeed, after 12 years in kernel .bdev_try_to_free_page is implemented only
> by ext4. So maybe it is not that important? I agree with Zhang and
> Christoph that getting the lifetime rules sorted out will be hairy and it
is questionable whether it is worth the additional pages we can reclaim.
> Ted, do you remember what was the original motivation for this?

The comment in fs/ext4/super.c is, I thought, a pretty good explanation:

/*
 * Try to release metadata pages (indirect blocks, directories) which are
 * mapped via the block device.  Since these pages could have journal heads
 * which would prevent try_to_free_buffers() from freeing them, we must use
 * jbd2 layer's try_to_free_buffers() function to release them.
 */

When we modify a metadata block, we attach a journal_head (jh)
structure to the buffer_head, and bump the ref count to prevent the
buffer from being freed.  Before the transaction is committed, the
buffer is marked jbddirty, but the dirty bit is not set until the
transaction commit.

After that, writeback happens entirely at the discretion of the
buffer cache.  The jbd layer doesn't get a notification when the I/O is
completed, nor when there is an I/O error.  (There was an attempt to
add a callback but that was NACK'ed because of a complaint that it was
jbd specific.)

So we don't actually know when it's safe to detach the jh from the
buffer_head and drop the refcount so that the buffer_head can be
freed.  When the space in the journal starts getting low, we'll look
at the jh's attached to completed transactions, and see how many of
them have clean bh's, and at that point, we can release the buffer
heads.

The other time we'll attempt to detach jh's from clean buffers is
via the bdev_try_to_free_page hook.  So if we drop that hook, then
when we are under memory pressure, a potentially large percentage of
the buffer cache can't be freed, and the OOM-killer might trigger
more often.

Now, if we could get a callback on I/O completion on a per-bh basis,
then we could detach the jh when the buffer is clean --- and as a
bonus, we'd get a notification when there was an I/O error writing
back a metadata block, which would be even better.

So how about an even swap?  If we can get a buffer I/O completion
callback, we can drop the bdev_try_to_free_page hook.....

	     	      			- Ted
