Date:	Thu, 17 Sep 2009 11:22:53 +0200
From:	Jan Kara <jack@...e.cz>
To:	Jens Axboe <jens.axboe@...cle.com>
Cc:	Jan Kara <jack@...e.cz>, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, chris.mason@...cle.com,
	hch@...radead.org, tytso@....edu, akpm@...ux-foundation.org,
	trond.myklebust@....uio.no
Subject: Re: [PATCH 10/16] writeback: splice dirty inode entries to default
	bdi on bdi_destroy()

On Wed 16-09-09 20:29:32, Jens Axboe wrote:
> On Wed, Sep 16 2009, Jan Kara wrote:
> > On Wed 16-09-09 15:24:48, Jens Axboe wrote:
> > > We cannot safely ensure that the inodes are all gone at this point
> > > in time, and we must not destroy this bdi with inodes hanging off it.
> > > So just splice our entries to the default bdi since that one will
> > > always persist.
> >   I'd at least add a comment like
> > "XXX: This is probably a bug but let's workaround it for now."
> >   And either remove the code or update the comment when this gets resolved.
> 
> Thinking more about it - what if inodes happen to get requeued on the
> final flush? bdi_destroy() likely can't wait for flushing to happen,
> since (depending on the teardown cycle) part of the device may be gone
> already.
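  Just to have the code in front of us, here is my understanding of the
splice in question (a sketch only; the list and field names are how I
remember them from your per-bdi writeback series, not the exact hunk):

void bdi_destroy(struct backing_dev_info *bdi)
{
        /*
         * If dirty inodes are still queued on this bdi, move them over to
         * the default bdi so they are not lost when this bdi goes away.
         */
        if (bdi_has_dirty_io(bdi)) {
                struct bdi_writeback *dst = &default_backing_dev_info.wb;

                spin_lock(&inode_lock);
                list_splice(&bdi->wb.b_dirty, &dst->b_dirty);
                list_splice(&bdi->wb.b_io, &dst->b_io);
                list_splice(&bdi->wb.b_more_io, &dst->b_more_io);
                spin_unlock(&inode_lock);
        }

        bdi_unregister(bdi);
}
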
  In WB_SYNC_ALL mode, no inodes can get requeued. You are guaranteed that
after writeback returns, all data & inodes that were dirty at the time of
the call are on disk. So unless the filesystem is still alive during
bdi_destroy(), nothing should be left on the bdi lists.
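
  The difference is roughly this (a simplified sketch; sync_one_inode() is
just an illustrative name, not the real writeback helper):

static int sync_one_inode(struct inode *inode, struct writeback_control *wbc)
{
        struct address_space *mapping = inode->i_mapping;
        int ret, err;

        /* Start writeout of the dirty pages. */
        ret = do_writepages(mapping, wbc);

        if (wbc->sync_mode == WB_SYNC_ALL) {
                /*
                 * Data integrity sync: wait until everything that was dirty
                 * at the time of the call is really on disk. Nothing is left
                 * to requeue on the bdi afterwards.
                 */
                err = filemap_fdatawait(mapping);
                if (!ret)
                        ret = err;
        }
        /*
         * Only in WB_SYNC_NONE mode may the inode still have dirty pages here
         * and get requeued (e.g. on b_more_io) for a later pass - that is the
         * requeueing you are worried about.
         */
        return ret;
}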

> And I did mean __sync_filesystem() -> __sync_blockdev(), since this is
> what happen on last close. So it could also be that we need something
> stronger to guarantee that things are really gone.
  Well, __sync_filesystem() should do all that is needed before calling
__sync_blockdev(). But doing that with a live filesystem is obviously racy...
If by last close you mean the time when we unmount the filesystem and thus
drop the bdev reference, then umount should have synced all the inodes
(actually, you'd see a "Busy inodes on umount" message if it did not), so
that probably works right.
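
  To spell out the ordering I mean (purely illustrative sketch;
write_all_inodes() is a made-up name standing in for the inode writeback
pass, only the sequence matters):

static int sync_fs_sketch(struct super_block *sb, int wait)
{
        /* 1) Write out all dirty inodes and their data (WB_SYNC_ALL if wait). */
        write_all_inodes(sb, wait);             /* hypothetical helper */

        /* 2) Let the filesystem flush its own metadata / journal. */
        if (sb->s_op->sync_fs)
                sb->s_op->sync_fs(sb, wait);

        /* 3) Only then flush the underlying block device. */
        return __sync_blockdev(sb->s_bdev, wait);
}

  On a live filesystem new dirty data can appear between any two of these
steps, hence the raciness; after umount nothing can redirty the inodes, so
the ordering is enough.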

									Honza
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR
