Message-Id: <200710170806.56650.nickpiggin@yahoo.com.au>
Date:	Wed, 17 Oct 2007 08:06:56 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	"Eric W. Biederman" <ebiederm@...ssion.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Christian Borntraeger <borntraeger@...ibm.com>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	"Theodore Ts'o" <tytso@....edu>
Subject: Re: [PATCH] rd: Mark ramdisk buffers heads dirty

On Wednesday 17 October 2007 05:06, Eric W. Biederman wrote:
> Nick Piggin <nickpiggin@...oo.com.au> writes:
> > On Tuesday 16 October 2007 08:42, Eric W. Biederman wrote:
> >> I have not observed this case but it is possible to get a dirty page
> >> cache page with clean buffer heads if we get a clean ramdisk page with
> >> buffer heads generated by a filesystem calling __getblk and then write
> >> to that page from user space through the block device.  Then we just
> >> need to hit the proper window and try_to_free_buffers() will mark that
> >> page clean and eventually drop it.  Ouch!
> >>
> >> To fix this use the generic __set_page_dirty_buffers in the ramdisk
> >> code so that when we mark a page dirty we also mark its buffer heads
> >> dirty.
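
(A minimal sketch of the fix being described, assuming the 2.6-era ramdisk_aops
layout in drivers/block/rd.c; the entries other than .set_page_dirty are
reconstructed from memory, not taken from the actual patch:)

#include <linux/fs.h>
#include <linux/buffer_head.h>	/* __set_page_dirty_buffers() */

/*
 * Sketch only: route the ramdisk's set_page_dirty through the generic
 * helper so that marking a page dirty also marks every buffer head
 * attached to it dirty, instead of a private op that leaves them clean.
 */
static const struct address_space_operations ramdisk_aops = {
	.readpage	= ramdisk_readpage,	/* existing rd.c callbacks (assumed) */
	.prepare_write	= ramdisk_prepare_write,
	.commit_write	= ramdisk_commit_write,
	.writepage	= ramdisk_writepage,
	.set_page_dirty	= __set_page_dirty_buffers,	/* the fix under discussion */
};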
> >
> > Hmm, so we can also have some filesystems writing their own buffers
> > out by hand (clear_buffer_dirty, submit buffer for IO). Other places
> > will do similarly dodgy things with filesystem metadata
> > (fsync_buffers_list, for example).
> >
> > So your buffers get cleaned again, then your pages get cleaned.
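
(The "by hand" pattern referred to here looks roughly like sync_dirty_buffer()
in 2.6-era fs/buffer.c; a simplified sketch with a hypothetical wrapper name,
wait and error handling omitted:)

#include <linux/buffer_head.h>

/* Simplified "write a buffer by hand" pattern (cf. sync_dirty_buffer()
 * circa 2.6.23): the dirty bit is cleared *before* the write goes out,
 * so the buffer sits clean while the I/O is in flight. */
static void write_buffer_by_hand(struct buffer_head *bh)
{
	lock_buffer(bh);
	if (test_clear_buffer_dirty(bh)) {
		get_bh(bh);
		bh->b_end_io = end_buffer_write_sync;
		submit_bh(WRITE, bh);		/* 2.6-era submit_bh(rw, bh) */
	} else {
		unlock_buffer(bh);
	}
}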
>
> So I just took a little bit of time to look at and think about
> the path you are referring to, and I don't see a problem.
>
> The rule with the buffer dirty bit is that you first clear it
> and then you submit the write.  When the write request finally
> makes its way to rd.c we copy the data if necessary and call
> set_page_dirty, which will then mark the page and the buffer
> dirty again.

Oh, maybe you're right. I didn't see it redirty the page there.
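
(Roughly the step being described, as a hypothetical sketch rather than the
real rd.c code: the ramdisk copies the written data into its backing
page-cache page and calls set_page_dirty(), which, with the patch above, also
re-dirties the buffer heads that were cleaned when the write was submitted:)

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/* Hypothetical helper; name and signature are illustrative only. */
static void rd_copy_write(struct page *backing_page, const void *src,
			  unsigned int offset, unsigned int len)
{
	void *kaddr = kmap_atomic(backing_page, KM_USER0);

	memcpy(kaddr + offset, src, len);	/* the write lands in the ramdisk page */
	kunmap_atomic(kaddr, KM_USER0);

	/*
	 * Re-dirty the page.  With .set_page_dirty == __set_page_dirty_buffers
	 * this also re-dirties its buffer heads, so the data stays pinned.
	 */
	set_page_dirty(backing_page);
}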


> In essence the ramdisk code just attempts to lock buffers in
> ram by setting their dirty bit, just like we do for pages
> in ramfs.

Yeah, which is half the reason why it's so complicated. Logically
it should just hold another reference on the pages rather than
interfere with pagecache state, but it can't do that because it
doesn't always know when a new page is inserted.
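
(For what it's worth, the "hold another reference" alternative would amount to
something like the following hypothetical pair of helpers; the real rd.c
cannot use this because it is not told when a new page is instantiated in the
block device's page cache:)

#include <linux/mm.h>

/* Hypothetical: pin ramdisk pages with an extra reference instead of
 * abusing the dirty bit to keep reclaim away. */
static void rd_pin_page(struct page *page)
{
	get_page(page);		/* extra ref keeps the page allocated */
}

static void rd_unpin_page(struct page *page)
{
	put_page(page);		/* dropped when the ramdisk is torn down */
}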


> > While I said it was a good fix when I saw the patch earlier, I think
> > it's not closing the entire hole, and as such, Christian's patch is
> > probably the way to go for stable.
>
> I thought through the logic in try_to_free_buffers and it actually
> makes sense to me now.  mark_buffer_dirty sets the page dirty bit
> so dirty buffers reside on dirty pages.  When we submit I/O we
> aren't guaranteed to submit all of the dirty buffers for a page
> at once, so we don't clear the page dirty bit.  The result is
> that we can end up with pages that have the dirty bit set while
> all of their buffers are clean.

Yep.
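
(The window both mails describe is visible in try_to_free_buffers() itself; a
simplified sketch of the fs/buffer.c logic circa 2.6.23, with locking and
freeing details trimmed:)

int try_to_free_buffers(struct page *page)
{
	struct address_space *mapping = page->mapping;
	struct buffer_head *buffers_to_free = NULL;
	int ret;

	spin_lock(&mapping->private_lock);
	/* Fails (returns 0) if any buffer head is dirty or locked. */
	ret = drop_buffers(page, &buffers_to_free);

	/*
	 * If every buffer head was clean, the page's dirty bit is cancelled
	 * here.  For a ramdisk page whose buffers were left clean, this is
	 * the point where the data becomes reclaimable -- the "Ouch!" above.
	 */
	if (ret)
		cancel_dirty_page(page, PAGE_CACHE_SIZE);
	spin_unlock(&mapping->private_lock);
	/* ... detached buffer heads are freed here ... */
	return ret;
}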


> Since we rather deliberately allow truly clean pages to be dropped
> from the ramdisk, overriding try_to_free_buffers looks wrong,
> because then, except for invalidate, we cannot remove buffers
> from truly clean pages.

Yeah, if your fix works, I guess it is better to use it and converge
the code rather than diverge it even more.
