Message-ID: <m1zlyhayq2.fsf@ebiederm.dsl.xmission.com>
Date:	Wed, 17 Oct 2007 21:27:49 -0600
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	Chris Mason <chris.mason@...cle.com>
Cc:	Christian Borntraeger <borntraeger@...ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Nick Piggin <nickpiggin@...oo.com.au>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	"Theodore Ts'o" <tytso@....edu>, stable@...nel.org
Subject: Re: [PATCH] rd: Mark ramdisk buffers heads dirty

Chris Mason <chris.mason@...cle.com> writes:

> On Wed, 2007-10-17 at 17:28 -0600, Eric W. Biederman wrote:
>> Chris Mason <chris.mason@...cle.com> writes:
>> 
>> > So, the problem is using the Dirty bit to indicate pinned.  You're
>> > completely right that our current setup of buffer heads and pages and
>> > filesystem metadata is complex and difficult.
>> >
>> > But, moving the buffer heads off of the page cache pages isn't going to
>> > make it any easier to use dirty as pinned, especially in the face of
>> > buffer_head users for file data pages.
>> 
>> Let me be specific.  Not moving buffer_heads off of page cache pages
>> in general, but moving buffer_heads off of the block device's page
>> cache pages.
>> 
>> My problem is the coupling between how block devices are cached and
>> the implementation of buffer heads; by removing that coupling we can
>> generally make things better.  Currently that coupling means silly
>> things like all block devices being cached in low memory, which
>> probably isn't what you want if you actually have a use for block
>> devices.
>> 
>> For the ramdisk case in particular, what this means is that there
>> are no more users creating buffer_head mappings on the block device
>> cache, so using the dirty bit will be safe.
>
> Ok, we move the buffer heads off to a different inode, and that inode
> has pages.  The pages on that inode still need to get pinned; how does
> that pinning happen?
>
> The problem you described, where someone cleans a page because the
> buffer heads are clean, happens already without help from userland.
> So, keeping the pages away from userland won't save them from being
> cleaned.
>
> Sorry if I'm reading your suggestion wrong...

Yes.  I was suggesting that we continue to pin the page cache pages of
the block device inode, and have the buffer cache live on some other
inode.  That way nothing causes me problems by attaching clean
buffer_heads to my dirty pages.
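
Roughly the sort of thing I mean by using the dirty bit to pin the
ramdisk's pages -- an untested sketch, not the actual patch, with a
made-up helper name:

#include <linux/mm.h>
#include <linux/buffer_head.h>

/*
 * Sketch only: when the ramdisk fills a block device page cache page,
 * mark the page and any buffer_heads attached to it dirty, so that
 * try_to_free_buffers() will not clean the page behind our back and
 * let reclaim drop the ramdisk's data.
 */
static void rd_pin_page(struct page *page)
{
        if (page_has_buffers(page)) {
                struct buffer_head *bh, *head;

                bh = head = page_buffers(page);
                do {
                        set_buffer_dirty(bh);
                        bh = bh->b_this_page;
                } while (bh != head);
        }
        SetPageDirty(page);
}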

Eric

