Message-ID: <m1wsthcatk.fsf@ebiederm.dsl.xmission.com>
Date:	Sat, 20 Oct 2007 23:10:15 -0600
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	Christian Borntraeger <borntraeger@...ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	"Theodore Ts'o" <tytso@....edu>, stable@...nel.org
Subject: Re: [PATCH] rd: Use a private inode for backing storage

Nick Piggin <nickpiggin@...oo.com.au> writes:

> On Saturday 20 October 2007 08:51, Eric W. Biederman wrote:
>> Currently the ramdisk tries to keep the block device page cache pages
>> from being marked clean and dropped from memory.  That fails for
>> filesystems that use the buffer cache, because the buffer cache is not
>> an ordinary page cache user and depends on the generic block device
>> address space operations being used.
>>
>> To fix all of those associated problems this patch allocates a private
>> inode to store the ramdisk pages in.
>>
>> The result is slightly more memory used for metadata, an extra copy
>> when reading or writing directly to the block device, and changing the
>> software block size no longer loses the contents of the ramdisk.  Most
>> of all, this ensures we don't lose data during normal use of the
>> ramdisk.
>>
>> I deliberately avoid the cleanup that is now possible because this
>> patch is intended to be a bug fix.
>
> This just breaks coherency again like the last patch. That's a
> really bad idea especially for stable (even if nothing actually
> was to break, we'd likely never know about it anyway).

Not a chance.  The only way we make it to that inode is through block
device I/O, so it lives at exactly the same level in the hierarchy as
a real block device.  My patch is the considered rewrite boiled down
to its essentials and turned into a trivial patch.

It fundamentally fixes the problem, and doesn't attempt to reconcile
the incompatible expectations of the ramdisk code and the buffer cache.
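
To make that concrete, here is a minimal, hypothetical sketch of the
idea.  The names (rd_backing_inode, rd_get_page) are invented for
illustration rather than taken from the patch, and error handling is
elided: the I/O path finds or creates pages in the private inode's
address space and keeps them dirty so reclaim never drops them.

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>

/* Illustrative only: one private inode, allocated once at driver
 * setup, whose i_mapping holds every page of ramdisk data. */
static struct inode *rd_backing_inode;

/* Find or create the backing page for a given 512-byte sector. */
static struct page *rd_get_page(sector_t sector)
{
	struct address_space *mapping = rd_backing_inode->i_mapping;
	pgoff_t index = sector >> (PAGE_CACHE_SHIFT - 9);
	struct page *page;

	/* Returned locked, with an elevated reference count. */
	page = grab_cache_page(mapping, index);
	if (!page)
		return NULL;

	if (!PageUptodate(page)) {
		clear_highpage(page);	/* fresh blocks read as zeroes */
		SetPageUptodate(page);
	}
	SetPageDirty(page);	/* never clean, so reclaim leaves it alone */
	unlock_page(page);
	return page;		/* caller drops the reference when done */
}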

> Christian's patch should go upstream and into stable. For 2.6.25-6,
> my rewrite should just replace what's there. Using address spaces
> to hold the ramdisk pages just confuses the issue even if they
> *aren't* actually wired up to the vfs at all. Saving 20 lines is
> not a good reason to use them.

Well, it is more like saving 100 lines, not having to reexamine
complicated infrastructure code, and doing things the same way ramfs
does.  I think that combination is a good reason, especially since I
can do it with a 16-line patch, as I just demonstrated.  It is a solid
and simple incremental change.
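
For comparison, and purely to illustrate "the same way ramfs does it",
here is a from-memory sketch of the 2.6.23-era ramfs pattern (not a
quote of fs/ramfs): file data lives only in the inode's page cache,
built on the generic helpers from fs/libfs.c, which is the same
property the private ramdisk inode relies on.

/* Illustrative ramfs-style address space operations; data is kept
 * purely in the page cache, with no backing store to write it to. */
static const struct address_space_operations ramfs_like_aops = {
	.readpage	= simple_readpage,
	.prepare_write	= simple_prepare_write,
	.commit_write	= simple_commit_write,
};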

Eric
