Message-ID: <4F9F3254.8040107@linaro.org>
Date:	Mon, 30 Apr 2012 17:46:12 -0700
From:	John Stultz <john.stultz@...aro.org>
To:	Dave Chinner <david@...morbit.com>
CC:	Dave Hansen <dave@...ux.vnet.ibm.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Android Kernel Team <kernel-team@...roid.com>,
	Robert Love <rlove@...gle.com>, Mel Gorman <mel@....ul.ie>,
	Hugh Dickins <hughd@...gle.com>,
	Rik van Riel <riel@...hat.com>,
	Dmitry Adamushko <dmitry.adamushko@...il.com>,
	Neil Brown <neilb@...e.de>,
	Andrea Righi <andrea@...terlinux.com>,
	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
Subject: Re: [PATCH 2/3] fadvise: Add _VOLATILE,_ISVOLATILE, and _NONVOLATILE
 flags

On 04/30/2012 05:08 PM, Dave Chinner wrote:
> On Mon, Apr 30, 2012 at 02:07:16PM -0700, John Stultz wrote:
>> On 04/27/2012 06:36 PM, Dave Chinner wrote:
>>> That's my concern - that persistent filesystems will have different
>>> behaviour to in-memory filesystems. They *must* be consistent in
>>> behaviour w.r.t. to stale data exposure, otherwise we are in a world
>>> of pain when applications start to use this. Quite frankly, I don't
>>> care about performance of VOLATILE ranges, but I care greatly
>>> about ensuring filesystems don't expose stale data to user
>>> applications....
>>>
>> I think we're in agreement with the rest of this email, but I do
>> want to stress that the performance of volatile ranges will become
>> more critical: for folks to use them effectively, they need to be
>> able to mark and unmark ranges whenever they're not using the data.
> Performance is far less important than data security. Make it safe
> first, then optimise performance. As it is, the initial target of
> tmpfs - by its very nature of returning zeros for regions not
> backed by pages - is safe w.r.t. stale data exposure, so it will not
> be slowed down by using a fallocate "best effort" hole-punching
> interface.  The performance of other filesystems is something that
> the relevant filesystem developers can worry about....

Again, I think we're quite in agreement about the issue of stale data.  
I just want to make sure you understand that the marking and unmarking 
paths will need to be fast if they are to attract users.


>> So if the overhead is too great for marking and unmarking pages,
>> applications will be less likely to "help out".  :)
> Devil's Advocate: If the benefit of managing caches in such a manner
> is this marginal, then why add the complexity to the kernel?
>
I'm not saying the benefit is marginal. When we are resource constrained 
(no swap) and need to free memory, having regions pre-marked by 
applications is a great benefit: we can immediately reclaim those marked 
volatile ranges (as opposed to memory notifiers, where we ask 
applications to free memory themselves).  Being able to free chunks of 
application memory, rather than killing the application, provides a 
better experience and better overall system performance.  However, if 
applications feel the marking and unmarking is too costly, they are less 
likely to mark their freeable ranges as volatile.

So it's only if no consideration at all is given to performance that 
there'd be no benefit to adding the interface.

thanks
-john





--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
