Date:	Tue, 9 Oct 2012 11:16:18 +0100
From:	Mel Gorman <mgorman@...e.de>
To:	John Stultz <john.stultz@...aro.org>
Cc:	Anton Vorontsov <anton.vorontsov@...aro.org>,
	Pekka Enberg <penberg@...nel.org>,
	Leonid Moiseichuk <leonid.moiseichuk@...ia.com>,
	KOSAKI Motohiro <kosaki.motohiro@...il.com>,
	Minchan Kim <minchan@...nel.org>,
	Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
	Colin Cross <ccross@...roid.com>,
	Arve Hjønnevåg <arve@...roid.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, linaro-kernel@...ts.linaro.org,
	patches@...aro.org, kernel-team@...roid.com
Subject: Re: [RFC] vmevent: Implement pressure attribute

On Mon, Oct 08, 2012 at 06:42:16PM -0700, John Stultz wrote:
> On 10/08/2012 02:46 AM, Mel Gorman wrote:
> >On Sun, Oct 07, 2012 at 01:14:17AM -0700, Anton Vorontsov wrote:
> >>And here we just try to let userland assist; userland can tell "oh,
> >>don't bother with swapping or draining caches, I can just free some
> >>memory".
> >>
> >>Quite interesting, this also very much resembles volatile mmap ranges
> >>(i.e. the work that John Stultz is leading in parallel).
> >>
> >Agreed. I haven't been paying close attention to those patches, but it
> >seems to me that one possibility is that a listener for a vmevent would
> >set volatile ranges in response.
> 
> I don't have too much to comment on the rest of this mail, but just
> wanted to pipe in here, as the volatile ranges have caused some
> confusion.
> 
> While your suggestion would be possible with volatile ranges, I've
> been promoting a more hands-off approach from the application
> perspective, where the application would always mark data that could
> be regenerated as volatile, unmarking it when accessing it.
> 
> This way the application doesn't need to be responsive to memory
> pressure; the kernel just takes what it needs from what the
> application made available.
> 
> Only when the application needs the data again, would it mark it
> non-volatile (or alternatively with the new SIGBUS semantics, access
> the purged volatile data and catch a SIGBUS), find the data was
> purged and regenerate it.
> 
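[For reference, the application-side workflow described above might look roughly like the sketch below. The MADV_VOLATILE/MADV_NONVOLATILE flags, their numeric values, and the "positive return means pages were purged" convention are drawn from the out-of-tree volatile-ranges proposals; none of this is a mainline API, and the names here are illustrative only.]

```c
/* Sketch only: these madvise() flags do NOT exist in mainline.
 * The values are placeholders for the proposed volatile-ranges API. */
#include <sys/mman.h>
#include <stddef.h>

#define MADV_VOLATILE     65  /* hypothetical: mark range purgeable  */
#define MADV_NONVOLATILE  66  /* hypothetical: unmark, report purges */

extern void regenerate_cache(char *buf, size_t len);

static char *cache;
static size_t cache_len;

/* Done with the cached data for now: offer it to the kernel, which
 * may purge it under pressure instead of swapping or shrinking caches. */
void cache_release(void)
{
	madvise(cache, cache_len, MADV_VOLATILE);
}

/* Need the data again: unmark it, and regenerate it if the kernel
 * purged any pages while the range was volatile. */
void cache_acquire(void)
{
	int purged = madvise(cache, cache_len, MADV_NONVOLATILE);

	if (purged > 0)  /* proposed convention: >0 => pages were purged */
		regenerate_cache(cache, cache_len);
}
```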

Ok, understood.

> That said, hybrid approaches like you suggested would be possible,
> but at a certain point, if we're waiting for a notification to take
> action, it might be better just to directly free that memory, rather
> than just setting it as volatile and leaving it to the kernel to
> reclaim it for you.
> 

That's fine. I did not mean to suggest that volatile ranges and vmevents on
memory pressure should be related to or dependent on each other in any way.


-- 
Mel Gorman
SUSE Labs
