Date:	Fri, 30 Dec 2011 22:29:02 +0200
From:	Gilad Ben-Yossef <gilad@...yossef.com>
To:	Mel Gorman <mgorman@...e.de>, Chris Metcalf <cmetcalf@...era.com>
Cc:	linux-kernel@...r.kernel.org,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Russell King <linux@....linux.org.uk>, linux-mm@...ck.org,
	Pekka Enberg <penberg@...nel.org>,
	Matt Mackall <mpm@...enic.com>,
	Sasha Levin <levinsasha928@...il.com>,
	Rik van Riel <riel@...hat.com>,
	Andi Kleen <andi@...stfloor.org>
Subject: Re: [PATCH v4 5/5] mm: Only IPI CPUs to drain local pages if they exist

On Fri, Dec 30, 2011 at 6:08 PM, Mel Gorman <mgorman@...e.de> wrote:
> On Fri, Dec 30, 2011 at 10:25:46AM -0500, Chris Metcalf wrote:

>> Alternately, since we really don't want more than one cpu running the drain
>> code anyway, you could imagine using a static cpumask, along with a lock to
>> serialize attempts to drain all the pages.  (Locking here would be tricky,
>> since we need to run on_each_cpu with interrupts enabled, but there's
>> probably some reasonable way to make it work.)
>>
>
> Good suggestion, that would at least shut up my complaining
> about allocation costs! A statically-declared mutex similar
> to hugetlb_instantiation_mutex should do it. The context that
> drain_all_pages is called from will have interrupts enabled.
>
> Serialising processes entering direct reclaim may result in some stalls
> but overall I think the impact of that would be less than increasing
> memory pressure when low on memory.
>

Chris, I like the idea :-)
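
Something along these lines, perhaps (just a sketch against
mm/page_alloc.c, not tested: cpu_has_pages_on_pcp_lists() is a made-up
stand-in for walking the zones and checking pcp->count, and
on_each_cpu_mask() is the helper from earlier in this series):

/* sketch only */
static cpumask_t cpus_with_pcps;
static DEFINE_MUTEX(pcpu_drain_mutex); /* a la hugetlb_instantiation_mutex */

void drain_all_pages(void)
{
	int cpu;

	/* Serialise drainers so nobody clobbers the static mask. */
	mutex_lock(&pcpu_drain_mutex);

	cpumask_clear(&cpus_with_pcps);
	for_each_online_cpu(cpu)
		/* made-up helper: walk the zones, check pcp->count */
		if (cpu_has_pages_on_pcp_lists(cpu))
			cpumask_set_cpu(cpu, &cpus_with_pcps);

	/* IPI only the CPUs that actually have something to drain. */
	on_each_cpu_mask(&cpus_with_pcps, drain_local_pages, NULL, 1);

	mutex_unlock(&pcpu_drain_mutex);
}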

Actually, assuming for a second that on_each_cpu* and the underlying
code won't mind the cpumask changing mid-call (I know they do; just
thinking out loud), you arguably don't even need the lock, if you're
careful about how you set/unset the per-CPU bits of the cpumask, since
they track the same thing...

Of course, it'll still cause a load of cache line bouncing, so maybe
it's not worth it.
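
To make the thinking-out-loud concrete (again a sketch only; the
pcp_mark_* hooks are invented names for wherever the pcp fast paths
would maintain the bits):

/* sketch only: each CPU owns its bit, no lock around the mask */
static cpumask_t cpus_with_pcps;

/* invented hooks, called from the pcp fast paths (where preemption
 * is already off) whenever the local lists go non-empty/empty */
static void pcp_mark_nonempty(void)
{
	cpumask_set_cpu(smp_processor_id(), &cpus_with_pcps);
}

static void pcp_mark_empty(void)
{
	cpumask_clear_cpu(smp_processor_id(), &cpus_with_pcps);
}

void drain_all_pages(void)
{
	/* The mask can change under us mid-call (the worry above),
	 * and every set/clear bounces the shared cache line. */
	on_each_cpu_mask(&cpus_with_pcps, drain_local_pages, NULL, 1);
}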

> It would still be nice to have some data on how much IPIs are reduced
> in practice to confirm the patch really helps.

I agree. I'll prepare the patch and present the data.

Thanks!
Gilad


-- 
Gilad Ben-Yossef
Chief Coffee Drinker
gilad@...yossef.com
Israel Cell: +972-52-8260388
US Cell: +1-973-8260388
http://benyossef.com

"Unfortunately, cache misses are an equal opportunity pain provider."
-- Mike Galbraith, LKML