lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:	Thu, 23 Aug 2007 15:58:57 +0200
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Nikita Danilov <nikita@...sterfs.com>
Cc:	Nick Piggin <npiggin@...e.de>,
	Christoph Lameter <clameter@....com>,
	Pavel Machek <pavel@....cz>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
	dkegel@...gle.com, David Miller <davem@...emloft.net>
Subject: Re: [RFC 2/9] Use NOMEMALLOC reclaim to allow reclaim if
	PF_MEMALLOC is set

On Thu, 2007-08-23 at 14:11 +0400, Nikita Danilov wrote:
> Peter Zijlstra writes:
> 
> [...]
> 
>  > My idea is to extend kswapd, run cpus_per_node instances of kswapd per
>  > node for each of GFP_KERNEL, GFP_NOFS, GFP_NOIO. (basically 3 kswapds
>  > per cpu)
>  > 
>  > whenever we would hit direct reclaim, add ourselves to a special
>  > waitqueue corresponding to the type of GFP and kick all the
>  > corresponding kswapds.
> 
> There are two standard objections to this:
> 
>     - direct reclaim was introduced to reduce memory allocation latency,
>       and going to scheduler kills this. But more importantly,

The part you snipped:

> > Here is were the 'special' part of the waitqueue comes into order.
> > 
> > Instead of freeing pages to the page allocator, these kswapds would hand
> > out pages to the waiting processes in a round robin fashion. Only if
> > there are no more waiting processes left, would the page go to the buddy
> > system.

should deal with that, it allows processes to quickly get some memory.

>     - it might so happen that _all_ per-cpu kswapd instances are
>       blocked, e.g., waiting for IO on indirect blocks, or queue
>       congestion. In that case whole system stops waiting for IO to
>       complete. In the direct reclaim case, other threads can continue
>       zone scanning.

By running separate GFP_KERNEL, GFP_NOFS and GFP_NOIO kswapds this
should not occur. Much like it now does not occur.

This approach would make it work pretty much like it does now. But
instead of letting each separate context run into reclaim we then have a
fixed set of reclaim contexts which evenly distribute their resulting
free pages.

The possible down sides are:

 - more schedule()s, but I don't think these will matter when we're that
deep into reclaim
 - less concurrency - but I hope 1 set per cpu is enough, we could up
this if it turns out to really help.

Download attachment "signature.asc" of type "application/pgp-signature" (190 bytes)

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ