Message-Id: <1187705235.6114.247.camel@twins>
Date:	Tue, 21 Aug 2007 16:07:15 +0200
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Nick Piggin <npiggin@...e.de>
Cc:	Christoph Lameter <clameter@....com>, Pavel Machek <pavel@....cz>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	akpm@...ux-foundation.org, dkegel@...gle.com,
	David Miller <davem@...emloft.net>
Subject: Re: [RFC 2/9] Use NOMEMALLOC reclaim to allow reclaim if
	PF_MEMALLOC is set

On Tue, 2007-08-21 at 02:39 +0200, Nick Piggin wrote:
> On Mon, Aug 20, 2007 at 11:14:08PM +0200, Peter Zijlstra wrote:
> > On Mon, 2007-08-20 at 13:27 -0700, Christoph Lameter wrote:
> > > On Mon, 20 Aug 2007, Peter Zijlstra wrote:
> > > 
> > > > > Plus the same issue can happen today. Writes are usually not completed 
> > > > > during reclaim. If the writes are sufficiently deferred then you have the 
> > > > > same issue now.
> > > > 
> > > > Once we have initiated (disk) writeout we do not need more memory to
> > > > complete it, all we need to do is wait for the completion interrupt.
> > > 
> > > We cannot reclaim the page as long as the I/O is not complete. If you 
> > > have too many anonymous pages and the rest of memory is dirty then you can 
> > > get into OOM scenarios even without this patch.
> > 
> > As long as the reserve is large enough to completely initialize writeout
> > of a single page we can make progress. Once writeout is initialized the
> > completion interrupt is guaranteed to happen (assuming working
> > hardware).
> 
> Although interestingly, we are not guaranteed to have enough memory to
> completely initialise writeout of a single page.

Yes, that is due to the unbounded nature of direct reclaim, no?

I've been meaning to write some patches to address this problem in a way
that does not introduce the hard wall Linus objects to. If only I had an
extra day in the week :-/

And then there is the deadlock in add_to_swap() that I still have to
look into; I hope it can eventually be solved using reserve-based
allocation.
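
To sketch what I mean by reserve-based allocation (the struct and helper
names below are made up for illustration, not the interface from the
posted patches): a bounded pool gets charged before an allocation is
allowed to dip into the emergency reserve, so memory consumed while
PF_MEMALLOC is set stays bounded.

/*
 * Illustrative only -- hypothetical accounting around a bounded reserve.
 */
struct mem_reserve {
	spinlock_t	lock;
	long		limit;	/* pages set aside for emergencies */
	long		used;	/* pages currently charged */
};

/* Charge one page against the reserve; fail rather than overcommit. */
static bool mem_reserve_charge(struct mem_reserve *res)
{
	bool ok = false;

	spin_lock(&res->lock);
	if (res->used < res->limit) {
		res->used++;
		ok = true;
	}
	spin_unlock(&res->lock);
	return ok;
}

static void mem_reserve_uncharge(struct mem_reserve *res)
{
	spin_lock(&res->lock);
	res->used--;
	spin_unlock(&res->lock);
}

/*
 * The idea: an allocation under PF_MEMALLOC would only be allowed to
 * ignore the watermarks after mem_reserve_charge() succeeds, and the
 * page would call mem_reserve_uncharge() when it is freed.
 */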

> The buffer layer doesn't require disk blocks to be allocated at page
> dirty-time. Allocating disk blocks can require complex filesystem operations
> and read-in of buffer cache pages. The buffer_head structures themselves may
> not even be present and must be allocated :P
> 
> In _practice_, this isn't such a problem because we have dirty limits, and
> we're almost guaranteed to have some clean pages to be reclaimed. In this
> same way, networked filesystems are not a problem in practice. However,
> network swap, because there are no dirty limits on swap, can actually
> see the deadlock problems.

The main problem with networked swap is not so much sending out the
pages (that has problems similar to those of the filesystems, but is
bounded in its memory use).

The biggest issue is receiving the completion notification. The network
stack needs to fall back to a state where it neither blindly consumes
memory nor drops _all_ packets. An intermediate state is required, one
where we can receive and inspect incoming packets but commit memory to
very few of them.
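
To illustrate the kind of intermediate state I have in mind
(skb_emergency() and sk_is_memalloc() are made-up names here, not
existing interfaces), think of a receive-side filter that only lets
reserve-backed packets through to sockets that actually service
reclaim:

/*
 * Sketch of the intermediate receive state.  skb_emergency() and
 * sk_is_memalloc() are hypothetical helpers used for illustration.
 */
static int sock_queue_rcv_skb_sketch(struct sock *sk, struct sk_buff *skb)
{
	/*
	 * The packet was fed from the emergency reserve; only sockets
	 * that help reclaim make progress (e.g. the one carrying swap
	 * traffic) may keep it.  Everything else is dropped before any
	 * more memory is committed to it.
	 */
	if (skb_emergency(skb) && !sk_is_memalloc(sk)) {
		kfree_skb(skb);
		return -ENOMEM;
	}

	return sock_queue_rcv_skb(sk, skb);
}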

In order to create such a network state, and for it to be stable, a
certain amount of memory needs to be available, and an external trigger
is needed to enter and leave this state; currently that trigger is
simply whether there is more memory available than is needed.
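
A very rough sketch of such a trigger (purely illustrative;
dev_alloc_skb_reserve() and alloc_skb_from_reserve() are invented
names): flip into the emergency state whenever a normal atomic
allocation fails, and drop back out as soon as one succeeds again:

static atomic_t emergency_rx_mode = ATOMIC_INIT(0);

/*
 * Illustrative helper only; it shows how the trigger could work, not
 * how the patches implement it.
 */
static struct sk_buff *dev_alloc_skb_reserve(unsigned int len)
{
	struct sk_buff *skb = alloc_skb(len, GFP_ATOMIC | __GFP_NOWARN);

	if (skb) {
		/* Normal allocations work again: leave the emergency state. */
		atomic_set(&emergency_rx_mode, 0);
		return skb;
	}

	/* Enter the emergency state and fall back to the bounded reserve. */
	atomic_set(&emergency_rx_mode, 1);
	return alloc_skb_from_reserve(len);
}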

