Message-Id: <1179173129.2942.52.camel@lappy>
Date: Mon, 14 May 2007 22:05:29 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Matt Mackall <mpm@...enic.com>,
Christoph Lameter <clameter@....com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Thomas Graf <tgraf@...g.ch>,
David Miller <davem@...emloft.net>,
Daniel Phillips <phillips@...gle.com>,
Pekka Enberg <penberg@...helsinki.fi>
Subject: Re: [PATCH 0/5] make slab gfp fair
On Mon, 2007-05-14 at 12:44 -0700, Andrew Morton wrote:
> On Mon, 14 May 2007 11:12:24 -0500
> Matt Mackall <mpm@...enic.com> wrote:
>
> > If I understand this correctly:
> >
> > privileged thread                     unprivileged greedy process
> > kmem_cache_alloc(...)
> >   adds new slab page from lowmem pool
> >                                        do_io()
> >                                          kmem_cache_alloc(...)
> >                                          kmem_cache_alloc(...)
> >                                          kmem_cache_alloc(...)
> >                                          kmem_cache_alloc(...)
> >                                          kmem_cache_alloc(...)
> >                                          ...
> >                                          eats it all
> > kmem_cache_alloc(...) -> ENOMEM
> > who ate my donuts?!
>
> Yes, that's my understanding also.
>
> I can see why it's a problem in theory, but I don't think Peter has yet
> revealed to us why it's a problem in practice. I got all excited when
> Christoph asked "I am not sure what the point of all of this is.", but
> Peter cunningly avoided answering that ;)
>
> What observed problem is being fixed here?
I'm moving towards swapping over networked storage; admittedly a new
feature.

As with pretty much all other swap solutions, there is the fundamental
vm deadlock: freeing memory requires memory. Current block devices get
around that by using mempools. This works well.
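
For illustration, this is roughly how a block driver keeps its writeout
path deadlock free with a mempool (a minimal sketch against the generic
mempool API; my_pool, my_init, get_buf/put_buf and BUF_SIZE are made-up
names, not taken from any particular driver):

	#include <linux/mempool.h>
	#include <linux/slab.h>

	/* Pre-reserve 4 buffers of BUF_SIZE bytes for the writeout path. */
	#define BUF_SIZE 512

	static mempool_t *my_pool;

	static int my_init(void)
	{
		my_pool = mempool_create_kmalloc_pool(4, BUF_SIZE);
		return my_pool ? 0 : -ENOMEM;
	}

	/*
	 * Writeout path: when kmalloc() fails under pressure,
	 * mempool_alloc() falls back to the pre-reserved buffers and,
	 * failing that, sleeps until someone returns one -- it never
	 * deadlocks on memory that only its own I/O completion could free.
	 */
	static void *get_buf(void)
	{
		return mempool_alloc(my_pool, GFP_NOIO);
	}

	static void put_buf(void *buf)
	{
		mempool_free(buf, my_pool);
	}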
However, with network traffic mempools are not easily usable; the
network stack uses kmalloc. By using reserve-based allocation we can
keep operating in a similar manner.
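
Very roughly, the idea looks like this (an illustration only, not the
interface of these patches; net_kmalloc() and the "entitled" flag are
hypothetical, and __GFP_HIGH merely stands in for "may dip into the
emergency reserve"):

	#include <linux/slab.h>
	#include <linux/types.h>

	static void *net_kmalloc(size_t size, gfp_t gfp, bool entitled)
	{
		/* Try the normal allocation first. */
		void *obj = kmalloc(size, gfp | __GFP_NOWARN);

		/*
		 * If that fails and this context is entitled to the
		 * reserve (e.g. it is processing packets for the swap
		 * device), retry with access to memory below the normal
		 * watermarks.
		 */
		if (!obj && entitled)
			obj = kmalloc(size, gfp | __GFP_HIGH);

		return obj;
	}

The fairness part is then making sure that objects from such
reserve-backed slab pages only go to allocations that are themselves
entitled to the reserve, rather than being eaten by the first greedy
allocator to come along, as in Matt's picture above.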