Message-ID: <20070806202323.GH11115@waste.org>
Date: Mon, 6 Aug 2007 15:23:23 -0500
From: Matt Mackall <mpm@...enic.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
David Miller <davem@...emloft.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Daniel Phillips <phillips@...gle.com>,
Pekka Enberg <penberg@...helsinki.fi>,
Christoph Lameter <clameter@....com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Steve Dickson <SteveD@...hat.com>
Subject: Re: [PATCH 00/10] foundations for reserve-based allocation
On Mon, Aug 06, 2007 at 12:29:22PM +0200, Peter Zijlstra wrote:
>
> In the interest of getting swap over network working and posting in smaller
> series, here is the first series.
>
> This series lays the foundations needed to do reserve based allocation.
> Traditionally we have used mempools (and others like radix_tree_preload) to
> handle the problem.
>
> However, this does not fit the network stack, which is built around
> variable-sized allocations using kmalloc().
>
> This calls for a different approach.
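
For context, the traditional mempool approach being referred to
reserves a fixed number of fixed-size objects up front, so allocation
can always make forward progress under memory pressure. A minimal
sketch -- all names here are made up for illustration, none of this is
from the series:

#include <linux/init.h>
#include <linux/mempool.h>
#include <linux/slab.h>

#define MIN_REQS 16			/* reserve size; illustrative */

struct my_req {				/* fixed-size object */
	int tag;
	char payload[64];
};

static struct kmem_cache *req_cache;
static mempool_t *req_pool;

static int __init req_pool_init(void)
{
	req_cache = KMEM_CACHE(my_req, 0);
	if (!req_cache)
		return -ENOMEM;

	/* Pre-allocate MIN_REQS objects that mempool_alloc() can
	 * fall back on when the slab allocator fails. */
	req_pool = mempool_create_slab_pool(MIN_REQS, req_cache);
	if (!req_pool) {
		kmem_cache_destroy(req_cache);
		return -ENOMEM;
	}
	return 0;
}

/* With a sleeping gfp mask this never returns NULL: it waits for a
 * reserved object to be freed rather than failing. */
static struct my_req *get_req(void)
{
	return mempool_alloc(req_pool, GFP_NOIO);
}

That guarantee holds only because every element is the same size,
which is exactly what the network stack's variable-sized kmalloc()
allocations break.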
One wonders whether containers are a possible solution. I can already
solve this problem with virtualization: have one VM manage all the
network I/O and export the remote storage to the other VMs as a
simpler virtual block device. Provided this VM isn't doing any "real"
work and is sized appropriately, it won't get wedged. Since the other
VMs now submit I/O through the simpler block interface, they can
avoid getting wedged by using the standard mempool approach.
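
To illustrate why the block path tolerates pressure where the network
path does not: a swap-out bio is a fixed-size object drawn from a
mempool-backed reserve (fs_bio_set, behind bio_alloc()), so writeout
can always make forward progress. A rough sketch against the 2.6-era
block API; the function and its parameters are hypothetical:

#include <linux/bio.h>

static struct bio *swap_out_bio(struct block_device *bdev,
				sector_t sector, struct page *page)
{
	/* bio_alloc() draws from a mempool: under pressure, GFP_NOIO
	 * waits for a reserved bio instead of failing. */
	struct bio *bio = bio_alloc(GFP_NOIO, 1);

	bio->bi_bdev = bdev;
	bio->bi_sector = sector;
	bio_add_page(bio, page, PAGE_SIZE, 0);
	return bio;
}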
If we can run nbd and friends inside their own containers with similar
isolation, we might not need to add this extra complexity.
Just food for thought. I haven't looked closely enough at the
container implementations yet to determine whether this is possible,
or whether the overhead in performance or complexity is acceptable.
--
Mathematics is the supreme nostalgia of our time.