Message-ID: <4F68CD55.4040606@redhat.com>
Date: Tue, 20 Mar 2012 14:32:53 -0400
From: Rik van Riel <riel@...hat.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Mel Gorman <mel@....ul.ie>,
Johannes Weiner <hannes@...xchg.org>,
KOSAKI Motohiro <kosaki.motohiro@...il.com>,
Andrea Arcangeli <aarcange@...hat.com>, hughd@...gle.com
Subject: Re: [PATCH -mm 2/2] mm: do not reset mm->free_area_cache on every
single munmap
On 02/23/2012 04:56 PM, Andrew Morton wrote:
> We've been playing whack-a-mole with this search for many years. What
> about developing a proper data structure with which to locate a
> suitable-sized hole in O(log(N)) time?
I got around to looking at this, and the more I look, the
worse things get. The obvious (and probably highest
reasonable complexity) solution looks like this:
struct free_area {
	unsigned long	address;
	struct rb_node	rb_addr;
	unsigned long	size;
	struct rb_node	rb_size;
};
This works in a fairly obvious way for normal mmap
and munmap calls, inserting the free area into the tree
at the desired location, or expanding one that is already
there.
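In userspace terms, the size-indexed half of that structure would behave
roughly like the sketch below. This is only an illustration of the idea, not
kernel code: a plain unbalanced BST stands in for the rb_size tree, and the
names (struct hole, hole_insert, hole_best_fit) are made up for the example.

```c
#include <stdlib.h>

/* Illustrative stand-in for the size-indexed tree: a BST keyed on
 * hole size.  The real thing would be a balanced rbtree. */
struct hole {
	unsigned long address;
	unsigned long size;
	struct hole *left, *right;
};

/* Insert a free hole, keyed by size; duplicates go right. */
static struct hole *hole_insert(struct hole *root, struct hole *new)
{
	if (!root)
		return new;
	if (new->size < root->size)
		root->left = hole_insert(root->left, new);
	else
		root->right = hole_insert(root->right, new);
	return root;
}

/* Best fit: the smallest hole with size >= len, found in O(height)
 * without visiting every node -- this is the O(log N) lookup. */
static struct hole *hole_best_fit(struct hole *root, unsigned long len)
{
	struct hole *best = NULL;

	while (root) {
		if (root->size >= len) {
			best = root;	/* candidate; look for a smaller fit */
			root = root->left;
		} else {
			root = root->right;
		}
	}
	return best;
}
```

For plain (unaligned) requests this answers "find me a hole of at least N
bytes" with a single descent, which is what makes the dual-tree idea
attractive in the first place.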
However, it totally falls apart when we need to get
aligned areas, e.g. for hugetlb or cache coloring on
architectures with virtually indexed caches.
For those kinds of allocations, we are back to tree
walking just like today, giving us a fairly large amount
of additional complexity for no obvious gain.
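To make the problem concrete: whether a hole can satisfy an aligned request
depends on the hole's address, not just its size, so a size-only index cannot
answer the query and we end up walking candidates anyway. A hedged sketch of
the per-hole check (hole_fits_aligned is a hypothetical helper, not an
existing kernel function):

```c
/* Hypothetical helper: can a hole [address, address + size) satisfy a
 * request of 'len' bytes at 'align' alignment (align a power of two)?
 * Rounding the start up to the alignment boundary may consume part of
 * the hole, so a hole of raw size >= len can still fail. */
static int hole_fits_aligned(unsigned long address, unsigned long size,
			     unsigned long len, unsigned long align)
{
	unsigned long start = (address + align - 1) & ~(align - 1);

	if (start < address)		/* alignment round-up overflowed */
		return 0;
	return start - address <= size && size - (start - address) >= len;
}
```

A 4KB hole starting at a 2KB offset from a 4KB boundary cannot hold a 4KB
aligned allocation, while an identical-sized hole that happens to start on
the boundary can; no ordering on size alone distinguishes the two.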
Is this really the path we want to go down?
--
All rights reversed
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/