Message-ID: <4E6000C4.7030007@linux.vnet.ibm.com>
Date: Thu, 01 Sep 2011 17:01:40 -0500
From: Seth Jennings <sjenning@...ux.vnet.ibm.com>
To: Dave Hansen <dave@...ux.vnet.ibm.com>
CC: Dan Magenheimer <dan.magenheimer@...cle.com>, gregkh@...e.de,
devel@...verdev.osuosl.org, ngupta@...are.org,
cascardo@...oscopio.com, rdunlap@...otime.net,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/3] staging: zcache: xcfmalloc support
On 09/01/2011 11:54 AM, Dave Hansen wrote:
> On Thu, 2011-09-01 at 11:33 -0500, Seth Jennings wrote:
>> xcfmalloc is also O(1) in that the number of freelists
>> that have to be checked is constant and not increasing
>> with the number of allocations. The constant hidden
>> in the O(1) for finding a suitable block is NUM_FREELISTS.
>
> The algorithm is technically O(n^2) since there are
> XCF_MAX_BLOCKS_PER_ALLOC searches through XCF_NUM_FREELISTS. There's
> also the reserved pages refill loop, which is linear too.
>
I was seeing n as the number of allocations. Since
XCF_MAX_BLOCKS_PER_ALLOC and XCF_NUM_FREELISTS are constants (i.e.
they don't increase with the number of allocations), wouldn't it be
O(1)?
I see it like this:

for (i = 0; i < 2; i++) {
	do_something();
}

vs.

do_something();
do_something();

Is one O(n) and the other O(1)? They do the same thing because the
loop iterates a constant number of times.

For it to be O(n) it would have to be:

for (i = 0; i < n; i++) {
	do_something();
}

Right?
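
To put the same point in terms of the code under discussion, here's a
rough sketch (not the actual xcfmalloc code; the constant values and
the freelist_pop() helper are made up for illustration) of the nested
search:

/* Sketch only: both loop bounds are compile-time constants, so the
 * work per call is bounded by XCF_MAX_BLOCKS_PER_ALLOC *
 * XCF_NUM_FREELISTS no matter how many allocations already exist.
 */
#define XCF_NUM_FREELISTS		64	/* made-up value */
#define XCF_MAX_BLOCKS_PER_ALLOC	2	/* made-up value */

/* hypothetical helper: take a block of at least min_size bytes from
 * freelist 'list'; returns the bytes it provides, or 0 if empty */
extern int freelist_pop(int list, int min_size);

static int sketch_alloc(int size)
{
	int i, list, got;

	/* outer loop: at most XCF_MAX_BLOCKS_PER_ALLOC iterations */
	for (i = 0; i < XCF_MAX_BLOCKS_PER_ALLOC && size > 0; i++) {
		got = 0;
		/* inner loop: at most XCF_NUM_FREELISTS probes */
		for (list = 0; list < XCF_NUM_FREELISTS && !got; list++)
			got = freelist_pop(list, size);
		if (!got)
			return -1;	/* would fall back to reserves */
		size -= got;
	}
	return size > 0 ? -1 : 0;
}

Whatever the real constants are, the work per call is bounded by
XCF_MAX_BLOCKS_PER_ALLOC * XCF_NUM_FREELISTS and doesn't grow with the
number of allocations, which is why I'd still call it O(1).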
> xcfmalloc's big compromise is that it doesn't do any searching or
> fitting. It might needlessly split larger blocks when two small ones
> would have worked, for instance.
Splitting a larger block is the last option. I might not
be understanding you correctly, but find_remove_block() does try to
find the optimal block to use, which is "searching and fitting" in my
mind.
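
For reference, the kind of search I mean looks roughly like this (a
sketch only, not the real find_remove_block(); size_to_flindex() and
the freelists[] array are invented, XCF_NUM_FREELISTS is the made-up
constant from the sketch above, and unlinking the block is omitted):

struct block;				/* opaque for this sketch */

extern struct block *freelists[];	/* hypothetical freelist heads */
extern int size_to_flindex(int size);	/* hypothetical size-class lookup */

static struct block *sketch_find_block(int size)
{
	int i;

	/* start at the smallest size class that fits and walk upward,
	 * so a larger block is only used (and possibly split) when no
	 * better-fitting class has a free block */
	for (i = size_to_flindex(size); i < XCF_NUM_FREELISTS; i++) {
		if (freelists[i])
			return freelists[i];
	}
	return NULL;	/* caller falls back to splitting / reserves */
}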
>
> -- Dave
>