Message-ID: <4FEC9392.2090904@redhat.com>
Date: Thu, 28 Jun 2012 13:25:38 -0400
From: Rik van Riel <riel@...hat.com>
To: Jim Schutt <jaschut@...dia.gov>
CC: linux-mm@...ck.org, akpm@...ux-foundation.org,
Mel Gorman <mel@....ul.ie>, kamezawa.hiroyu@...fujitsu.com,
minchan@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH -mm] mm: have order>0 compaction start off where it left
On 06/28/2012 01:16 PM, Jim Schutt wrote:
>
> On 06/27/2012 09:37 PM, Rik van Riel wrote:
>> Order > 0 compaction stops when enough free pages of the correct
>> page order have been coalesced. When doing subsequent higher order
>> allocations, it is possible for compaction to be invoked many times.
>>
>> However, the compaction code always starts out looking for things to
>> compact at the start of the zone, and for free pages to compact things
>> to at the end of the zone.
>>
>> This can cause quadratic behaviour, with isolate_freepages starting
>> at the end of the zone each time, even though previous invocations
>> of the compaction code already filled up all free memory on that end
>> of the zone.
>>
>> This can cause isolate_freepages to take enormous amounts of CPU
>> with certain workloads on larger memory systems.
>>
>> The obvious solution is to have isolate_freepages remember where
>> it left off last time, and continue at that point the next time
>> it gets invoked for an order > 0 compaction. This could cause
>> compaction to fail if cc->free_pfn and cc->migrate_pfn are close
>> together initially; in that case we restart from the end of the
>> zone and try once more.
>>
>> Forced full (order == -1) compactions are left alone.
>>
>> Reported-by: Jim Schutt <jaschut@...dia.gov>
>> Signed-off-by: Rik van Riel <riel@...hat.com>
>
> Tested-by: Jim Schutt <jaschut@...dia.gov>
>
> Please let me know if you further refine this patch
> and would like me to test it with my workload.
Mel pointed out a serious problem with the way wrapping
cc->free_pfn back to the top of the zone is handled.
I will send you a new patch once I have a fix for that.
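For anyone reading along without the patch handy, here is a minimal
standalone sketch of the behaviour the changelog describes: remember
where the free scanner stopped, resume there on the next order > 0
compaction, and allow one restart from the end of the zone if the
cached position turns out to be too close to the migration scanner.
The names (zone_sim, cached_free_pfn, and so on) are illustrative
assumptions, not the identifiers used by the actual patch.

#include <stdbool.h>
#include <stdio.h>

struct zone_sim {
	unsigned long zone_start_pfn;	/* first pfn of the zone */
	unsigned long zone_end_pfn;	/* one past the last pfn */
	unsigned long cached_free_pfn;	/* where isolate_freepages left off */
};

struct compact_control_sim {
	unsigned long migrate_pfn;	/* migration scanner, moves up */
	unsigned long free_pfn;		/* free scanner, moves down */
	int order;			/* -1 means a forced full compaction */
};

/* Pick the starting point for the free page scanner. */
static void start_free_scanner(struct zone_sim *z, struct compact_control_sim *cc)
{
	if (cc->order > 0 &&
	    z->cached_free_pfn > cc->migrate_pfn &&
	    z->cached_free_pfn < z->zone_end_pfn) {
		/* Resume where the previous order > 0 compaction stopped. */
		cc->free_pfn = z->cached_free_pfn;
	} else {
		/* Forced (order == -1) compaction, or nothing usable cached:
		 * start at the end of the zone as before. */
		cc->free_pfn = z->zone_end_pfn - 1;
	}
}

/* When the scanners meet: wrap back to the zone end and retry once. */
static bool scanners_met_retry(struct zone_sim *z, struct compact_control_sim *cc,
			       bool already_wrapped)
{
	if (cc->order > 0 && !already_wrapped) {
		cc->free_pfn = z->zone_end_pfn - 1;
		z->cached_free_pfn = cc->free_pfn;
		return true;
	}
	return false;
}

int main(void)
{
	struct zone_sim z = {
		.zone_start_pfn = 0,
		.zone_end_pfn = 1UL << 20,
		.cached_free_pfn = 1UL << 20,	/* nothing cached yet */
	};
	struct compact_control_sim cc = { .migrate_pfn = 0, .order = 3 };

	start_free_scanner(&z, &cc);
	printf("free scanner starts at pfn %lu\n", cc.free_pfn);

	/* Pretend the free scanner stopped partway down and remember it. */
	z.cached_free_pfn = cc.free_pfn - 4096;

	/* Later the scanners meet; wrap once and retry from the zone end. */
	cc.migrate_pfn = cc.free_pfn = z.cached_free_pfn;
	if (scanners_met_retry(&z, &cc, false))
		printf("scanners met, retrying from pfn %lu\n", cc.free_pfn);

	return 0;
}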
> So far I've run a total of ~20 TB of data over fifty minutes
> or so through 12 machines running this patch; no hint of
> trouble, great performance.
>
> Without this patch I would typically start having trouble
> after just a few minutes of this load.
Good to hear that!
Thank you for testing last night's version.
--
All rights reversed