Message-ID: <51BF024F.2080609@yandex-team.ru>
Date: Mon, 17 Jun 2013 16:34:23 +0400
From: Roman Gushchin <klamm@...dex-team.ru>
To: David Rientjes <rientjes@...gle.com>
CC: Christoph Lameter <cl@...two.org>, penberg@...nel.org,
mpm@...enic.com, akpm@...ux-foundation.org, mgorman@...e.de,
glommer@...allels.com, hannes@...xchg.org, minchan@...nel.org,
jiang.liu@...wei.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] slub: Avoid direct compaction if possible
On 15.06.2013 00:26, David Rientjes wrote:
> On Fri, 14 Jun 2013, Christoph Lameter wrote:
>
>>> It's possible to avoid such problems (or at least to make them less probable)
>>> by avoiding direct compaction. If a contiguous page cannot be allocated
>>> without compaction, slub will fall back to order-0 page(s). In this case,
>>> kswapd will be woken to perform asynchronous compaction, so slub can return
>>> to default-order allocations as soon as memory has been defragmented (a
>>> rough sketch of this path appears after the quoted text below).
>>
>> Sounds like a good idea. Do you have some numbers to show the effect of
>> this patch?
>>
>
> I'm surprised you like this patch; it basically makes slub allocations
> atomic and doesn't try memory compaction or reclaim. Asynchronous
> compaction certainly isn't aggressive enough to mimic the effects of the
> old lumpy reclaim that would have resulted in less fragmented memory. If
> slub is the only thing doing high-order allocations, it will start
> falling back to the smallest page order much more often.
>
> I agree that this doesn't seem like a slub issue at all but rather a page
> allocator issue; if we have many simultaneous thp faults and
> /sys/kernel/mm/transparent_hugepage/defrag is "always", then you'll get
> the same problem if deferred compaction isn't helping.
>
> So I don't think we should be patching slub in any special way here.
>
> Roman, are you using the latest kernel? If so, what does
> grep compact_ /proc/vmstat show after one or more of these events?
>
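
For context, a rough sketch of the fallback path described in the quoted
patch, assuming the 3.4-era allocate_slab()/alloc_slab_page() structure of
mm/slub.c; the function name and the exact flag handling below are
illustrative only, not the actual patch:

static struct page *alloc_slab_page_sketch(gfp_t flags, int node,
					   unsigned int order,
					   unsigned int min_order)
{
	struct page *page;
	gfp_t alloc_gfp;

	/*
	 * Try the preferred high order without __GFP_WAIT: no direct
	 * reclaim and no direct compaction, so the caller never stalls
	 * here.  The page allocator slowpath still wakes kswapd, which
	 * can compact memory asynchronously in the background.
	 */
	alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_WAIT;
	page = alloc_pages_node(node, alloc_gfp, order);
	if (page)
		return page;

	/*
	 * The high-order attempt failed: fall back to the cache's minimum
	 * order (usually 0) with the original flags, so this allocation
	 * may still enter direct reclaim if it has to.
	 */
	return alloc_pages_node(node, flags, min_order);
}

David's objection above maps onto the first call: with __GFP_WAIT cleared,
the high-order attempt behaves like an atomic allocation, so only kswapd's
background compaction keeps higher orders available.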
We're using 3.4, and the problem appeared when we moved from 3.2 to 3.4.
It can also be reproduced on 3.5.
I'll send the exact numbers as soon as I reproduce it again;
that can take up to a week.
Thanks!
Regards,
Roman
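
For reference, the counters David asked about are the compact_* lines of
/proc/vmstat; "grep compact_ /proc/vmstat" is all that is needed, but a
trivial user-space equivalent, sketched here only as an illustration, is:

#include <stdio.h>
#include <string.h>

/* Print the compaction counters, i.e. the compact_* lines of /proc/vmstat. */
int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}

	/* /proc/vmstat is one "name value" pair per line. */
	while (fgets(line, sizeof(line), f)) {
		if (strncmp(line, "compact_", 8) == 0)
			fputs(line, stdout);
	}

	fclose(f);
	return 0;
}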