Message-ID: <51BB4A53.4000505@yandex-team.ru>
Date: Fri, 14 Jun 2013 20:52:35 +0400
From: Roman Gushchin <klamm@...dex-team.ru>
To: Christoph Lameter <cl@...two.org>
CC: Pekka Enberg <penberg@...nel.org>, mpm@...enic.com,
akpm@...ux-foundation.org, mgorman@...e.de,
David Rientjes <rientjes@...gle.com>, glommer@...il.com,
hannes@...xchg.org, minchan@...nel.org, jiang.liu@...wei.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] slub: Avoid direct compaction if possible

On 14.06.2013 20:08, Christoph Lameter wrote:
> On Fri, 14 Jun 2013, Roman Gushchin wrote:
>
>> But there is an actual problem, that this patch solves.
>> Sometimes I saw the following issue on some machines:
>> all CPUs are performing compaction, system time is about 80%,
>> system is completely unreliable. It occurs only on machines
>> with specific workload (distributed data storage system, so,
>> intensive disk i/o is performed). A system can fall into
>> this state fast and unexpectedly or by progressive degradation.
>
> Well that is not a slab allocator specific issue but related to compaction
> concurrency. Likely cache line contention is causing a severe slowdown. But
> that issue could be triggered by any subsystem that does lots of memory
> allocations. I would suggest that we try to address the problem in the
> compaction logic rather than modifying allocators.

I agree that it's good to address the original issue, but I'm not sure
that it's a compaction issue. If someone wants to participate here,
I can provide more information. The main problem is that the issue is
__very__ hard to reproduce.

But I think all of that shouldn't stop us from modifying the allocator.
Falling back to the minimal order is in any case better than running
direct compaction, simply because it's faster. Am I wrong?
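
To illustrate what I mean (a sketch only, not the patch itself, and the
exact gfp handling is just one possible variant): the first, higher-order
attempt could drop __GFP_WAIT so the page allocator can never enter direct
reclaim or compaction for it, and only the fallback to s->min would use the
caller's original flags. Roughly, following the existing allocate_slab()
structure in mm/slub.c:

/*
 * Sketch only: try the preferred higher order without direct
 * reclaim/compaction, fall back to the minimum order on failure.
 */
static struct page *allocate_slab_sketch(struct kmem_cache *s,
					 gfp_t flags, int node)
{
	struct kmem_cache_order_objects oo = s->oo;
	struct page *page;
	gfp_t alloc_gfp;

	/*
	 * First attempt: preferred (higher) order, opportunistic.
	 * Without __GFP_WAIT the page allocator cannot enter
	 * direct reclaim or direct compaction for this attempt.
	 */
	alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) &
		    ~(__GFP_NOFAIL | __GFP_WAIT);

	page = alloc_slab_page(alloc_gfp, node, oo);
	if (unlikely(!page)) {
		/*
		 * The higher order is not available cheaply:
		 * retry at the minimum order with the caller's
		 * original flags instead of compacting.
		 */
		oo = s->min;
		page = alloc_slab_page(flags, node, oo);
	}

	return page;
}

If direct compaction is still wanted as a last resort, it would then only
run for the minimum-order fallback, which should be much cheaper than
compacting for the higher order on every slab allocation.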
Regards,
Roman