Message-ID: <e876c71a-702d-adc5-e043-658de3b462c3@linux.intel.com>
Date: Fri, 18 Aug 2017 09:36:10 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Mel Gorman <mgorman@...hsingularity.net>,
"Liang, Kan" <kan.liang@...el.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
linux-mm <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] sched/wait: Break up long wake list walk
On 08/18/2017 07:46 AM, Mel Gorman wrote:
> On Fri, Aug 18, 2017 at 02:20:38PM +0000, Liang, Kan wrote:
>>> Nothing fancy other than needing a comment if it works.
>>>
>>
>> No, the patch doesn't work.
>>
>
> That indicates that it may be a hot page and it's possible that the page is
> locked for a short time but waiters accumulate. What happens if you leave
> NUMA balancing enabled but disable THP? Waiting on migration entries also
> uses wait_on_page_locked so it would be interesting to know if the problem
> is specific to THP.
>
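(For anyone following along, that path looks roughly like this; paraphrased
from mm/migrate.c:__migration_entry_wait(), details trimmed, not the exact
source. Every thread that faults on a migration entry ends up sleeping in
wait_on_page_locked() on the page being migrated.)

	/* called with ptep mapped; takes and drops the pte lock itself */
	static void migration_entry_wait_sketch(pte_t *ptep, spinlock_t *ptl)
	{
		swp_entry_t entry;
		struct page *page;

		spin_lock(ptl);
		if (!is_swap_pte(*ptep))
			goto out;
		entry = pte_to_swp_entry(*ptep);
		if (!is_migration_entry(entry))
			goto out;

		page = migration_entry_to_page(entry);
		if (!get_page_unless_zero(page))
			goto out;
		pte_unmap_unlock(ptep, ptl);

		/* sleep on the page's wait queue until migration unlocks it */
		wait_on_page_locked(page);
		put_page(page);
		return;
	out:
		pte_unmap_unlock(ptep, ptl);
	}
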
> Can you tell me what this workload is doing? I want to see if it's something
> like many threads pounding on a limited number of pages very quickly.

It is a customer workload, so we have limited visibility. But we believe
there are some pages that are frequently accessed by all threads.

> If it's many threads working on private data, it would also be important to
> know how each thread's buffers are aligned, particularly if the buffers
> are smaller than a THP or base page size. For example, if each thread is
> operating on a base-page-sized buffer, then disabling THP would side-step
> the problem, but with THP there would be false sharing between multiple
> threads.
>
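To put some made-up numbers on the alignment point above (buffer and page
sizes here are assumptions, just to show the scale): with 4 KB per-thread
buffers packed back to back, a single 2 MB THP can back the buffers of up
to 512 threads, so one NUMA-balancing migration of that huge page stalls
all of them at once.

	#include <stdio.h>

	int main(void)
	{
		const unsigned long thp_size = 2UL << 20;	/* 2 MB transparent huge page */
		const unsigned long buf_size = 4UL << 10;	/* assumed per-thread buffer */

		/* how many threads' private buffers can false-share one THP */
		printf("threads sharing one THP: %lu\n", thp_size / buf_size);
		return 0;
	}
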
Still, I don't think this problem is THP-specific. If a hot regular page
gets migrated, we'll also see many threads queue up quickly. THP may have
made the problem worse because migrating it takes longer, meaning more
threads can get queued up.
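And the painful part is the unlock side: once migration finishes and the
page is unlocked, every waiter that queued up during the copy sits on one
hashed wait queue and gets walked under a single spinlock. Roughly the
shape of that path (paraphrased from wake_up_page_bit() in mm/filemap.c,
details trimmed, not the exact source):

	static void wake_up_page_bit_sketch(struct page *page, int bit_nr)
	{
		wait_queue_head_t *q = page_waitqueue(page);
		struct wait_page_key key = { .page = page, .bit_nr = bit_nr };
		unsigned long flags;

		spin_lock_irqsave(&q->lock, flags);
		/* the long walk: every waiter queued during migration is visited here */
		__wake_up_locked_key(q, TASK_NORMAL, &key);
		spin_unlock_irqrestore(&q->lock, flags);
	}
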
Thanks.
Tim