Message-ID: <37D7C6CF3E00A74B8858931C1DB2F07753787AE4@SHSMSX103.ccr.corp.intel.com>
Date: Fri, 18 Aug 2017 16:53:30 +0000
From: "Liang, Kan" <kan.liang@...el.com>
To: Mel Gorman <mgorman@...hsingularity.net>
CC: Linus Torvalds <torvalds@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
linux-mm <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH 1/2] sched/wait: Break up long wake list walk
> On Fri, Aug 18, 2017 at 02:20:38PM +0000, Liang, Kan wrote:
> > > Nothing fancy other than needing a comment if it works.
> > >
> >
> > No, the patch doesn't work.
> >
>
> That indicates that it may be a hot page and it's possible that the page is
> locked for a short time but waiters accumulate. What happens if you leave
> NUMA balancing enabled but disable THP?
No, disabling THP doesn't help in this case.
Thanks,
Kan
> Waiting on migration entries also
> uses wait_on_page_locked so it would be interesting to know if the problem
> is specific to THP.
>
> Can you tell me what this workload is doing? I want to see if it's something
> like many threads pounding on a limited number of pages very quickly. If it's
> many threads working on private data, it would also be important to know
> how each thread's buffers are aligned, particularly if the buffers are smaller
> than a THP or base page size. For example, if each thread is operating on a
> base-page-sized buffer then disabling THP would side-step the problem, but
> with THP there would be false sharing between multiple threads.
>
>
> --
> Mel Gorman
> SUSE Labs