Message-ID: <alpine.DEB.2.20.1709141138340.30688@nuc-kabylake>
Date: Thu, 14 Sep 2017 11:39:53 -0500 (CDT)
From: Christopher Lameter <cl@...ux.com>
To: Tim Chen <tim.c.chen@...ux.intel.com>
cc: Linus Torvalds <torvalds@...ux-foundation.org>,
"Liang, Kan" <kan.liang@...el.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
"Eric W . Biederman" <ebiederm@...ssion.com>,
Davidlohr Bueso <dave@...olabs.net>,
linux-mm <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2 v2] sched/wait: Introduce lock breaker in
wake_up_page_bit
On Wed, 13 Sep 2017, Tim Chen wrote:
> Here's what the customer thinks happened and is willing to tell us.
> They have a parent process that spawns off 10 children per core and
> kicks them off to run. The child processes all access a common library.
> We have 384 cores, so 3840 child processes running. When migration
> occurs on a page in the common library, the first child that accesses
> the page will page fault and lock the page, with the other children
> also page faulting quickly and piling up in the page wait list, until
> the first child is done.
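For illustration, here is a minimal reproducer sketch of that workload
(not code from the report; the library path, the bounded iteration
count, and the use of a plain read-only shared mapping are all
assumptions made for the sketch):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <sys/stat.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int main(void)
	{
		long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
		long page = sysconf(_SC_PAGESIZE);
		long nchild = ncpu * 10;	/* 10 children per core */
		/* Hypothetical stand-in for the customer's common library. */
		int fd = open("/usr/lib/libc.so.6", O_RDONLY);
		struct stat st;
		char *map;
		long i;

		if (fd < 0 || fstat(fd, &st) < 0) {
			perror("open/fstat");
			return 1;
		}
		/* One shared file-backed mapping plays the common library. */
		map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
		if (map == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		for (i = 0; i < nchild; i++) {
			if (fork() == 0) {
				volatile char sum = 0;
				long iter, off;
				/* Touch every page repeatedly: while one of
				 * the pages is under migration, each toucher
				 * faults and queues on the page lock,
				 * producing the wait-list pileup described
				 * above. */
				for (iter = 0; iter < 10000; iter++)
					for (off = 0; off < st.st_size; off += page)
						sum += map[off];
				_exit(0);
			}
		}
		while (wait(NULL) > 0)
			;
		return 0;
	}
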
I think we need some way to avoid migration in cases like this. This is
crazy. Page migration was not written to deal with something like this.
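If the migrations in question come from automatic NUMA balancing (an
assumption; the report does not say what triggered them), one coarse
userspace way to stop them system-wide is the kernel.numa_balancing
sysctl. A sketch, requiring root:

	#include <stdio.h>

	int main(void)
	{
		/* Writing 0 disables automatic NUMA balancing globally. */
		FILE *f = fopen("/proc/sys/kernel/numa_balancing", "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		fputs("0\n", f);
		return fclose(f) ? 1 : 0;
	}

This trades any NUMA locality gains for freedom from hinting-fault
migrations, so it is a blunt workaround rather than a fix for the
wait-queue contention itself.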