Message-ID: <20170124160722.GC12281@htj.duckdns.org>
Date: Tue, 24 Jan 2017 11:07:22 -0500
From: Tejun Heo <tj@...nel.org>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Vlastimil Babka <vbabka@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>,
Hillf Danton <hillf.zj@...baba-inc.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Petr Mladek <pmladek@...e.cz>
Subject: Re: [PATCH 3/4] mm, page_alloc: Drain per-cpu pages from workqueue context

Hello, Mel.

On Mon, Jan 23, 2017 at 11:04:29PM +0000, Mel Gorman wrote:
> On Mon, Jan 23, 2017 at 03:55:01PM -0500, Tejun Heo wrote:
> > Hello, Mel.
> >
> > On Mon, Jan 23, 2017 at 08:04:12PM +0000, Mel Gorman wrote:
> > > What is the actual mechanism that does that? It's not something that
> > > schedule_on_each_cpu does and one would expect that the core workqueue
> > > implementation would get this sort of detail correct. Or is this a proposal
> > > on how it should be done?
> >
> > If you use schedule_on_each_cpu(), it's all fine as the thing pins
> > cpus and waits for all the work items synchronously. If you wanna do
> > it asynchronously, right now, you'll have to manually synchronize work
> > items against the offline callback.
> >
>
> Is there something wrong with the current implementation and what it
> does? I ask because synchronising against the offline callback sounds
> like it would be a bit of a maintenance mess for relatively little gain.

As long as you wrap them with get/put_online_cpus(), the current
implementation should be fine.  If it were up to me, though, I'd rather
use static percpu work_structs and synchronize with a mutex.  The cost
of synchronizing via a mutex isn't high here compared to the overall
operation; the whole thing is synchronous anyway, and you won't have to
worry about falling back.
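
Something along these lines is what I have in mind.  It's just a rough
sketch off the top of my head, completely untested, and the function
names (drain_all_pages_from_wq() and the drain_local_pages(NULL)
callback) are only for illustration:

#include <linux/cpu.h>
#include <linux/gfp.h>
#include <linux/mutex.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

/* one static work item per cpu, protected by the mutex below */
static DEFINE_PER_CPU(struct work_struct, pcpu_drain_work);
static DEFINE_MUTEX(pcpu_drain_mutex);

static void drain_local_pages_wq(struct work_struct *work)
{
	/* runs on the cpu it was queued on and drains its pcp lists */
	drain_local_pages(NULL);
}

void drain_all_pages_from_wq(void)
{
	int cpu;

	/*
	 * The mutex serializes drainers so that two callers can never
	 * queue the same static work item at the same time, and
	 * get_online_cpus() keeps the online mask stable while we
	 * queue and flush.
	 */
	mutex_lock(&pcpu_drain_mutex);
	get_online_cpus();

	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(pcpu_drain_work, cpu);

		INIT_WORK(work, drain_local_pages_wq);
		schedule_work_on(cpu, work);
	}

	for_each_online_cpu(cpu)
		flush_work(&per_cpu(pcpu_drain_work, cpu));

	put_online_cpus();
	mutex_unlock(&pcpu_drain_mutex);
}

Because the work items are static and the mutex serializes the whole
thing, there's no allocation on the drain path and thus no fallback
case to handle.
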
Thanks.
--
tejun