Message-ID: <CAPcyv4j5P7OWbof8gMUmFRswR4YVQAJEZTAPNGhUup_y3XRYiw@mail.gmail.com>
Date: Mon, 26 Nov 2018 17:01:14 -0800
From: Dan Williams <dan.j.williams@...el.com>
To: alexander.h.duyck@...ux.intel.com
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Greg KH <gregkh@...uxfoundation.org>,
linux-nvdimm <linux-nvdimm@...ts.01.org>,
Tejun Heo <tj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux-pm mailing list <linux-pm@...r.kernel.org>,
jiangshanlai@...il.com, "Rafael J. Wysocki" <rafael@...nel.org>,
"Brown, Len" <len.brown@...el.com>, Pavel Machek <pavel@....cz>,
zwisler@...nel.org, Dave Jiang <dave.jiang@...el.com>,
bvanassche@....org
Subject: Re: [driver-core PATCH v6 1/9] workqueue: Provide queue_work_node to
queue work near a given NUMA node
On Thu, Nov 8, 2018 at 10:06 AM Alexander Duyck
<alexander.h.duyck@...ux.intel.com> wrote:
>
> Provide a new function, queue_work_node, which is meant to schedule work on
> a "random" CPU of the requested NUMA node. The main motivation for this is
> to help asynchronous init improve boot times for devices that are local to
> a specific node.
>
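
Looks good. For anyone following along, here is a minimal usage sketch as I
read the patch; the driver-side names (my_init_fn, kick_init, dev_node) are
my own illustration, not code from the series:

    #include <linux/workqueue.h>

    static void my_init_fn(struct work_struct *work)
    {
            /* node-local device bring-up would go here */
    }

    static DECLARE_WORK(my_init_work, my_init_fn);

    /*
     * Queue the init work near the device's NUMA node. The unbound
     * system workqueue satisfies the WQ_UNBOUND-only restriction
     * described below.
     */
    static void kick_init(int dev_node)
    {
            queue_work_node(dev_node, system_unbound_wq, &my_init_work);
    }
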
> For now we just default to the first CPU in the intersection of the node's
> cpumask and the online cpumask. The only exception is that if the current
> CPU is local to the node, we use it directly. This should work for our
> purposes, as we are currently only using this for unbound work, so the CPU
> will be translated back to a node anyway rather than being used directly.
>
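
The selection described above would boil down to something like this sketch
(the helper name is mine, and the real patch may structure it differently):

    static int sketch_select_cpu_near(int node)
    {
            int cpu;

            /* Defer to unbound behavior if the node is not usable */
            if (node < 0 || node >= MAX_NUMNODES || !node_online(node))
                    return WORK_CPU_UNBOUND;

            /* Use the current CPU if it is already on the target node */
            cpu = raw_smp_processor_id();
            if (node == cpu_to_node(cpu))
                    return cpu;

            /* "Random", i.e. first, online CPU of the requested node */
            cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);

            return cpu < nr_cpu_ids ? cpu : WORK_CPU_UNBOUND;
    }
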
> As we are only using the first CPU to represent the NUMA node for now, I am
> limiting the scope of the function so that it can only be used with unbound
> workqueues.
>
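
And given that unbound-only restriction, I would expect the entry point to
look roughly like this (again a sketch assuming the helper above, written as
if inside kernel/workqueue.c; not the patch verbatim):

    bool queue_work_node(int node, struct workqueue_struct *wq,
                         struct work_struct *work)
    {
            unsigned long flags;
            bool ret = false;

            /* Enforce the unbound-only scope described above */
            WARN_ON_ONCE(!(wq->flags & WQ_UNBOUND));

            local_irq_save(flags);
            if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT,
                                  work_data_bits(work))) {
                    int cpu = sketch_select_cpu_near(node);

                    __queue_work(cpu, wq, work);
                    ret = true;
            }
            local_irq_restore(flags);

            return ret;
    }
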
> Acked-by: Tejun Heo <tj@...nel.org>
> Reviewed-by: Bart Van Assche <bvanassche@....org>
> Signed-off-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
Acked-by: Dan Williams <dan.j.williams@...el.com>