Message-ID: <20181119164554.axobolrufu26kfah@ca-dmjordan1.us.oracle.com>
Date: Mon, 19 Nov 2018 08:45:54 -0800
From: Daniel Jordan <daniel.m.jordan@...cle.com>
To: Tejun Heo <tj@...nel.org>
Cc: Daniel Jordan <daniel.m.jordan@...cle.com>, linux-mm@...ck.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
aarcange@...hat.com, aaron.lu@...el.com, akpm@...ux-foundation.org,
alex.williamson@...hat.com, bsd@...hat.com,
darrick.wong@...cle.com, dave.hansen@...ux.intel.com,
jgg@...lanox.com, jwadams@...gle.com, jiangshanlai@...il.com,
mhocko@...nel.org, mike.kravetz@...cle.com,
Pavel.Tatashin@...rosoft.com, prasad.singamsetty@...cle.com,
rdunlap@...radead.org, steven.sistare@...cle.com,
tim.c.chen@...el.com, vbabka@...e.cz
Subject: Re: [RFC PATCH v4 05/13] workqueue, ktask: renice helper threads to
prevent starvation
On Tue, Nov 13, 2018 at 08:34:00AM -0800, Tejun Heo wrote:
> Hello, Daniel.
Hi Tejun, sorry for the delay. Plumbers...
> On Mon, Nov 05, 2018 at 11:55:50AM -0500, Daniel Jordan wrote:
> >  static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
> > -			     bool from_cancel)
> > +			     struct nice_work *nice_work, int flags)
> >  {
> >  	struct worker *worker = NULL;
> >  	struct worker_pool *pool;
> > @@ -2868,11 +2926,19 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
> >  	if (pwq) {
> >  		if (unlikely(pwq->pool != pool))
> >  			goto already_gone;
> > +
> > +		/* not yet started, insert linked work before work */
> > +		if (unlikely(flags & WORK_FLUSH_AT_NICE))
> > +			insert_nice_work(pwq, nice_work, work);
>
> So, I'm not sure this works that well. e.g. what if the work item is
> waiting for other work items which are at lower priority? Also, in
> this case, it'd be a lot simpler to simply dequeue the work item and
> execute it synchronously.
Good idea, that is much simpler (and shorter).
Done this way, the current task's nice level would be adjusted while it runs
the work synchronously.
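
Something like this is what I have in mind (untested sketch;
run_work_sync_at_nice() is a made-up name, the -EAGAIN/-ENOENT handling is
elided, and a real version would need to clear PENDING the way
process_one_work() does before calling the function):

static void run_work_sync_at_nice(struct work_struct *work, long nice)
{
	unsigned long irq_flags;
	long saved_nice = task_nice(current);

	/* steal the work off the queue, as the cancel path does */
	if (try_to_grab_pending(work, false, &irq_flags) < 0)
		return;		/* retry/error handling elided */
	local_irq_restore(irq_flags);

	set_user_nice(current, nice);
	/* a real version would clear PENDING first, as process_one_work() does */
	work->func(work);
	set_user_nice(current, saved_nice);
}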
>
> >  	} else {
> >  		worker = find_worker_executing_work(pool, work);
> >  		if (!worker)
> >  			goto already_gone;
> >  		pwq = worker->current_pwq;
> > +		if (unlikely(flags & WORK_FLUSH_AT_NICE)) {
> > +			set_user_nice(worker->task, nice_work->nice);
> > +			worker->flags |= WORKER_NICED;
> > +		}
> >  	}
>
> I'm not sure about this. Can you see whether canceling & executing
> synchronously is enough to address the latency regression?
In my testing, canceling was practically never successful because these are
long-running jobs: by the time the main ktask thread gets around to
flushing/renicing the works, the worker threads have already started running
them.  I had to write a no-op ktask just to hit the first path, where you
suggest dequeueing.  So adjusting the priority of an already-running worker
seems required to address the latency issue.
So instead of flush_work_at_nice, how about this?

    void renice_work_sync(struct work_struct *work, long nice);

If a worker is already running the work, renice that worker to 'nice' and wait
for the work to finish (what this patch does now); if the work hasn't started,
dequeue it and run it in the current thread, again at 'nice'.
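
Roughly this, as a sketch (running_worker_of() is hypothetical shorthand for
the find_worker_executing_work() lookup under the pool lock, and
run_work_sync_at_nice() is the sketch above):

void renice_work_sync(struct work_struct *work, long nice)
{
	/* hypothetical: really find_worker_executing_work() under the pool lock */
	struct worker *worker = running_worker_of(work);

	if (worker) {
		/* already running: renice the worker and wait for it */
		set_user_nice(worker->task, nice);
		flush_work(work);
	} else {
		/* not started: dequeue and run in the current thread */
		run_work_sync_at_nice(work, nice);
	}
}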
Thanks for taking a look.