Message-ID: <CAJhGHyDQ537NatcsTFAsTz=pKadnCtTYfvK_tXE=Z5oRp5FQyA@mail.gmail.com>
Date: Wed, 27 Jul 2022 16:55:09 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: Valentin Schneider <vschneid@...hat.com>
Cc: Tejun Heo <tj@...nel.org>, LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Frederic Weisbecker <frederic@...nel.org>,
Juri Lelli <juri.lelli@...hat.com>,
Phil Auld <pauld@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>
Subject: Re: [RFC PATCH] workqueue: Unbind workers before sending them to exit()
On Wed, Jul 27, 2022 at 2:30 PM Lai Jiangshan <jiangshanlai@...il.com> wrote:
> >
> > >
> > > What hasn't changed much between my attempts is transferring to-be-destroyed
> > > kworkers from their pool->idle_list to a reaper_list which is walked by
> > > *something* that does unbind+wakeup. AFAIA as long as the kworker is off
> > > the pool->idle_list we can play with it (i.e. unbind+wake) off the
> > > pool->lock.
> > >
> > > It's the *something* that's annoying to get right, I don't want it to be
> > > overly complicated given most users are probably not impacted by what I'm
> > > trying to fix, but I'm getting the feeling it should still be a per-pool
> > > kthread. I toyed with a single reaper kthread but a central synchronization
> > > for all the pools feels like a stupid overhead.
> >
> > I think fixing it in the workqueue.c is complicated.
> >
> > Nevertheless, I will also try to fix it inside workqueue only to see
> > what will come up.
>
> I'm going to kind of revert 3347fc9f36e7 ("workqueue: destroy worker
> directly in the idle timeout handler"), so that we can have a sleepable
> destroy_worker().
>
That is not a good idea: the woken-up manager might itself still be
running on the isolated CPU.
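
For reference, the handoff Valentin describes above could look roughly like
the sketch below: while pool->lock is still held, the idle-timeout path moves
the victim off pool->idle_list onto a separate reaper list, so the unbind and
wakeup can happen later without pool->lock. This is only a sketch, not the
actual patch; the reaper list, its lock and the helper name are assumptions.

/*
 * Hypothetical sketch of the handoff only (names are assumptions):
 * detach the to-be-destroyed worker from pool->idle_list under
 * pool->lock so that unbind+wakeup can be done later, lockless
 * w.r.t. the pool.
 */
static LIST_HEAD(reaper_list);			/* assumed global reaper list */
static DEFINE_RAW_SPINLOCK(reaper_lock);	/* assumed lock protecting it */

static void hand_off_dying_worker(struct worker_pool *pool,
				  struct worker *worker)
{
	lockdep_assert_held(&pool->lock);

	/* Mirrors what destroy_worker() does when pulling an idle worker. */
	pool->nr_idle--;

	/* pool->lock is taken with irqs disabled, so no _irq here. */
	raw_spin_lock(&reaper_lock);
	list_move(&worker->entry, &reaper_list);
	raw_spin_unlock(&reaper_lock);
}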
On Wed, Jul 27, 2022 at 6:59 AM Tejun Heo <tj@...nel.org> wrote:
>
> I mean, whatever works works but let's please keep it as minimal as
> possible. Why does it need dedicated kthreads in the first place? Wouldn't
> scheduling an unbound work item work just as well?
>
Scheduling an unbound work item will work well.
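
As a rough illustration (again not the actual patch), the unbound work item
could simply splice the reaper list from the sketch above and do the
unbind+wakeup from a context that is never pinned to an isolated CPU; the
function and list names below are assumptions carried over from that sketch.

/*
 * Hypothetical sketch: an unbound work item walks the reaper list,
 * lets each dying kworker run on any CPU again and wakes it so it
 * can proceed to exit.
 */
static void reap_dying_workers(struct work_struct *work)
{
	struct worker *worker, *tmp;
	LIST_HEAD(to_reap);

	raw_spin_lock_irq(&reaper_lock);
	list_splice_init(&reaper_list, &to_reap);
	raw_spin_unlock_irq(&reaper_lock);

	list_for_each_entry_safe(worker, tmp, &to_reap, entry) {
		list_del_init(&worker->entry);
		/* Unbind: the dying kworker may run anywhere now. */
		set_cpus_allowed_ptr(worker->task, cpu_possible_mask);
		wake_up_process(worker->task);
	}
}
static DECLARE_WORK(reaper_work, reap_dying_workers);

/* Queued from the idle timeout path, right after the handoff: */
/*	queue_work(system_unbound_wq, &reaper_work); */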