Message-ID: <20141030101932.2241daa7@notabene.brown>
Date:	Thu, 30 Oct 2014 10:19:32 +1100
From:	NeilBrown <neilb@...e.de>
To:	Tejun Heo <tj@...nel.org>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH/RFC] workqueue: allow rescuer thread to do more work.

On Wed, 29 Oct 2014 10:32:10 -0400 Tejun Heo <tj@...nel.org> wrote:

> Hello, Neil.
> 
> On Wed, Oct 29, 2014 at 05:26:08PM +1100, NeilBrown wrote:
> > Hi Tejun,
> >   I haven't tested this patch yet, so this really is an 'RFC'.
> > In general ->nr_active should only be accessed under the pool->lock,
> > but a misread here will at most cause a very occasional 100ms delay so
> > shouldn't be a big problem.  The only thread likely to change ->nr_active is
> > this thread, so such a delay would be extremely unlikely.
> > 
> > Thanks,
> > NeilBrown
> > 
> > 
> > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > index 09b685daee3d..d0a8b101c5d9 100644
> > --- a/kernel/workqueue.c
> > +++ b/kernel/workqueue.c
> > @@ -2244,16 +2244,18 @@ repeat:
> >  		spin_lock_irq(&pool->lock);
> >  		rescuer->pool = pool;
> >  
> > -		/*
> > -		 * Slurp in all works issued via this workqueue and
> > -		 * process'em.
> > -		 */
> > -		WARN_ON_ONCE(!list_empty(&rescuer->scheduled));
> > -		list_for_each_entry_safe(work, n, &pool->worklist, entry)
> > -			if (get_work_pwq(work) == pwq)
> > -				move_linked_works(work, scheduled, &n);
> > +		do {
> > +			/*
> > +			 * Slurp in all works issued via this workqueue and
> > +			 * process'em.
> > +			 */
> > +			WARN_ON_ONCE(!list_empty(&rescuer->scheduled));
> > +			list_for_each_entry_safe(work, n, &pool->worklist, entry)
> > +				if (get_work_pwq(work) == pwq)
> > +					move_linked_works(work, scheduled, &n);
> >  
> > -		process_scheduled_works(rescuer);
> > +			process_scheduled_works(rescuer);
> > +		} while (need_more_worker(pool) && pwq->nr_active);
> 
> need_more_worker(pool) is always true for unbound pools as long as
> there are work items queued, so the above condition may stay true
> longer than it needs to.

Because ->nr_running isn't maintained for WORKER_UNBOUND (which is part of
WORKER_NOT_RUNNING) - got it.
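
For reference, a sketch of the two checks as I read them in this era of
kernel/workqueue.c (paraphrased from memory, so double-check against the
tree):

static bool __need_more_worker(struct worker_pool *pool)
{
        /*
         * Workers carrying WORKER_NOT_RUNNING flags (which include
         * WORKER_UNBOUND) never bump nr_running, so for unbound pools
         * this reads 0 and the test is always true.
         */
        return !atomic_read(&pool->nr_running);
}

static bool need_more_worker(struct worker_pool *pool)
{
        return !list_empty(&pool->worklist) && __need_more_worker(pool);
}

So for an unbound pool, need_more_worker() degenerates to "is the worklist
non-empty", which is exactly why the loop condition can stay true longer
than it needs to.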

>                           Given that worker depletion is a pool-wide
> event, maybe it'd make sense to trigger rescuers immediately while
> workers are in short supply?  e.g. while there's a manager stuck in
> maybe_create_worker() with the mayday timer already triggered?

So what if I change "need_more_worker" to "need_to_create_worker"?
Then it will stop as soon as there is an idle worker thread.
That is the condition that keeps maybe_create_worker() looping.
??
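
For comparison, a sketch of that check from the same file (again
paraphrased, so treat as approximate):

/* Can I start working?  Called from busy but !running workers. */
static bool may_start_working(struct worker_pool *pool)
{
        return pool->nr_idle;
}

/* Do we need a new worker?  Called from manager. */
static bool need_to_create_worker(struct worker_pool *pool)
{
        return need_more_worker(pool) && !may_start_working(pool);
}

With that condition, the rescuer would keep looping only while work is
queued and the pool has no idle worker to take it, i.e. precisely while
maybe_create_worker() would still be trying to create one.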

Thanks,
NeilBrown


