Message-Id: <1250098593.4000.114.camel@mulgrave.site>
Date:	Wed, 12 Aug 2009 17:36:33 +0000
From:	James Bottomley <James.Bottomley@...senPartnership.com>
To:	Alan Stern <stern@...land.harvard.edu>
Cc:	Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Kernel development list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] Add kerneldoc for flush_scheduled_work()

On Wed, 2009-08-12 at 13:25 -0400, Alan Stern wrote:
> On Wed, 12 Aug 2009, James Bottomley wrote:
> 
> > > >      2. If you have to hold a lock while waiting:
> > > >              1. If it's a global lock, make sure you're using a local
> > > >                 queue and that nothing you submitted to the queue can
> > > >                 take the lock
> > > >              2. If it's a local lock, you may use a global queue but
> > > >                 must still make sure that nothing you submitted to the
> > > >                 queue can take the lock.
> 
> > > Anyway, 2.1 and 2.2 are wrong.  They should read: "... make sure that
> 
> (I overstated things; 2.1 is okay.  But 2.2 is wrong.  And
> flush_scheduled_work() definitely concerns a global queue.)
> 
> > > nothing submitted to the queue calls any of your routines that can take
> > > the lock."  The point being that even though _you_ don't submit
> > > anything bad to the queue, somebody else might do so.
> > 
> > Semantically "make sure that nothing submitted to the queue can take a
> > lock" means exactly that ... it includes all routines called by the
> > queue function.
> 
> James, you wrote "make sure that nothing _you_ submitted to the queue" 
> (my emphasis).  That is quite different from "make sure that nothing 
> submitted to the queue".  One is doable, the other isn't.

If it's a local lock, how would something someone else submitted to the
queue, which would be out of scope, take this local lock?  A local lock,
by definition, is local to the code scope you control.
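
(For illustration only -- this is not code from the thread, and all the
names in it are invented.  It sketches the local-lock case of rule 2.2
quoted above: because both the lock and the work item live in one
driver, the author can verify by inspection that the work function
never takes the lock, so waiting on the queue while holding the lock
cannot deadlock on *this* item.)

/*
 * Hypothetical sketch: foo_lock and foo_work are invented names.
 */
#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(foo_lock);		/* the "local lock" */

static void foo_work_fn(struct work_struct *work)
{
	/* does its job without ever taking foo_lock */
}
static DECLARE_WORK(foo_work, foo_work_fn);

static void foo_submit(void)
{
	schedule_work(&foo_work);	/* global (keventd) queue */
}

static void foo_teardown(void)
{
	mutex_lock(&foo_lock);
	flush_scheduled_work();		/* safe w.r.t. foo_work_fn, but
					 * also waits for everyone else's
					 * items on the global queue */
	mutex_unlock(&foo_lock);
}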

> > > If you use cancel_work_sync() instead of flush_scheduled_work() then 
> > > the rules become less onerous.  In place of your 2.1 and 2.2, we have:
> > > 
> > > 	Make sure the work item you are cancelling cannot take
> > > 	the lock.
> > > 
> > > This is much easier to verify.
> > 
> > It's the same, surely ... you must verify that everything going onto
> > the queue obeys the rules, otherwise you get things on it which can't
> > be cancelled ...
> 
> Not at all.  With cancel_work_sync() you must verify only that the
> item you want to cancel obeys the rules.  There's no need to check the
> other items -- and a public workqueue like keventd_wq certainly will
> contain other items.

Um, so this is called on driver removal, which is an asynchronous event.
Thus, how can you ensure that what's on the queue (which you are
advocating calling cancel sync for) doesn't violate the locking rules at
the time the driver is removed, unless you ensure that every piece of
work submitted doesn't violate them?

Thus the rules for calling flush and cancel sync are the same.
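
(Again an invented sketch, not code from the thread; bar_lock and
bar_work are hypothetical names.  It shows the deadlock that both rules
guard against: if the work function takes the lock, then waiting for it
while holding that lock never completes once the work function has
started and blocked -- whether the wait is cancel_work_sync() or
flush_scheduled_work().)

#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(bar_lock);

static void bar_work_fn(struct work_struct *work)
{
	mutex_lock(&bar_lock);		/* blocks while teardown holds it */
	/* do the work */
	mutex_unlock(&bar_lock);
}
static DECLARE_WORK(bar_work, bar_work_fn);

static void bar_teardown(void)		/* e.g. on driver removal */
{
	mutex_lock(&bar_lock);
	cancel_work_sync(&bar_work);	/* DEADLOCK if bar_work_fn has
					 * already started and is blocked
					 * on bar_lock */
	mutex_unlock(&bar_lock);
}

(The disputed point is scope: cancel_work_sync() waits only for this
one item, while flush_scheduled_work() also waits for every other item
on the shared queue.)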

James


> > and thus either the advice to call cancel is wrong, or we
> > could call flush because the locking rules are obeyed.
> 
> No; as shown above, your logic is flawed.
> 
> Alan Stern
> 
> P.S.: Ingo, I rather expected to see you comment on this thread.  What 
> is your opinion?
