Message-ID: <20060821082454.GA19320@htj.dyndns.org>
Date:	Mon, 21 Aug 2006 17:24:54 +0900
From:	Tejun Heo <htejun@...il.com>
To:	Dipankar Sarma <dipankar@...ibm.com>
Cc:	viro@...iv.linux.org.uk, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] file: kill unnecessary timer in fdtable_defer

On Mon, Aug 21, 2006 at 01:28:18PM +0530, Dipankar Sarma wrote:
> On Mon, Aug 21, 2006 at 02:18:16PM +0900, Tejun Heo wrote:
> > On Mon, Aug 21, 2006 at 10:02:57AM +0530, Dipankar Sarma wrote:
> > > > regardless of its return value.  0 return simply means that the work
> > > > was already pending and thus no further action was required.
> > > 
> > > Hmm.. Is this really true?  IIRC, schedule_work() checks for
> > > pending work based on bit ops on work->pending, and clear_bit()
> > > is not a memory barrier.
> > 
> > Those bitops are not memory barriers but they can define order between
> > them alright.  Once the execution order is correct, the rest of
> 
> Huh?  If they are not memory barriers, then how can you guarantee
> ordering on weakly ordered CPUs?

Atomic bitops define an order *between* them, not *around* them, i.e.
if you have two atomic bitops on the same bit, they're ordered one way
or the other.  As the workqueue code currently stands, ordering around
the bitops is the caller's responsibility, and fddef does it with a
spinlock.  As the producer and consumer usually access a common
resource, that kind of synchronization is inherent in most cases.
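
To illustrate, here's roughly the pattern I have in mind.  Untested
sketch, not the actual fddef code -- fddef_lock, fddef_list,
fddef_workfn and struct fddef_job are all made-up names, written
against the current DECLARE_WORK(name, func, data) API.

	struct fddef_job {
		struct list_head node;
		/* payload goes here */
	};

	static void fddef_workfn(void *unused);

	static DEFINE_SPINLOCK(fddef_lock);
	static LIST_HEAD(fddef_list);
	static DECLARE_WORK(fddef_work, fddef_workfn, NULL);

	static void fddef_queue(struct fddef_job *job)
	{
		spin_lock(&fddef_lock);
		list_add_tail(&job->node, &fddef_list);
		spin_unlock(&fddef_lock);

		/*
		 * A zero return just means the work was already
		 * pending.  Either way the worker is guaranteed to
		 * run after this point, and the unlock above has
		 * already published the new job to it.
		 */
		schedule_work(&fddef_work);
	}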

[--snip--]
> Given that there is no smp_mb__after_clear_bit() after clearing
> work->pending, what prevents the worker thread from seeing

Let me revise the diagram.

            Queuer                         Worker
 
             lock
         -------------             <--- clr pending --->
        | produce job |                       |
        |             |                       v
         -------------                  trying to lock
            unlock                            v
               |                        lock granted
               v                       --------------
     <--- set pending --->            | consume jobs |
                                      |              |
                                       --------------
                                           unlock

> the state of the deferred fd queue before setting the pending bit?
> IOW, the queuer sees pending = 1 and skips waking the worker thread,
> while the worker sees a stale state of the deferred fd queue and
> ignores the newly queued work.  That should be possible on a cpu
> with weak memory ordering.

The queue being a shared resource, people usually synchronize around it.
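
Concretely, the worker side grabs the same lock and drains the whole
list, so even if it observes a stale pending bit it can never see a
half-updated queue.  Continuing the made-up sketch from above
(consume() is hypothetical):

	static void fddef_workfn(void *unused)
	{
		LIST_HEAD(todo);

		/* atomically steal everything queued so far */
		spin_lock(&fddef_lock);
		list_splice_init(&fddef_list, &todo);
		spin_unlock(&fddef_lock);

		while (!list_empty(&todo)) {
			struct fddef_job *job =
				list_entry(todo.next, struct fddef_job, node);

			list_del(&job->node);
			consume(job);
		}
	}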

> Perhaps, we should fix __queue_work() to add the
> smp_mb__after_clear_bit() and make sure that we have a memory
> barrier after adding the deferred fds.

That would help if the producer and consumer depend on memory ordering
for synchronization, which usually isn't the case.  I'm not sure
whether adding that is a good idea or not.  IMHO, it's better to use
explicit memory barriers with enough comments in such subtle cases to
make things clear.
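
For reference, if a caller really did want to rely on memory ordering
instead of a lock, the barriers would have to pair up something like
this (sketch from memory of the current code -- bit 0 of work->pending
is what queue_work() uses IIRC, and publish_job() is hypothetical):

	/* queuer */
	publish_job();		/* lockless update of the shared queue */
	smp_mb();		/* queue update visible before pending is set */
	schedule_work(work);	/* test_and_set_bit(0, &work->pending) */

	/* worker, conceptually what run_workqueue() does */
	clear_bit(0, &work->pending);
	smp_mb__after_clear_bit();	/* the barrier suggested above */
	work->func(work->data);		/* must not see a stale queue */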

Either way, fddef is just a straightforward producer-consumer case
with no need for a requeueing mechanism.  There's no need for one,
and no other user does anything similar.

Thanks.

-- 
tejun