Message-ID: <20070919231943.4b121361@lappy>
Date: Wed, 19 Sep 2007 23:19:43 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: "Dmitry Torokhov" <dmitry.torokhov@...il.com>
Cc: paulmck@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
"Ingo Molnar" <mingo@...e.hu>,
"Andrew Morton" <akpm@...ux-foundation.org>,
"Nick Piggin" <nickpiggin@...oo.com.au>
Subject: Re: [RFC][PATCH 2/6] lockdep: validate rcu_dereference() vs
rcu_read_lock()
On Wed, 19 Sep 2007 16:41:04 -0400 "Dmitry Torokhov"
<dmitry.torokhov@...il.com> wrote:
> > If the IRQ handler does rcu_read_lock()/unlock() and the i8042_stop()
> > function does sync_rcu() instead of _sched(), it should be good again.
> > It will not affect anything other than the task that calls _stop(), and
> > even there the only change is that the sleep might be a tad longer.
>
> And the IRQ handler needs to do some extra work... Anyway, it looks like
> -rt breaks synchronize_sched() and needs to have it fixed:
>
> "/**
> * synchronize_sched - block until all CPUs have exited any non-preemptive
> * kernel code sequences.
> *
> * This means that all preempt_disable code sequences, including NMI and
> * hardware-interrupt handlers, in progress on entry will have completed
> * before this primitive returns."
That still does as it says in -rt. It's just that the interrupt handler
will be preemptible, so the guarantees it gives are useless.
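
Concretely, the pattern suggested above would look something like this
(a minimal sketch only; the port structure, the i8042_read_data() helper
and the exact serio plumbing are assumed for illustration, not taken
from the actual driver):

struct i8042_port {
	struct serio *serio;		/* cleared by i8042_stop() */
};

static irqreturn_t i8042_interrupt(int irq, void *dev_id)
{
	struct i8042_port *port = dev_id;
	struct serio *serio;
	unsigned char data = i8042_read_data();	/* assumed helper */

	/*
	 * The read side only grows a rcu_read_lock()/unlock() pair;
	 * this still works on -rt, where the handler runs in a
	 * preemptible thread.
	 */
	rcu_read_lock();
	serio = rcu_dereference(port->serio);
	if (serio)
		serio_interrupt(serio, data, 0);
	rcu_read_unlock();

	return IRQ_HANDLED;
}

static void i8042_stop(struct serio *serio)
{
	struct i8042_port *port = serio->port_data;

	rcu_assign_pointer(port->serio, NULL);
	/*
	 * synchronize_rcu() rather than synchronize_sched(): it waits
	 * for the rcu_read_lock() section above to complete, which a
	 * preempt_disable()-based grace period no longer guarantees
	 * once the handler is preemptible.
	 */
	synchronize_rcu();
}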
> > I find it curious that a driver that is 'low performant' and does not
> > suffer lock contention pioneers locking schemes. I agree with
> > optimizing, but this is not the place to push the envelope.
>
> Please realize that every microsecond wasted on a 'low performant'
> driver is taken from high performers, and if we can help it why
> shouldn't we?
Sure, but the cache eviction caused by running the driver will have
more impact than the added rcu_read_{,un}lock() calls.