Message-ID: <20080422133631.GA28594@linux.vnet.ibm.com>
Date: Tue, 22 Apr 2008 06:36:31 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
"Rafael J. Wysocki" <rjw@...k.pl>,
LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-ext4@...r.kernel.org
Subject: Re: 2.6.25-git2: BUG: unable to handle kernel paging request at ffffffffffffffff
On Tue, Apr 22, 2008 at 09:03:04AM +0800, Herbert Xu wrote:
> On Mon, Apr 21, 2008 at 08:49:58AM -0700, Linus Torvalds wrote:
> >
> > That is *not* the main problem.
> >
> > If you use "rcu_dereference()" on the wrong access, it not only loses the
> > "smp_read_barrier_depends()" (which is a no-op on all sane architectures
> > anyway), but it loses the ACCESS_ONCE() thing *entirely*.
>
> Actually rcu_dereference didn't have ACCESS_ONCE when I did this.
> That only appeared later with the preemptible RCU work.
Yep, ACCESS_ONCE() is quite recent -- within the last year. So I should
have modified the list_for_each.*rcu() macros when I made that change.
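
For reference, here is roughly what rcu_dereference() looks like after
that change, alongside ACCESS_ONCE() itself (a sketch from memory --
see include/linux/compiler.h and include/linux/rcupdate.h for the real
definitions):

	/* Force a single access to x, so the compiler can neither
	 * refetch nor merge accesses. */
	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

	/* Fetch an RCU-protected pointer exactly once, then order
	 * subsequent dereferences via the data-dependency barrier
	 * (a no-op everywhere except Alpha). */
	#define rcu_dereference(p)	({ \
					typeof(p) _________p1 = ACCESS_ONCE(p); \
					smp_read_barrier_depends(); \
					(_________p1); \
					})

The earlier version was identical except that it assigned from "p"
directly rather than from ACCESS_ONCE(p).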
> The original purpose of rcu_dereference was exactly to replace the
> explicit barriers that people were using for RCU, nothing more,
> nothing less.
>
> Oh and I totally agree that the compiler is going to generate insane
> code whenever ACCESS_ONCE is used. In this case we may have avoided
> it by rearranging the code, but in general the introduction of ACCESS_ONCE
> in rcu_dereference is likely to have a negative impact on the code
> generated.
>
> Remember that "volatile" discussion? I think this is where it all came
> from.
And I still have the bug filed against gcc:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33102
Interesting, it is currently in "unconfirmed" status... I guess I
should supply a test case.
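
Purely to illustrate Herbert's code-generation point, rather than the
exact contents of that bug, such a test case might look something like
the following (hypothetical sketch): the volatile cast in ACCESS_ONCE()
prevents the compiler from hoisting the load out of the loop, so the
second function must reload the value on every pass.

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

	int sum_plain(int *p, int n)
	{
		int i, sum = 0;

		for (i = 0; i < n; i++)
			sum += *p;		/* load may be hoisted */
		return sum;
	}

	int sum_once(int *p, int n)
	{
		int i, sum = 0;

		for (i = 0; i < n; i++)
			sum += ACCESS_ONCE(*p);	/* one load per pass */
		return sum;
	}

Whether gcc ends up generating more code for the volatile version than
a single per-iteration load really requires is the sort of thing a
test case would need to show.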
Thanx, Paul