Message-ID: <20090629092407.GA3845@jolsa.lab.eng.brq.redhat.com>
Date: Mon, 29 Jun 2009 11:24:07 +0200
From: Jiri Olsa <jolsa@...hat.com>
To: Andi Kleen <andi@...stfloor.org>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
fbl@...hat.com, nhorman@...hat.com, davem@...hat.com,
oleg@...hat.com, eric.dumazet@...il.com
Subject: Re: [PATCH] net: fix race in the receive/select
On Mon, Jun 29, 2009 at 11:12:33AM +0200, Andi Kleen wrote:
> Jiri Olsa <jolsa@...hat.com> writes:
>
> > Adding memory barrier to the __pollwait function paired with
> > receive callbacks. The smp_mb__after_lock define is added,
> > since {read|write|spin}_lock() on x86 are full memory barriers.
>
> I was wondering, did you actually see that race happen in practice?
> If so, on which system?
>
> At least on x86 I can't see how it happens. mb() is only a compile-time
> barrier there, and the compiler doesn't optimize across indirect callbacks
> like __pollwait() anyway.
>
> It might still be needed on some more weakly ordered architectures, but did
> you actually see it there?
>
> -Andi
Yes, we have a customer who has been able to reproduce this problem on an
x86_64 system (two Xeon E5345 CPUs), but they could not reproduce it on a
Xeon MV, for example. They captured a backtrace when the race happened:
https://bugzilla.redhat.com/show_bug.cgi?id=494404#c1
jirka
>
> --
> ak@...ux.intel.com -- Speaking for myself only.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html