Message-Id: <20071214.113951.114866674.davem@davemloft.net>
Date: Fri, 14 Dec 2007 11:39:51 -0800 (PST)
From: David Miller <davem@...emloft.net>
To: xemul@...nvz.org
Cc: herbert@...dor.apana.org.au, netdev@...r.kernel.org,
devel@...nvz.org
Subject: Re: [PATCH][XFRM] Fix potential race vs xfrm_state(only)_find and
xfrm_hash_resize.
From: Pavel Emelyanov <xemul@...nvz.org>
Date: Thu, 13 Dec 2007 13:56:14 +0300
> The _find calls calculate the hash value using
> xfrm_state_hmask without holding the xfrm_state_lock. But the
> value of this mask can change in the _resize call under
> the state_lock, so we risk failing to find the desired
> entry in the hash.
>
> I think the hash value is better calculated
> under the state lock.
>
> Signed-off-by: Pavel Emelyanov <xemul@...nvz.org>
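
For reference, here is a simplified sketch of the pattern being fixed. The
names are stand-ins for xfrm_state_bydst, xfrm_state_hmask and
xfrm_state_lock, not the actual net/xfrm/xfrm_state.c code:

	#include <linux/spinlock.h>
	#include <linux/list.h>
	#include <linux/types.h>

	/* Simplified stand-ins for the real xfrm state table globals. */
	static struct hlist_head *state_bydst;
	static unsigned int state_hmask;
	static DEFINE_SPINLOCK(state_lock);

	/* Racy variant: the mask (and table pointer) are read before the
	 * lock, so a concurrent resize, which installs a bigger table and
	 * a new mask under state_lock, can leave us walking the wrong
	 * bucket. */
	static struct hlist_head *bucket_racy(u32 key)
	{
		struct hlist_head *h = state_bydst + (key & state_hmask);

		spin_lock(&state_lock);
		/* ... walk *h looking for the state ... */
		spin_unlock(&state_lock);
		return h;
	}

	/* Fixed variant: take the lock first, then compute the hash, so
	 * the mask and the table we index always belong to the same
	 * generation of the hash table. */
	static struct hlist_head *bucket_fixed(u32 key)
	{
		struct hlist_head *h;

		spin_lock(&state_lock);
		h = state_bydst + (key & state_hmask);
		/* ... walk *h looking for the state ... */
		spin_unlock(&state_lock);
		return h;
	}
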
Thanks for the bug fix.
I know why I coded it this way: I wanted to give GCC more
room to schedule the loads away from their uses in the hash
calculation.
Once the calculation is crammed in after the spin lock acquire, GCC
can't load unrelated values earlier to soften the load/use cost on
cache misses.
Of course the old placement is invalid, because the hash mask can
change, as you noticed.
I wish there were a way to conditionally clobber memory; then we could
tell GCC exactly which memory objects are protected by the lock, which
would help enormously in situations like this.