Message-ID: <453212cf-8987-9f05-ceae-42a4fc3b0876@gmail.com>
Date: Thu, 6 Feb 2020 09:10:34 -0800
From: Eric Dumazet <eric.dumazet@...il.com>
To: "Jason A. Donenfeld" <Jason@...c4.com>, eric.dumazet@...il.com
Cc: cai@....pw, netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Marco Elver <elver@...gle.com>
Subject: Re: [PATCH v3] skbuff: fix a data race in skb_queue_len()
On 2/6/20 8:38 AM, Jason A. Donenfeld wrote:
> Hi Eric,
>
> On Tue, Feb 04, 2020 at 01:40:29PM -0500, Qian Cai wrote:
>> - list->qlen--;
>> + WRITE_ONCE(list->qlen, list->qlen - 1);
>
> Sorry I'm a bit late to the party here, but this immediately jumped out.
> This generates worse code with a bigger race in some sense:
>
> list->qlen-- is:
>
> 0: 83 6f 10 01 subl $0x1,0x10(%rdi)
>
> whereas WRITE_ONCE(list->qlen, list->qlen - 1) is:
>
> 0: 8b 47 10 mov 0x10(%rdi),%eax
> 3: 83 e8 01 sub $0x1,%eax
> 6: 89 47 10 mov %eax,0x10(%rdi)
>
> Are you sure that's what we want?
>
> Jason
>
Unfortunately we do not have ADD_ONCE() or something like that.
Sure, on x86 we could get much better code generation.
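
For anyone who wants to reproduce the comparison locally, here is a minimal
standalone sketch (userspace C, with a volatile cast standing in for what
WRITE_ONCE() roughly expands to; exact output of course depends on the
compiler), e.g. built with gcc -O2 -S:

	/* Two 8-byte pointers put qlen at offset 0x10 on x86-64,
	 * matching the 0x10(%rdi) in the dump above. */
	struct sk_buff_head {
		void *next;
		void *prev;
		unsigned int qlen;
	};

	/* Plain decrement: the compiler is free to use a single
	 * memory-destination subl. */
	void dec_plain(struct sk_buff_head *list)
	{
		list->qlen--;
	}

	/* WRITE_ONCE-style store: a plain load feeding a volatile store,
	 * which is why it typically becomes the mov/sub/mov sequence
	 * quoted above. */
	void dec_write_once(struct sk_buff_head *list)
	{
		*(volatile unsigned int *)&list->qlen = list->qlen - 1;
	}
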
If we agree a READ_ONCE() is needed on the read side,
then a WRITE_ONCE() is needed on the write sides as well.
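
Just so we are looking at the same thing, the pairing I mean is roughly
this (a simplified sketch, not the exact hunks of the patch;
__skb_queue_len_dec is a made-up name for the write side):

	/* Read side, possibly called without the queue lock:
	 * read qlen exactly once, no load-tearing. */
	static inline __u32 skb_queue_len(const struct sk_buff_head *list)
	{
		return READ_ONCE(list->qlen);
	}

	/* Write side, still under the queue lock: pair the lockless
	 * reader with WRITE_ONCE() so the store cannot be torn either.
	 * (Illustrative helper name only.) */
	static inline void __skb_queue_len_dec(struct sk_buff_head *list)
	{
		WRITE_ONCE(list->qlen, list->qlen - 1);
	}
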
If we believe load-tearing and/or write-tearing must never happen,
then we must document this.