Date: Thu, 11 Nov 2021 08:01:26 -0800
From: Tadeusz Struk <tadeusz.struk@...aro.org>
To: Marco Elver <elver@...gle.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Nathan Chancellor <nathan@...nel.org>,
Nick Desaulniers <ndesaulniers@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Jonathan Lemon <jonathan.lemon@...il.com>,
Alexander Lobakin <alobakin@...me>,
Willem de Bruijn <willemb@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
Cong Wang <cong.wang@...edance.com>,
Kevin Hao <haokexin@...il.com>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
llvm@...ts.linux.dev, Kees Cook <keescook@...omium.org>,
Eric Dumazet <edumazet@...gle.com>
Subject: Re: [PATCH] skbuff: suppress clang object-size-mismatch error
On 11/11/21 07:52, Marco Elver wrote:
>> The other way to fix it would be to make the struct sk_buff_head
>> equal in size with struct sk_buff:
>>
>> struct sk_buff_head {
>> -	/* These two members must be first. */
>> -	struct sk_buff	*next;
>> -	struct sk_buff	*prev;
>> +	union {
>> +		struct {
>> +			/* These two members must be first. */
>> +			struct sk_buff	*next;
>> +			struct sk_buff	*prev;
>>
>> -	__u32		qlen;
>> -	spinlock_t	lock;
>> +			__u32		qlen;
>> +			spinlock_t	lock;
>> +		};
>> +		struct sk_buff	__prv;
>> +	};
>> };
>>
>> but that's much more invasive, and I don't even have the means
>> to quantify it in terms of final binary size and performance
>> impact. I think that would be a flat-out no-go.
>>
>> On the other hand, if you look at the __skb_queue functions,
>> they don't do much at all, so there is not much room for
>> other issues really. I followed the suggestion in [1]:
>>
>> "if your function deliberately contains possible ..., you can
>> use __attribute__((no_sanitize... "
> That general advice might not be compatible with what the kernel
> wants, especially since UBSAN_OBJECT_SIZE is normally disabled and I
> think known to cause these issues in the kernel.
>
> I'll defer to maintainers to decide what would be the preferred way of
> handling this.
Sure, I would also like to know if there is a better way of fixing this.
Thanks for your feedback.
--
Thanks,
Tadeusz