Message-ID: <e105258b-cef0-279b-a66c-e133be82a0dd@gmail.com>
Date: Fri, 5 Nov 2021 11:16:32 +0200
From: Leonard Crestez <cdleonard@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>,
Dmitry Safonov <dima@...sta.com>, linux-kernel@...r.kernel.org
Cc: Dmitry Safonov <0x7f454c46@...il.com>,
Andy Lutomirski <luto@...capital.net>,
David Ahern <dsahern@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Francesco Ruggeri <fruggeri@...sta.com>,
Jakub Kicinski <kuba@...nel.org>,
Herbert Xu <herbert@...dor.apana.org.au>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
linux-crypto@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH 1/5] tcp/md5: Don't BUG_ON() failed kmemdup()
On 11/5/21 4:55 AM, Eric Dumazet wrote:
>
>
> On 11/4/21 6:49 PM, Dmitry Safonov wrote:
>> static_branch_unlikely(&tcp_md5_needed) is enabled by
>> tcp_alloc_md5sig_pool(), so as long as the code doesn't change,
>> tcp_md5sig_pool has already been populated whenever this code is
>> executed.
>>
>> In case the tcptw->tw_md5_key allocation failed, there is no reason to
>> crash the kernel: tcp_{v4,v6}_send_ack() will send an unsigned segment
>> and the connection won't be established, which is bad enough, but in an
>> OOM situation this is totally acceptable and better than a kernel crash.
>>
>> Introduce tcp_md5sig_pool_ready() helper.
>> Calling tcp_alloc_md5sig_pool() is intentionally avoided here: this is
>> a fast path, and the check is a sanity check rather than the point of
>> actual pool allocation. That will allow a generic slow-path allocator
>> for the tcp crypto pool.
>>
>> Signed-off-by: Dmitry Safonov <dima@...sta.com>
>> ---
>> include/net/tcp.h | 1 +
>> net/ipv4/tcp.c | 5 +++++
>> net/ipv4/tcp_minisocks.c | 5 +++--
>> 3 files changed, 9 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/net/tcp.h b/include/net/tcp.h
>> index 4da22b41bde6..3e5423a10a74 100644
>> --- a/include/net/tcp.h
>> +++ b/include/net/tcp.h
>> @@ -1672,6 +1672,7 @@ tcp_md5_do_lookup(const struct sock *sk, int l3index,
>> #endif
>>
>> bool tcp_alloc_md5sig_pool(void);
>> +bool tcp_md5sig_pool_ready(void);
>>
>> struct tcp_md5sig_pool *tcp_get_md5sig_pool(void);
>> static inline void tcp_put_md5sig_pool(void)
>> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
>> index b7796b4cf0a0..c0856a6af9f5 100644
>> --- a/net/ipv4/tcp.c
>> +++ b/net/ipv4/tcp.c
>> @@ -4314,6 +4314,11 @@ bool tcp_alloc_md5sig_pool(void)
>> }
>> EXPORT_SYMBOL(tcp_alloc_md5sig_pool);
>>
>> +bool tcp_md5sig_pool_ready(void)
>> +{
>> + return tcp_md5sig_pool_populated;
>> +}
>> +EXPORT_SYMBOL(tcp_md5sig_pool_ready);
>>
>> /**
>> * tcp_get_md5sig_pool - get md5sig_pool for this user
>> diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
>> index cf913a66df17..c99cdb529902 100644
>> --- a/net/ipv4/tcp_minisocks.c
>> +++ b/net/ipv4/tcp_minisocks.c
>> @@ -293,11 +293,12 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
>> tcptw->tw_md5_key = NULL;
>> if (static_branch_unlikely(&tcp_md5_needed)) {
>> struct tcp_md5sig_key *key;
>> + bool err = WARN_ON(!tcp_md5sig_pool_ready());
>>
>> key = tp->af_specific->md5_lookup(sk, sk);
>> - if (key) {
>> + if (key && !err) {
>> tcptw->tw_md5_key = kmemdup(key, sizeof(*key), GFP_ATOMIC);
>> - BUG_ON(tcptw->tw_md5_key && !tcp_alloc_md5sig_pool());
>> + WARN_ON_ONCE(tcptw->tw_md5_key == NULL);
>> }
>> }
>> } while (0);
>>
>
> Hmmm.... how could this BUG_ON() trigger, exactly?
>
> tcp_md5_needed can only be enabled after __tcp_alloc_md5sig_pool has succeeded.
The tcp_alloc_md5sig_pool() call here should just be removed; it no
longer does anything, since md5 pool reference counting and freeing were
removed back in 2013 by commit 71cea17ed39f ("tcp: md5: remove spinlock
usage in fast path").