Message-Id: <20190430.105306.1978317247998825768.davem@davemloft.net>
Date: Tue, 30 Apr 2019 10:53:06 -0400 (EDT)
From: David Miller <davem@...emloft.net>
To: dhowells@...hat.com
Cc: netdev@...r.kernel.org, linux-afs@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH net] rxrpc: Fix net namespace cleanup
From: David Howells <dhowells@...hat.com>
Date: Tue, 30 Apr 2019 08:34:08 +0100
> In rxrpc_destroy_all_calls(), there are two phases: (1) make sure the
> ->calls list is empty, emitting error messages if not, and (2) wait for the
> RCU cleanup to happen on outstanding calls (i.e. ->nr_calls becomes 0).
>
> To avoid taking the call_lock unnecessarily, the function prechecks ->calls
> and, if the list is empty, returns immediately.  This is wrong, however: it
> still needs to go on to the second phase and wait for ->nr_calls to reach 0.
>
> Without this, the rxrpc_net struct may get deallocated before we get to the
> RCU cleanup for the last calls. This can lead to:
>
> Slab corruption (Not tainted): kmalloc-16k start=ffff88802b178000, len=16384
> 050: 6b 6b 6b 6b 6b 6b 6b 6b 61 6b 6b 6b 6b 6b 6b 6b kkkkkkkkakkkkkkk
>
> Note the "61" at offset 0x58. This corresponds to the ->nr_calls member of
> struct rxrpc_net (which is >9k in size, and thus allocated out of the 16k
> slab).
>
> Fix this by flipping the condition on the if-statement, putting the locked
> section inside the if-body and dropping the return from there. The
> function will then always go on to wait for the RCU cleanup on outstanding
> calls.
>
> Fixes: 2baec2c3f854 ("rxrpc: Support network namespacing")
> Signed-off-by: David Howells <dhowells@...hat.com>
Applied and queued up for -stable, thanks.
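
For readers following along, here is a minimal sketch of the control-flow
change described in the quoted patch text above.  It is not the actual
kernel diff: the names (rxnet->calls, rxnet->call_lock, rxnet->nr_calls,
wait_var_event()) follow the description, while the elided phase-1 body and
the exact wait primitive are assumptions.

	void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet)
	{
		/*
		 * Previously:  if (list_empty(&rxnet->calls)) return;
		 * which skipped phase 2 below entirely.
		 */
		if (!list_empty(&rxnet->calls)) {
			write_lock(&rxnet->call_lock);
			/* phase 1: drain ->calls, warning about stragglers */
			write_unlock(&rxnet->call_lock);
		}

		/* phase 2: always wait for RCU cleanup of outstanding calls */
		wait_var_event(&rxnet->nr_calls, !atomic_read(&rxnet->nr_calls));
	}

With the wait unconditionally at the end, the rxrpc_net struct cannot be
freed while RCU callbacks for the last calls are still pending, which is
what produced the slab corruption shown above.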