Message-ID: <748e4541-7608-ae51-bdd1-1356deedb105@sony.com>
Date: Tue, 25 Aug 2020 20:51:14 +0200
From: peter enderborg <peter.enderborg@...y.com>
To: Ondrej Mosnacek <omosnace@...hat.com>, <selinux@...r.kernel.org>,
Paul Moore <paul@...l-moore.com>
CC: Stephen Smalley <stephen.smalley.work@...il.com>,
Lakshmi Ramasubramanian <nramas@...ux.microsoft.com>,
<rcu@...r.kernel.org>, "Paul E . McKenney" <paulmck@...nel.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 3/3] selinux: track policy lifetime with refcount
On 8/25/20 5:20 PM, Ondrej Mosnacek wrote:
> Instead of holding the RCU read lock the whole time while accessing the
> policy, add a simple refcount mechanism to track its lifetime. After
> this, the RCU read lock is held only for a brief time when fetching the
> policy pointer and incrementing the refcount. The policy struct is then
> guaranteed to stay alive until the refcount is decremented.
>
> Freeing of the policy remains the responsibility of the task that does
> the policy reload. In case the refcount drops to zero in a different
> task, the policy load task is notified via a completion.
>
> The advantage of this change is that the operations that access the
> policy can now do sleeping allocations, since they don't need to hold
> the RCU read lock anymore. This patch so far only leverages this in
> security_read_policy() for the vmalloc_user() allocation (although this
> function is always called under fsi->mutex and could just access the
> policy pointer directly). The conversion of affected GFP_ATOMIC
> allocations to GFP_KERNEL is left for a later patch, since auditing
> which code paths may still need GFP_ATOMIC is not very easy.
>
> Signed-off-by: Ondrej Mosnacek <omosnace@...hat.com>
>
Very clever. But is it the right prioritization? We get a lot more
CPU synchronization overhead with two RCU enter/exit pairs plus a
refcount inc/dec instead of only one RCU enter/exit. What is the
problem with the atomic allocations? And this is for each syscall,
with all the caches on the inside?
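
For reference, the per-access pattern I'm thinking of looks roughly like
this (just a sketch of what the commit message describes, not the actual
patch; the struct and function names are made up):

#include <linux/rcupdate.h>
#include <linux/refcount.h>
#include <linux/completion.h>

/* Hypothetical policy struct -- only the lifetime-related fields. */
struct policy_example {
	refcount_t users;
	struct completion *free_done;	/* completed for the reload task */
	/* ... policydb, sidtab, ... */
};

/* Reader side: RCU is held only long enough to take a reference. */
static struct policy_example *policy_example_get(struct policy_example __rcu **slot)
{
	struct policy_example *p;

	rcu_read_lock();
	p = rcu_dereference(*slot);
	if (p && !refcount_inc_not_zero(&p->users))
		p = NULL;		/* already being torn down */
	rcu_read_unlock();

	return p;	/* caller may now sleep while using the policy */
}

static void policy_example_put(struct policy_example *p)
{
	/*
	 * If the last reference is dropped outside the reload task,
	 * wake the reload task so it can free the policy.
	 */
	if (refcount_dec_and_test(&p->users))
		complete(p->free_done);
}

So every policy access now pays for the rcu_read_lock/unlock pair plus
two atomic ops on a shared cacheline, instead of just staying inside
one RCU read-side section.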