Message-ID: <CAHC9VhQHpBjp-gYTqSg9kDj+ZTQES2EfyeQYgpfW7gaMZxj4eg@mail.gmail.com>
Date: Wed, 8 Jan 2025 16:17:47 -0500
From: Paul Moore <paul@...l-moore.com>
To: Kees Cook <kees@...nel.org>
Cc: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
syzbot+4eb7a741b3216020043a@...kaller.appspotmail.com, jmorris@...ei.org,
linux-kernel@...r.kernel.org, linux-security-module@...r.kernel.org,
serge@...lyn.com, syzkaller-bugs@...glegroups.com,
Leo Stone <leocstone@...il.com>, mortonm@...omium.org
Subject: Re: [PATCH v2] lsm: check size of writes

On Mon, Jan 6, 2025 at 7:09 PM Kees Cook <kees@...nel.org> wrote:
> On Sat, Jan 04, 2025 at 11:04:09PM -0500, Paul Moore wrote:
> > On Mon, Dec 23, 2024 at 12:33 AM Kees Cook <kees@...nel.org> wrote:
> > >
> > > If the LSM core did a kmem_buckets_create() for each LSM, and the LSMs
> > > were adjusted to explicitly allocate from their own bucket set, that
> > > would be one way. Or just for the LSM as a whole (1 set of buckets
> > > instead of a set for each LSM). I'd be happy to review patches for
> > > either idea.
> >
> > If we're doing the work to shift over to kmem_buckets, it seems like
> > creating per-LSM buckets is the better option unless I'm missing
> > something.
> >
> > I'm also not sure why the LSM framework would need to call
> > kmem_buckets_create() on behalf of the individual LSMs, can someone
> > help me understand why the individual LSMs couldn't do it in their
> > init routines?
>
> When we moved stuff around for stacking, we moved other allocation
> duties into the "core" of the LSM, so it just seemed reasonable to me to
> do it again if this happened.

Moving the allocation of the LSM state blobs for core kernel entities,
e.g. inode->i_security, was a bit different, as it was necessary to
support multiple LSMs. How could we support multiple simultaneous
LSMs if each of them wanted to control the inode->i_security field?
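
(For background: with stacking, each LSM declares the size of its
per-object state via struct lsm_blob_sizes, the framework allocates
one shared blob and hands each LSM an offset into it, and the LSM
recovers its state from that offset. Roughly like the following,
paraphrased from the current SELinux sources rather than quoted
verbatim:

	struct lsm_blob_sizes selinux_blob_sizes __ro_after_init = {
		.lbs_inode = sizeof(struct inode_security_struct),
		/* ... sizes for creds, files, etc. ... */
	};

	static inline struct inode_security_struct *
	selinux_inode(const struct inode *inode)
	{
		/* this LSM's slice of the shared i_security blob */
		return inode->i_security + selinux_blob_sizes.lbs_inode;
	}

The sharing is the whole point; no single LSM can own i_security
anymore.)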
Moving arbitrary LSM allocations out of the individual LSMs and into
the LSM framework itself is definitely *not* something we want to do.
We really want to keep the LSM framework as small as possible.
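
If an LSM wanted its own bucket set, I would expect it to be about as
simple as the untested sketch below, run from the LSM's own init
routine (the "example_lsm" names are made up for illustration):

	static kmem_buckets *example_lsm_buckets __ro_after_init;

	static int __init example_lsm_init(void)
	{
		/* one dedicated set of kmalloc buckets for this LSM's
		 * variable-size allocations */
		example_lsm_buckets = kmem_buckets_create("example_lsm",
							  0, 0, 0, NULL);
		if (!example_lsm_buckets)
			return -ENOMEM;
		return 0;
	}

	/* at allocation time, instead of kmalloc(len, GFP_KERNEL): */
	ptr = kmem_buckets_alloc(example_lsm_buckets, len, GFP_KERNEL);
	/* objects allocated this way are still freed with kfree() */
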
> > If it is necessary for the LSM framework to create the buckets and
> > hand them back to the individual LSMs, I would suggest adding a new
> > flag to the lsm_info->flags field that a LSM could set to request a
> > kmem_bucket, and then add a new field to lsm_info that the LSM
> > framework could use to return the bucket to the LSM. LSMs could
> > opt-in to kmem_buckets when they found the time to convert.
>
> Yeah, agreed. Since allocations would need to swap kmalloc() for
> kmem_buckets_alloc(), we could also create something like lsm_alloc() and
> hide everything from the individual LSMs -- the core would handle
> allocating and using the buckets handle, etc.
>
> Does anyone want to make a series for this? I am not planning to -- I'm
> focused on the per-site implementation.
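
Just to make sure we're picturing the same thing, I assume lsm_alloc()
would be a thin wrapper along these lines (untested, and lsm_alloc(),
the lsm_id parameter, and the bucket table are all hypothetical names,
not existing kernel symbols):

	/* one bucket set per registered LSM, created by the framework
	 * at init time for LSMs that opt in via a new lsm_info flag */
	static kmem_buckets *lsm_buckets[MAX_LSM_COUNT] __ro_after_init;

	void *lsm_alloc(int lsm_id, size_t size, gfp_t flags)
	{
		/* LSMs that did not opt in keep plain kmalloc() behavior */
		if (!lsm_buckets[lsm_id])
			return kmalloc(size, flags);
		return kmem_buckets_alloc(lsm_buckets[lsm_id], size, flags);
	}
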
I personally have enough other things to work on, and review, right
now that I don't see myself having any time to work on this in the
near or mid term, especially as a preferable solution is in the works.

Putting on my SELinux maintainer hat for a moment, as long as there is
some belief that we'll have a per-site implementation, I'm not sure I
would merge a per-LSM bucket patchset right now, as it would be work
that we would need to undo once the per-site support was upstream ...
I would need to think about that. Of course other LSM maintainers
might feel differently about their respective LSMs.

> > > I think per-site buckets is going to be the most effective long-term:
> > > https://lore.kernel.org/lkml/20240809072532.work.266-kees@kernel.org/
> > >
> > > But that doesn't exclude new kmem_buckets_create() users.
> >
> > Is there an update on the per-site buckets? I agree that would be the
> > preferable solution from a hardening perspective, and if it is on the
> > horizon it may not be worth the effort to convert the LSMs over to an
> > explicit kmem_buckets approach.
>
> I haven't had a chance to refresh the patch series, but the implementation
> still works well. Besides some smaller feedback, I had also wanted to
> make the individual buckets be allocated as needed. That way, if
> something was only doing allocations in, say, the 16 to 128 byte
> range, we wouldn't lose memory tracking the (unused) higher-order
> bucket sizes.
>
> I expect to send out the next revision after the coming merge window.

Thanks for the update.

--
paul-moore.com