Open Source and information security mailing list archives
 
Message-ID: <202501061501.26556F56@keescook>
Date: Mon, 6 Jan 2025 16:09:21 -0800
From: Kees Cook <kees@...nel.org>
To: Paul Moore <paul@...l-moore.com>
Cc: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
	syzbot+4eb7a741b3216020043a@...kaller.appspotmail.com,
	jmorris@...ei.org, linux-kernel@...r.kernel.org,
	linux-security-module@...r.kernel.org, serge@...lyn.com,
	syzkaller-bugs@...glegroups.com, Leo Stone <leocstone@...il.com>,
	mortonm@...omium.org
Subject: Re: [PATCH v2] lsm: check size of writes

On Sat, Jan 04, 2025 at 11:04:09PM -0500, Paul Moore wrote:
> On Mon, Dec 23, 2024 at 12:33 AM Kees Cook <kees@...nel.org> wrote:
> >
> > If the LSM core did a kmem_buckets_create() for each LSM, and the LSMs
> > were adjusted to explicitly allocate from their own bucket set, that
> > would be one way. Or just for the LSM as a whole (1 set of buckets
> > instead of a set for each LSM). I'd be happy to review patches for
> > either idea.
> 
> If we're doing the work to shift over to kmem_buckets, it seems like
> creating per-LSM buckets is the better option unless I'm missing
> something.
> 
> I'm also not sure why the LSM framework would need to call
> kmem_buckets_create() on behalf of the individual LSMs, can someone
> help me understand why the individual LSMs couldn't do it in their
> init routines?

When we moved stuff around for stacking, we moved other allocation
duties into the "core" of the LSM, so it just seemed reasonable to me to
do it again if this happened.

> If it is necessary for the LSM framework to create the buckets and
> hand them back to the individual LSMs, I would suggest adding a new
> flag to the lsm_info->flags field that a LSM could set to request a
> kmem_bucket, and then add a new field to lsm_info that the LSM
> framework could use to return the bucket to the LSM.  LSMs could
> opt-in to kmem_buckets when they found the time to convert.

Yeah, agreed. Since allocations would need to swap kmalloc() for
kmem_buckets_alloc(), we could also create something like lsm_alloc() and
hide everything from the individual LSMs -- the core would handle
creating the buckets and allocating from the bucket handles, etc.
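A rough sketch of what that could look like -- the field and flag names
here (lsm_alloc(), LSM_FLAG_NEED_BUCKETS, the ->buckets member) are
hypothetical, not from any posted patch; only the
kmem_buckets_create()/kmem_buckets_alloc() calls come from the existing
dedicated-buckets API:

```c
/* Hypothetical opt-in: an LSM sets LSM_FLAG_NEED_BUCKETS in its
 * lsm_info->flags, and the framework fills in ->buckets at init. */
struct lsm_info {
	/* ... existing fields ... */
	unsigned long flags;	/* would gain LSM_FLAG_NEED_BUCKETS */
	kmem_buckets *buckets;	/* filled in by the LSM framework */
};

/* Framework side, once per opted-in LSM during early init: */
static void lsm_init_buckets(struct lsm_info *lsm)
{
	if (lsm->flags & LSM_FLAG_NEED_BUCKETS)
		lsm->buckets = kmem_buckets_create(lsm->name, SLAB_ACCOUNT,
						   0, 0, NULL);
}

/* Allocation wrapper so individual LSMs never touch the handle;
 * falls back to plain kmalloc() for LSMs that haven't opted in: */
void *lsm_alloc(struct lsm_info *lsm, size_t size, gfp_t gfp)
{
	if (lsm->buckets)
		return kmem_buckets_alloc(lsm->buckets, size, gfp);
	return kmalloc(size, gfp);
}
```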

Does anyone want to make a series for this? I am not planning to -- I'm
focused on the per-site implementation.

> > I think per-site buckets is going to be the most effective long-term:
> > https://lore.kernel.org/lkml/20240809072532.work.266-kees@kernel.org/
> >
> > But that doesn't exclude new kmem_buckets_create() users.
> 
> Is there an update on the per-site buckets?  I agree that would be the
> preferable solution from a hardening perspective, and if it is on the
> horizon it may not be worth the effort to convert the LSMs over to an
> explicit kmem_buckets approach.

I haven't had a chance to refresh the patch series, but the implementation
still works well. Beyond addressing some smaller feedback, I also wanted
to make the individual buckets get allocated on demand. That way, if
something only ever allocates in, say, the 16 to 128 byte range, we
wouldn't waste memory tracking the (unused) higher-order bucket sizes.
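For illustration, the on-demand idea could look something like the sketch
below -- this is hypothetical (the lazy_buckets structure and helper are
invented names, and the locking/race handling is elided), not code from
the posted series:

```c
/* Hypothetical lazy bucket set: one kmem_cache per power-of-two size
 * class, created only the first time an allocation lands in it. */
struct lazy_buckets {
	struct kmem_cache *caches[16];	/* one slot per size class */
	slab_flags_t flags;
	const char *name;
};

static void *lazy_bucket_alloc(struct lazy_buckets *b, size_t size,
			       gfp_t gfp)
{
	unsigned int idx = ilog2(roundup_pow_of_two(size));
	struct kmem_cache *c = READ_ONCE(b->caches[idx]);

	if (unlikely(!c)) {
		/* First use of this size class: create its cache now.
		 * A real version would cmpxchg and free the loser on a
		 * create/create race; elided here for brevity. */
		c = kmem_cache_create(b->name, 1UL << idx, 0,
				      b->flags, NULL);
		WRITE_ONCE(b->caches[idx], c);
	}
	return kmem_cache_alloc(c, gfp);
}
```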

I expect to send out the next revision after the coming merge window.

-Kees

-- 
Kees Cook
