Message-ID: <e678b85e-efc3-491b-8cfa-72d33d769d25@gmail.com>
Date: Tue, 8 Apr 2025 14:00:03 +0200
From: Attila Szasz <szasza.contact@...il.com>
To: Greg KH <gregkh@...uxfoundation.org>,
 Christian Brauner <brauner@...nel.org>, cve@...nel.org
Cc: Cengiz Can <cengiz.can@...onical.com>,
 Salvatore Bonaccorso <carnil@...ian.org>, linux-fsdevel@...r.kernel.org,
 linux-kernel@...r.kernel.org, lvc-patches@...uxtesting.org,
 dutyrok@...linux.org, syzbot+5f3a973ed3dfb85a6683@...kaller.appspotmail.com,
 stable@...r.kernel.org, Alexander Viro <viro@...iv.linux.org.uk>
Subject: Re: [PATCH] hfs/hfsplus: fix slab-out-of-bounds in hfs_bnode_read_key

I’m not sure what you're doing with CVEs at the moment, but it doesn’t 
matter too much for me personally, though it can be a bit worrisome for 
others. Post-CRA, things will likely change again anyway.

Distros should probably consider guestmounting, as Richard suggested, or 
disabling unprivileged mounting of these filesystems altogether once the 
edge cases are worked out.
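For illustration, a minimal sketch of what either mitigation could look like on a distro (the module names and modprobe.d path follow standard conventions, but the exact policy mechanism is an assumption and varies by distro):

```shell
# Sketch 1: prevent the kernel hfsplus driver from being loaded at all,
# so desktop automounters cannot reach it.  The "install ... /bin/false"
# line also blocks explicit modprobe requests, not just aliases.
cat <<'EOF' | sudo tee /etc/modprobe.d/disable-hfsplus.conf
blacklist hfsplus
install hfsplus /bin/false
EOF

# Sketch 2: inspect an untrusted image in userspace via libguestfs
# instead of a kernel mount; guestmount runs the filesystem driver
# inside a confined appliance, so a malicious image cannot corrupt
# host kernel memory.  (untrusted.img and the mountpoint are
# placeholders.)
guestmount -a untrusted.img -m /dev/sda --ro /mnt/inspect
```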

https://github.com/torvalds/linux/commit/25efb2ffdf991177e740b2f63e92b4ec7d310a92

Looking at other unpatched syzkaller reports and older commits, it seems 
plausible that the HFS+ driver was—and possibly still is—unstable in how 
it manipulates B-trees. One could potentially mount a valid, empty image 
and cause corruption through normal, unprivileged operations, leading to 
a memory corruption primitive that could root the box.

I’m not working on this right now, but syzkaller may well uncover 
issues similar to whatever was /*formerly referred to by*/ 
CVE-2025-0927. People tend to view these things as either abuse or 
contribution, depending on their sentiment toward information security.

Still, it may be worth the distros keeping this in mind and taking 
lessons from it to make explicit whatever implicit threat models the 
involved parties are working with.

Thanks for the fix upstream! It was probably a good call, all things 
considered.

On 4/8/25 10:03, Greg KH wrote:
> On Mon, Apr 07, 2025 at 12:59:18PM +0200, Christian Brauner wrote:
>> On Sun, Apr 06, 2025 at 07:07:57PM +0300, Cengiz Can wrote:
>>> On 24-03-25 11:53:51, Greg KH wrote:
>>>> On Mon, Mar 24, 2025 at 09:43:18PM +0300, Cengiz Can wrote:
>>>>> In the meantime, can we get this fix applied?
>>>> Please work with the filesystem maintainers to do so.
>>> Hello Christian, hello Alexander
>>>
>>> Can you help us with this?
>>>
>>> Thanks in advance!
>> Filesystem bugs due to corrupt images are not considered a CVE for any
>> filesystem that is only mountable by CAP_SYS_ADMIN in the initial user
>> namespace. That includes delegated mounting.
> Thank you for the concise summary of this.  We (i.e. the kernel CVE
> team) will try to not assign CVEs going forward that can only be
> triggered in this way.
>
>> The blogpost is aware that the VFS maintainers don't accept CVEs like
>> this. Yet a CVE was still filed against the upstream kernel. IOW,
>> someone abused the fact that a distro chose to allow mounting arbitrary
>> filesystems including orphaned ones by unprivileged user as an argument
>> to gain a kernel CVE.
> Yes, Canonical abused their role as a CNA and created this CVE without
> going through the proper processes.  kernel.org is now in charge of this
> CVE, and:
>
>> Revoke that CVE against the upstream kernel. This is a CVE against a
>> distro. There's zero reason for us to hurry with any fix.
> I will go reject this now.
>
> Note, there might be some older CVEs that we have accidentally assigned
> that can only be triggered by hand-crafted filesystem images.  If anyone
> wants to dig through the 5000+ different ones we have, we will be glad
> to reject them as well.
>
> thanks,
>
> greg k-h
