Message-ID: <04f2a791-6b44-4743-b074-bda537cbc8e4@lucifer.local>
Date: Wed, 16 Oct 2024 12:16:10 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Bert Karwatzki <spasswolf@....de>
Cc: "Liam R . Howlett" <Liam.Howlett@...cle.com>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 14/21] mm/mmap: Avoid zeroing vma tree in mmap_region()

On Wed, Oct 16, 2024 at 12:28:51PM +0200, Bert Karwatzki wrote:
> Am Montag, dem 14.10.2024 um 10:46 +0100 schrieb Lorenzo Stoakes:
> > On Mon, Oct 14, 2024 at 12:35:59AM +0200, Bert Karwatzki wrote:
> > > I created a program which can trigger the bug on newer kernel (after the
> > > "Avoid zeroing vma tree in mmap_region()" patch and before the fix).
> > > My original goal was to trigger the bug on older kernels,
> > > but that does not work, yet.
> > >
> > > Bert Karwatzki
> >
> > Thanks, that's great!
> >
> > For older kernels the problem should still be present, the fundamental
> > thing that changed from the point of view of this bug is that merge won't
> > contribute to the number of VMAs being overwritten at once.
> >
> > To trigger prior to commit f8d112a4e657 ("mm/mmap: avoid zeroing vma tree
> > in mmap_region()") you would need to create a situation where the _clear_
> > triggers the bug, i.e. you must constitute all the VMAs that are being
> > overwritten by the store from existing VMAs you are overwriting with a
> > MAP_FIXED mapping.
> >
> > So some tweaks should get you there...
> > >
>
> I don't think triggering the bug on a clear works, because a write of a %NULL
> that will cause a node to end with a %NULL becomes a spanning write into the
> next node:
>
> /*
>  * mas_is_span_wr() - Check if the write needs to be treated as a write that
>  * spans the node.
>  * @mas: The maple state
>  * @piv: The pivot value being written
>  * @type: The maple node type
>  * @entry: The data to write
>  *
>  * Spanning writes are writes that start in one node and end in another OR if
>  * the write of a %NULL will cause the node to end with a %NULL.
>  *
>  * Return: True if this is a spanning write, false otherwise.
>  */
> static bool mas_is_span_wr(struct ma_wr_state *wr_mas)
> {
>
>
> I think the bug would trigger in this situation:
>
>               Node_0
>              /
>             /
>         Node_1
>         /    \
>        /      \
>    Node_2    Node_3
>
> but only if Node_3 contained only two ranges, one empty range and one normal
> range, and if the mmap into the empty range of Node_3 would merge with the last
> range of Node_2 and the last range of Node_3. But I think the rebalancing of the
> tree will make it very hard, if not impossible, to create such a node.
>
>
> Bert Karwatzki

Hm, well, that would explain why we couldn't hit it so easily in the past,
and is a good thing... :)

Still, the bug is a bug even if it is hard to hit from mm code, and it should
get backported (not that you're suggesting otherwise, just to emphasise :)
