Message-ID: <fcb2fbc2-a26e-486f-b6e4-4574774f476f@roeck-us.net>
Date: Fri, 28 Jul 2023 09:36:37 -0700
From: Guenter Roeck <linux@...ck-us.net>
To: Rik van Riel <riel@...riel.com>
Cc: Mike Rapoport <rppt@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...a.com
Subject: Re: [PATCH] mm,memblock: reset memblock.reserved to system init
state to prevent UAF
On Fri, Jul 28, 2023 at 09:09:09AM -0700, Guenter Roeck wrote:
> Hi,
>
> On Wed, Jul 19, 2023 at 03:41:37PM -0400, Rik van Riel wrote:
> > The memblock_discard function frees the memblock.reserved.regions
> > array, which is good.
> >
> > However, if a subsequent memblock_free (or memblock_phys_free) call
> > comes in later, from, for example, ima_free_kexec_buffer, it will
> > result in a use-after-free bug in memblock_isolate_range.
> >
> > When running a kernel with CONFIG_KASAN enabled, this will cause a
> > kernel panic very early in boot. Without CONFIG_KASAN, there is
> > a chance that memblock_isolate_range might scribble on memory
> > that is now in use by somebody else.
> >
> > Avoid those issues by making sure that memblock_discard points
> > memblock.reserved.regions back at the static buffer.
> >
> > If memblock_discard is called while there is still memory
> > in the memblock.reserved type, memblock_remove_region will
> > print a warning.
> >
> > Signed-off-by: Rik van Riel <riel@...riel.com>
>
> This patch results in the following WARNING backtrace when booting sparc
> or sparc64 images in qemu. Bisect log is attached.
>
Follow-up: On sparc64, this patch also results in the following backtrace.
[ 2.931808] BUG: scheduling while atomic: swapper/0/1/0x00000002
[ 2.932865] no locks held by swapper/0/1.
[ 2.933722] Modules linked in:
[ 2.934627] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 6.5.0-rc3+ #1
[ 2.935604] Call Trace:
[ 2.936315] [<00000000004a0610>] __schedule_bug+0x70/0x80
[ 2.937174] [<0000000000f68f50>] switch_to_pc+0x598/0x8e8
[ 2.937999] [<0000000000f69300>] schedule+0x60/0xe0
[ 2.938811] [<0000000000f72d2c>] schedule_timeout+0x10c/0x1c0
[ 2.939668] [<0000000000f69be0>] __wait_for_common+0xa0/0x1a0
[ 2.940510] [<0000000000f69d98>] wait_for_completion_killable+0x18/0x40
[ 2.941402] [<0000000000494dec>] __kthread_create_on_node+0xac/0x120
[ 2.942259] [<0000000000494e80>] kthread_create_on_node+0x20/0x40
[ 2.943023] [<0000000001b81348>] devtmpfs_init+0xb4/0x140
[ 2.943777] [<0000000001b81068>] driver_init+0x10/0x60
[ 2.944528] [<0000000001b56e4c>] kernel_init_freeable+0xd4/0x228
[ 2.945300] [<0000000000f67404>] kernel_init+0x18/0x134
[ 2.946026] [<00000000004060e8>] ret_from_fork+0x1c/0x2c
[ 2.946757] [<0000000000000000>] 0x0
[ 2.959537] devtmpfs: initialized
While it seemed unlikely that this patch could cause the above (and I
don't claim to understand how it does), I ran a separate bisect and
confirmed that both tracebacks are gone after reverting this patch.
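
For anyone following along without the patch in front of them, here is
a minimal userspace model of the reset described in the quoted
changelog. All names in it (struct mb_type, mb_discard, and so on) are
invented for illustration; this is a sketch of the idea, not the actual
mm/memblock.c code.

/*
 * Minimal userspace model of the reset described above; names are
 * invented for illustration, this is not the actual mm/memblock.c code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct region { unsigned long base, size; };

#define INIT_REGIONS 4
static struct region init_regions[INIT_REGIONS];

struct mb_type {
	struct region *regions;		/* initially the static array */
	unsigned long cnt, max;
};

static struct mb_type reserved = {
	.regions = init_regions,
	.max = INIT_REGIONS,
};

/* Grow into a heap-allocated array, as memblock does during boot. */
static void mb_grow(struct mb_type *type)
{
	struct region *new = calloc(type->max * 2, sizeof(*new));

	memcpy(new, type->regions, type->cnt * sizeof(*new));
	if (type->regions != init_regions)
		free(type->regions);
	type->regions = new;
	type->max *= 2;
}

/* Discard the dynamic array; the fix is the reset at the end. */
static void mb_discard(struct mb_type *type)
{
	if (type->regions == init_regions)
		return;
	free(type->regions);
	/*
	 * Without these three lines, a later mb_free() would walk the
	 * freed array -- the use-after-free the changelog describes.
	 */
	type->regions = init_regions;
	type->cnt = 0;
	type->max = INIT_REGIONS;
}

/* A late free, standing in for e.g. ima_free_kexec_buffer(). */
static void mb_free(struct mb_type *type, unsigned long base)
{
	unsigned long i;

	for (i = 0; i < type->cnt; i++)	/* walks init_regions again */
		if (type->regions[i].base == base)
			type->regions[i].size = 0;
}

int main(void)
{
	mb_grow(&reserved);
	mb_discard(&reserved);
	mb_free(&reserved, 0);	/* no UAF: regions points at the static array */
	printf("done\n");
	return 0;
}

The only point of the model is that resetting the regions pointer back
to the static array makes a late free walk valid memory again instead
of the freed dynamic array.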
Guenter