Message-ID: <ZgFNVtp3EsJRaSN0@MiWiFi-R3L-srv>
Date: Mon, 25 Mar 2024 18:09:26 +0800
From: Baoquan He <bhe@...hat.com>
To: Heiko Carstens <hca@...ux.ibm.com>
Cc: Christoph Hellwig <hch@...radead.org>,
"Uladzislau Rezki (Sony)" <urezki@...il.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Lorenzo Stoakes <lstoakes@...il.com>,
Matthew Wilcox <willy@...radead.org>,
Dave Chinner <david@...morbit.com>,
Guenter Roeck <linux@...ck-us.net>,
Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>,
Vasily Gorbik <gor@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>, linux-s390@...r.kernel.org
Subject: Re: [PATCH 1/1] mm: vmalloc: Bail out early in find_vmap_area() if
vmap is not init
On 03/25/24 at 10:39am, Heiko Carstens wrote:
> On Sun, Mar 24, 2024 at 04:32:00PM -0700, Christoph Hellwig wrote:
> > On Sat, Mar 23, 2024 at 03:15:44PM +0100, Uladzislau Rezki (Sony) wrote:
.....snip
> > I guess this is ok as an urgent band-aid to get s390 booting again,
> > but calling find_vmap_area() before the vmap area is initialized
> > seems like an actual issue in the s390 mm init code.
> >
> > Adding the s390 maintainers to see if they have an idea how this could
> > be fixed in a better way.
>
> I'm going to push the patch below to the s390 git tree later. This is not a
> piece of art, but I wanted to avoid externalizing vmalloc's vmap_initialized,
> or coming up with some s390 specific change_page_attr_alias_early() variant
> where sooner or later nobody remembers what "early" means.
>
> So this seems to be "good enough".
>
> From 0308cd304fa3b01904c6060e2115234101811e48 Mon Sep 17 00:00:00 2001
> From: Heiko Carstens <hca@...ux.ibm.com>
> Date: Thu, 21 Mar 2024 09:41:20 +0100
> Subject: [PATCH] s390/mm,pageattr: avoid early calls into vmalloc code
>
> The vmalloc code got changed and doesn't have the global statically
> initialized vmap_area_lock spinlock anymore. This leads to the following
> lockdep splat when find_vm_area() is called before the vmalloc code is
> initialized:
>
> BUG: spinlock bad magic on CPU#0, swapper/0
> lock: single+0x1868/0x1978, .magic: 00000000, .owner: swapper/0, .owner_cpu: 0
>
> CPU: 0 PID: 0 Comm: swapper Not tainted 6.8.0-11767-g23956900041d #1
> Hardware name: IBM 3931 A01 701 (KVM/Linux)
> Call Trace:
> [<00000000010d840a>] dump_stack_lvl+0xba/0x148
> [<00000000001fdf5c>] do_raw_spin_unlock+0x7c/0xd0
> [<000000000111d848>] _raw_spin_unlock+0x38/0x68
> [<0000000000485830>] find_vmap_area+0xb0/0x108
> [<0000000000485ada>] find_vm_area+0x22/0x40
> [<0000000000132bbc>] __set_memory+0xbc/0x140
> [<0000000001a7f048>] vmem_map_init+0x40/0x158
> [<0000000001a7edc8>] paging_init+0x28/0x80
> [<0000000001a7a6e2>] setup_arch+0x4b2/0x6d8
> [<0000000001a74438>] start_kernel+0x98/0x4b0
> [<0000000000100036>] startup_continue+0x36/0x40
> INFO: lockdep is turned off.
>
> Add a slab_is_available() check to change_page_attr_alias() in order to
> avoid early calls into vmalloc code. slab_is_available() is not exactly
> what is needed, but there is currently no other way to tell if the vmalloc
> code is initialized or not, and there is no reason to expose
> e.g. vmap_initialized from vmalloc to achieve the same.
If so, I would rather add a vmalloc_is_available() helper to achieve the
same. The added code and its comment will definitely confuse people and
make them dig into why it is done this way.
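
Only as a rough, untested sketch of the idea (naming and placement are
just for illustration), such a helper could simply expose the existing
vmap_initialized flag from mm/vmalloc.c:

/* mm/vmalloc.c */
bool vmalloc_is_available(void)
{
	/* vmap_initialized is set once vmalloc_init() has finished. */
	return vmap_initialized;
}

/* include/linux/vmalloc.h */
bool vmalloc_is_available(void);

Then change_page_attr_alias() could check !vmalloc_is_available() instead
of !slab_is_available(), which would state the real dependency.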
>
> The fixes tag does not mean that the referenced commit is broken, but that
> there is a dependency on this commit if the vmalloc commit should be
> backported.
>
> Fixes: d093602919ad ("mm: vmalloc: remove global vmap_area_root rb-tree")
> Signed-off-by: Heiko Carstens <hca@...ux.ibm.com>
> ---
> arch/s390/mm/pageattr.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c
> index 01bc8fad64d6..b6c6453d66e2 100644
> --- a/arch/s390/mm/pageattr.c
> +++ b/arch/s390/mm/pageattr.c
> @@ -344,6 +344,9 @@ static int change_page_attr_alias(unsigned long addr, unsigned long end,
> struct vm_struct *area;
> int rc = 0;
>
> + /* Avoid early calls into not initialized vmalloc code. */
> + if (!slab_is_available())
> + return 0;
> /*
> * Changes to read-only permissions on kernel VA mappings are also
> * applied to the kernel direct mapping. Execute permissions are
> --
> 2.40.1
>