Message-ID: <YqGC5NxDm1WyOHcw@carbon>
Date: Wed, 8 Jun 2022 22:19:32 -0700
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Kefeng Wang <wangkefeng.wang@...wei.com>
Cc: Vasily Averin <vvs@...nvz.org>,
Naresh Kamboju <naresh.kamboju@...aro.org>,
Shakeel Butt <shakeelb@...gle.com>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Stephen Rothwell <sfr@...b.auug.org.au>,
Linux-Next Mailing List <linux-next@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>,
regressions@...ts.linux.dev, lkft-triage@...ts.linaro.org,
linux-mm <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Ard Biesheuvel <ardb@...nel.org>,
Arnd Bergmann <arnd@...db.de>,
Catalin Marinas <catalin.marinas@....com>,
Raghuram Thammiraju <raghuram.thammiraju@....com>,
Mark Brown <broonie@...nel.org>, Will Deacon <will@...nel.org>,
Qian Cai <quic_qiancai@...cinc.com>
Subject: Re: [next] arm64: boot failed - next-20220606
On Thu, Jun 09, 2022 at 12:43:00PM +0800, Kefeng Wang wrote:
>
> On 2022/6/9 11:44, Kefeng Wang wrote:
> >
> > On 2022/6/9 10:49, Vasily Averin wrote:
> > > Dear ARM developers,
> > > could you please help me to find the reason for this problem?
> > Hi,
> > > mem_cgroup_from_obj():
> > > ffff80000836cf40: d503245f bti c
> > > ffff80000836cf44: d503201f nop
> > > ffff80000836cf48: d503201f nop
> > > ffff80000836cf4c: d503233f paciasp
> > > ffff80000836cf50: d503201f nop
> > > ffff80000836cf54: d2e00021 mov x1, #0x1000000000000 // #281474976710656
> > > ffff80000836cf58: 8b010001 add x1, x0, x1
> > > ffff80000836cf5c: b25657e4 mov x4, #0xfffffc0000000000 // #-4398046511104
> > > ffff80000836cf60: d34cfc21 lsr x1, x1, #12
> > > ffff80000836cf64: d37ae421 lsl x1, x1, #6
> > > ffff80000836cf68: 8b040022 add x2, x1, x4
> > > ffff80000836cf6c: f9400443 ldr x3, [x2, #8]
> > >
> > > x5 : ffff80000a96f000 x4 : fffffc0000000000 x3 : ffff80000ad5e680
> > > x2 : fffffe00002bc240 x1 : 00000200002bc240 x0 : ffff80000af09740
> > >
> > > x0 = 0xffff80000af09740 is the argument of mem_cgroup_from_obj();
> > > according to System.map it is init_net.
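
If I read the disassembly right, this is just the inlined
virt_to_page()/compound_head() arithmetic. Roughly, as a sketch
(PAGE_OFFSET, VMEMMAP_START and the 64-byte struct page size are
inferred from the immediates and the register dump, not from the
actual .config):

  unsigned long va  = (unsigned long)p;                   /* x0 = &init_net */
  unsigned long pfn = (va - PAGE_OFFSET) >> PAGE_SHIFT;   /* mov #1<<48 (== -PAGE_OFFSET); add; lsr #12 */
  struct page *page = (struct page *)VMEMMAP_START + pfn; /* lsl #6; add x4 */
  /* ldr x3, [x2, #8] then reads page->compound_head for virt_to_folio() */

Plugging x0 = ffff80000af09740 into this gives x1 = 00000200002bc240
and x2 = fffffe00002bc240, exactly the values in the dump. Because
&init_net is a kernel-image address rather than a linear-map address,
the computed pfn is bogus and the vmemmap entry it points to is most
likely not mapped, hence the fault on the compound_head load.
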
> > >
> > > This issue is caused by calling virt_to_page() on the address of
> > > the static variable init_net.
> > > Arm64 considers that addresses of static variables are not valid
> > > virtual addresses.
> > > On x86_64 the same API works without any problem.
> > >
> > > Unfortunately I do not understand the cause of the problem.
> > > I do not see any bugs in my patch.
> > > I'm using an existing API, mem_cgroup_from_obj(), to find the
> > > memory cgroup used to account for the specified object.
> > > In particular, in the current case, I wanted to get the memory
> > > cgroup of the specified network namespace, taking the pointer
> > > from for_each_net().
> > > The first object in this list is the static structure init_net.
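
To make sure I follow, the calling pattern is essentially this (my own
sketch, not the actual patch; locking around the namespace list omitted):

  struct net *net;

  for_each_net(net) {
          /* first iteration: net == &init_net, a static object in .data */
          struct mem_cgroup *memcg = mem_cgroup_from_obj(net);

          if (!memcg)
                  continue;
          /* ... use memcg ... */
  }

So the very first lookup already feeds a kernel-image address into
virt_to_folio(), which is where arm64 trips up.
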
> >
> > root@...t:~# cat /proc/kallsyms |grep -w _data
> > ffff80000a110000 D _data
> > root@...t:~# cat /proc/kallsyms |grep -w _end
> > ffff80000a500000 B _end
> > root@...t:~# cat /proc/kallsyms |grep -w init_net
> > ffff80000a4eb980 B init_net
> >
> > init_net is located in the data section; on arm64 that section is
> > mapped in the vmalloc region, see
> >
> > map_kernel_segment(pgdp, _data, _end, PAGE_KERNEL, &vmlinux_data, 0, 0);
> >
> > and arm has the same behavior.
> >
> > We could let init_net be allocated dynamically, but I think that
> > would change a lot.
> >
> > Any better suggestion, Catalin?
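
FWIW, a quick way to see where init_net lives from the MM point of view
(untested sketch; the expected values are my assumption about the arm64
vs x86_64 layouts, not something I've run):

  pr_info("init_net: virt_addr_valid=%d is_vmalloc_addr=%d\n",
          virt_addr_valid(&init_net), is_vmalloc_addr(&init_net));
  /*
   * Presumably 0/1 on arm64 (image mapped through a vm_struct in the
   * vmalloc range) and 1/0 on x86_64 (where the linear-map helpers
   * also cover the kernel image), which would explain why
   * virt_to_folio(&init_net) only explodes on arm64.
   */
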
>
> or add a vmalloc check in mem_cgroup_from_obj()?
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 27cebaa53472..fb817e5da5f0 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2860,7 +2860,10 @@ struct mem_cgroup *mem_cgroup_from_obj(void *p)
>         if (mem_cgroup_disabled())
>                 return NULL;
>
> -       folio = virt_to_folio(p);
> +       if (unlikely(is_vmalloc_addr(p)))
> +               folio = page_folio(vmalloc_to_page(p));
> +       else
> +               folio = virt_to_folio(p);
>
>         /*
>          * Slab objects are accounted individually, not per-page.
>
It sounds right. Later we can add something like mem_cgroup_from_slab_obj()
to use on hot paths and avoid this check.
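
Something along these lines, perhaps (just a sketch of the idea; the
helper names, mem_cgroup_from_obj_folio() in particular, are
placeholders, not existing code):

/* sketch: factor out the part that operates on an already-known folio */
static struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio,
                                                    void *p);

/* general version: also safe for vmalloc'ed and kernel-image addresses */
struct mem_cgroup *mem_cgroup_from_obj(void *p)
{
        struct folio *folio;

        if (mem_cgroup_disabled())
                return NULL;

        if (unlikely(is_vmalloc_addr(p)))
                folio = page_folio(vmalloc_to_page(p));
        else
                folio = virt_to_folio(p);

        return mem_cgroup_from_obj_folio(folio, p);
}

/* hot-path version: callers guarantee a slab / linear-map object */
struct mem_cgroup *mem_cgroup_from_slab_obj(void *p)
{
        if (mem_cgroup_disabled())
                return NULL;

        return mem_cgroup_from_obj_folio(virt_to_folio(p), p);
}

That way callers which can legitimately see statically allocated objects
(like the netns case here) keep the slow but safe path, and slab hot
paths don't pay for the extra check.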