Message-ID: <ZtVkaMO0dG082KNK@google.com>
Date: Mon, 2 Sep 2024 07:08:24 +0000
From: Sebastian Ene <sebastianene@...gle.com>
To: Marc Zyngier <maz@...nel.org>
Cc: akpm@...ux-foundation.org, alexghiti@...osinc.com, ankita@...dia.com,
ardb@...nel.org, catalin.marinas@....com,
christophe.leroy@...roup.eu, james.morse@....com,
vdonnefort@...gle.com, mark.rutland@....com, oliver.upton@...ux.dev,
rananta@...gle.com, ryan.roberts@....com, shahuang@...hat.com,
suzuki.poulose@....com, will@...nel.org, yuzenghui@...wei.com,
kvmarm@...ts.linux.dev, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v9 0/5] arm64: ptdump: View the second stage page-tables
On Mon, Sep 02, 2024 at 06:11:04AM +0000, Sebastian Ene wrote:
> On Fri, Aug 30, 2024 at 04:00:11PM +0100, Marc Zyngier wrote:
> > On Fri, 30 Aug 2024 15:44:39 +0100,
> > Marc Zyngier <maz@...nel.org> wrote:
Hello Marc,
I tried with a 16kB host under VHE on QEMU, starting a 4kB guest with
kvmtool:
root@...-qemu-pkvm-host:~# zcat /proc/config.gz | grep "CONFIG_ARM64_[164K]*_PAGES=y"
CONFIG_ARM64_16K_PAGES=y
root@...-qemu-pkvm-host:~# cat /sys/kernel/debug/kvm/263-4/stage2*
2
---[ Guest IPA ]---
0x0000000000000000-0x0000000001020000 16512K 3
0x0000000001020000-0x0000000001024000 16K 3 R W X AF
0x0000000001024000-0x0000000002000000 16240K 3
0x0000000002000000-0x0000000080000000 2016M 2
0x0000000080000000-0x0000000084000000 64M 2 R W AF BLK
0x0000000084000000-0x000000008e000000 160M 2
0x000000008e000000-0x0000000090000000 32M 2 R W AF BLK
0x0000000090000000-0x0000000098000000 128M 2
0x0000000098000000-0x000000009a000000 32M 2 R W X AF BLK
0x000000009a000000-0x000000009c000000 32M 2 R W AF BLK
This looks about right, I guess, and I wonder how I can reproduce what
you are seeing. What kvm-arm.mode is the host running in?
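For reference, this is roughly how I'm walking the debugfs entries on my
side (a minimal sketch; the helper name is mine, and it assumes debugfs
is mounted at the usual /sys/kernel/debug location):

```shell
dump_stage2() {
    # Iterate over every guest directory (named <pid>-<fd>) under the
    # given kvm debugfs root and print its stage2_* entries, mirroring
    # the `cat` invocations earlier in the thread.
    root="$1"
    for vm in "$root"/*-*/; do
        [ -d "$vm" ] || continue
        printf '=== %s ===\n' "$vm"
        cat "$vm"stage2_* 2>/dev/null
    done
}

# e.g.: dump_stage2 /sys/kernel/debug/kvm
```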
Thanks,
Seb
>
> Hello Marc,
>
> > >
> > > Hi Seb,
> >
> > [...]
> >
> > > I've been giving this a go on my test systems with 16k pages, and it
> > > doesn't really work as advertised:
> > >
> > > root@...ette:/sys/kernel/debug/kvm# cat 2573-13/stage2_*
> > > 2
> > > ---[ Guest IPA ]---
> > > 0x0000000000000000-0x0000000008000000 128M
> > > 0x0000000008000000-0x00000000090a0000 17024K 3
> > > 0x00000000090a0000-0x00000000090a4000 16K 3 R W X AF
> > > 0x00000000090a4000-0x000000000a000000 15728K 3
> > >
> > > Only 16kB mapped? This is a full Linux guest running the Debian
> > > installer, and just the kernel is about 20MB (the VM has 4GB of RAM,
> > > and is using QEMU as the VMM)
> > >
> > > So clearly something isn't playing as expected. Also, this '128M'
> > > without a level being displayed makes me wonder. It is probably the
> > > QEMU flash, but then the rest of the addresses don't make much sense
> > > (RAM on QEMU is at 1GB, not at 128MB).
> > >
> > > On another system with kvmtool, I get something similar:
> > >
> > > root@...denum:/home/maz# cat /sys/kernel/debug/kvm/*/stage2_*
> > > 2
> > > ---[ Guest IPA ]---
> > > 0x0000000000000000-0x0000000001020000 16512K 3
> > > 0x0000000001020000-0x0000000001024000 16K 3 R W X AF
> > > 0x0000000001024000-0x0000000002000000 16240K 3
> > >
> > > and kvmtool places the RAM at 2GB. Clearly not what we're seeing here.
> > >
> > > Could you please verify this?
>
> Ugh, this doesn't look right. I will give it a spin with a different
> granule; thanks for bringing this to my attention. I will first check
> whether mm/ptdump.c works as intended.
>
>
> >
> > For the record, on a 4kB host, I get much more plausible results:
> >
> > root@...-leg-emma:/home/maz# cat /sys/kernel/debug/kvm/632-12/stage2_*
> > 3
> > ---[ Guest IPA ]---
> > 0x0000000000000000-0x0000000000200000 2M 2 R AF BLK
> > 0x0000000000200000-0x0000000040000000 1022M 2
> > 0x0000000040000000-0x0000000040200000 2M 2 R W X AF BLK
> > 0x0000000040200000-0x0000000044000000 62M 2
> > 0x0000000044000000-0x0000000044200000 2M 2 R W X AF BLK
> > 0x0000000044200000-0x0000000047600000 52M 2
> > 0x0000000047600000-0x0000000047800000 2M 2 R W AF BLK
> > 0x0000000047800000-0x0000000047e00000 6M 2 R W X AF BLK
> > 0x0000000047e00000-0x0000000048000000 2M 2 R W AF BLK
> > 0x0000000048000000-0x00000000b9c00000 1820M 2
> > 0x00000000b9c00000-0x00000000b9e00000 2M 2 R W X AF BLK
> > 0x00000000b9e00000-0x00000000bb800000 26M 2
> > 0x00000000bb800000-0x00000000bba00000 2M 2 R W X AF BLK
> > 0x00000000bba00000-0x00000000bbe00000 4M 2 R W AF BLK
> > 0x00000000bbe00000-0x00000000bc200000 4M 2 R W X AF BLK
> > 0x00000000bc200000-0x00000000bc800000 6M 2 R W AF BLK
> > 0x00000000bc800000-0x00000000be400000 28M 2
> > 0x00000000be400000-0x00000000bf800000 20M 2 R W X AF BLK
> > 0x00000000bf800000-0x00000000bfe00000 6M 2 R W AF BLK
> > 0x00000000bfe00000-0x00000000c0000000 2M 2 R W X AF BLK
> >
> > So 16kB is the one that needs investigating, and I strongly suspect
> > that 64kB is in the same boat...
> >
> > Thanks,
> >
> > M. (signing off for the day)
> >
>
> Thanks,
> Sebastian
>
> > --
> > Without deviation from the norm, progress is not possible.