Message-ID: <CANgfPd-cj4Bau4fYRoQY3hW0K4w19LaFr=6RjgyZGO65fZuq9g@mail.gmail.com>
Date:   Fri, 24 Jan 2020 10:53:57 -0800
From:   Ben Gardon <bgardon@...gle.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        linux-kselftest@...r.kernel.org,
        Cannon Matthews <cannonmatthews@...gle.com>,
        Peter Xu <peterx@...hat.com>,
        Andrew Jones <drjones@...hat.com>,
        Peter Shier <pshier@...gle.com>,
        Oliver Upton <oupton@...gle.com>
Subject: Re: [PATCH v4 10/10] KVM: selftests: Move memslot 0 above KVM
 internal memslots

On Fri, Jan 24, 2020 at 1:01 AM Paolo Bonzini <pbonzini@...hat.com> wrote:
>
> On 23/01/20 19:04, Ben Gardon wrote:
> > KVM creates internal memslots between 3 and 4 GiB paddrs on the first
> > vCPU creation. If memslot 0 is large enough, it collides with these
> > memslots and causes vCPU creation to fail. Instead of creating memslot 0
> > at paddr 0, start it 4G into the guest physical address space.
> >
> > Signed-off-by: Ben Gardon <bgardon@...gle.com>
> > ---
> >  tools/testing/selftests/kvm/lib/kvm_util.c | 11 +++++++----
> >  1 file changed, 7 insertions(+), 4 deletions(-)
>
> This breaks all tests for me:
>
>    $ ./state_test
>    Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
>    Guest physical address width detected: 46
>    ==== Test Assertion Failure ====
>   lib/x86_64/processor.c:580: false
>   pid=4873 tid=4873 - Success
>      1  0x0000000000409996: addr_gva2gpa at processor.c:579
>      2  0x0000000000406a38: addr_gva2hva at kvm_util.c:1636
>      3  0x000000000041036c: kvm_vm_elf_load at elf.c:192
>      4  0x0000000000409ea9: vm_create_default at processor.c:829
>      5  0x0000000000400f6f: main at state_test.c:132
>      6  0x00007f21bdf90494: ?? ??:0
>      7  0x0000000000401287: _start at ??:?
>   No mapping for vm virtual address, gva: 0x400000

Uh oh, I obviously did not test this patch adequately. My apologies.
I'll send another version of this patch after I've had time to test it
better. The memslots between 3G and 4G are also somewhat x86-specific,
so perhaps this code belongs somewhere arch-specific rather than in the
common library code.
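
For reference, the change under discussion amounts to roughly this in
vm_create() (an illustrative sketch, not the exact v4 hunk;
KVM_INTERNAL_MEMSLOTS_END is a made-up name for this example):

	/* Place memslot 0 above the internal memslots KVM creates
	 * between 3 GiB and 4 GiB, instead of at guest paddr 0. */
	#define KVM_INTERNAL_MEMSLOTS_END	(4ULL << 30)

	if (phy_pages != 0)
		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
					    KVM_INTERNAL_MEMSLOTS_END, 0,
					    phy_pages, 0);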

>
> Memslot 0 should not be too large, so this patch should not be needed.

I found that 3GB was not sufficient for memslot zero in my testing
because it needs to contain both the stack for every vCPU and the page
tables for the VM. When I ran with 416 vCPUs and 1.6TB of total RAM,
memslot zero needed to be substantially larger than 3GB. The 4K guest
PTEs required to map 4G per vCPU for 416 vCPUs alone require (((416
* (4<<30)) / 4096) * 8) / (1<<30) = 3.25GB of memory.
I suppose another slot could be used for the page tables, but that
would substantially complicate the implementation of any tests that
want to run large VMs.
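
As a sanity check on that figure, a stand-alone back-of-the-envelope
calculation (my own check, not selftest code) reproduces the 3.25GB:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t vcpus = 416;
		uint64_t bytes_mapped = vcpus * (4ULL << 30); /* 4G mapped per vCPU */
		uint64_t ptes = bytes_mapped / 4096;          /* one PTE per 4K page */
		uint64_t pte_bytes = ptes * 8;                /* 8 bytes per 64-bit PTE */

		/* Prints "PTE memory: 3.25 GiB" */
		printf("PTE memory: %.2f GiB\n",
		       (double)pte_bytes / (1ULL << 30));
		return 0;
	}

Note this counts only the leaf PTEs; the higher-level paging structures
add a little more on top.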

>
> Paolo
>
