Message-ID: <CADrL8HW6fFuFTm1wuW9UC4kr+rmRK4MqrU=bQEWram4xo9JBOw@mail.gmail.com>
Date: Thu, 5 Sep 2024 17:03:36 -0700
From: James Houghton <jthoughton@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org, linux-kernel@...r.kernel.org, 
	Oliver Upton <oliver.upton@...ux.dev>, Marc Zyngier <maz@...nel.org>, Peter Xu <peterx@...hat.com>
Subject: Re: [PATCH 04/22] KVM: selftests: Compute number of extra pages
 needed in mmu_stress_test

On Fri, Aug 9, 2024 at 12:43 PM Sean Christopherson <seanjc@...gle.com> wrote:
>
> Create mmu_stress_test's VM with the correct number of extra pages needed
> to map all of memory in the guest.  The bug hasn't been noticed before as
> the test currently runs only on x86, which maps guest memory with 1GiB
> pages, i.e. doesn't need much memory in the guest for page tables.
>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
>  tools/testing/selftests/kvm/mmu_stress_test.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
> index 847da23ec1b1..5467b12f5903 100644
> --- a/tools/testing/selftests/kvm/mmu_stress_test.c
> +++ b/tools/testing/selftests/kvm/mmu_stress_test.c
> @@ -209,7 +209,13 @@ int main(int argc, char *argv[])
>         vcpus = malloc(nr_vcpus * sizeof(*vcpus));
>         TEST_ASSERT(vcpus, "Failed to allocate vCPU array");
>
> -       vm = vm_create_with_vcpus(nr_vcpus, guest_code, vcpus);
> +       vm = __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus,
> +#ifdef __x86_64__
> +                                   max_mem / SZ_1G,
> +#else
> +                                   max_mem / vm_guest_mode_params[VM_MODE_DEFAULT].page_size,
> +#endif
> +                                   guest_code, vcpus);

Hmm... I'm trying to square this change with the logic in
vm_nr_pages_required(). That logic seems to be doing what you want
(though it always assumes small mappings IIUC).

So it seems like there's something else that's not being accounted
for? (Also without the extra pages, how does this test actually fail?)
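To make sure I'm reading the new arguments right, here's the rough
back-of-the-envelope I have in mind. This is a minimal standalone sketch,
not code from the patch: the amount of guest memory (8GiB), the local
SZ_4K/SZ_1G defines, and the assumption that the default mode uses 4KiB
pages are all made up for illustration rather than pulled from the
selftest headers:

#include <stdint.h>
#include <stdio.h>

#define SZ_4K	0x1000ULL	/* default-mode page size assumed for this sketch */
#define SZ_1G	0x40000000ULL

int main(void)
{
	uint64_t max_mem = 8 * SZ_1G;	/* hypothetical amount of guest memory */

	/* x86 branch of the patch: one extra page per 1GiB of guest memory. */
	uint64_t extra_x86 = max_mem / SZ_1G;

	/*
	 * Generic branch: one extra page per default-mode page of guest
	 * memory to map (4KiB assumed here).
	 */
	uint64_t extra_other = max_mem / SZ_4K;

	printf("x86:   %llu extra pages\n", (unsigned long long)extra_x86);
	printf("other: %llu extra pages\n", (unsigned long long)extra_other);
	return 0;
}

i.e. the !x86 branch requests a full page per 4KiB of guest memory
(~2M extra pages for the 8GiB used here) versus one page per GiB
(8 pages) on x86, which is the gap I'm trying to square with the
existing accounting.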
