Message-ID: <73511f2e-7b5d-0d29-b8dc-9cb16675afb3@oracle.com>
Date: Sun, 30 May 2021 01:13:36 +0200
From: "Maciej S. Szmigiero" <maciej.szmigiero@...cle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in
memslot_perf_test
On 29.05.2021 12:20, Paolo Bonzini wrote:
> On 28/05/21 21:51, Maciej S. Szmigiero wrote:
>> On 28.05.2021 21:11, Paolo Bonzini wrote:
>>> The memory that is allocated in vm_create is already mapped close to
>>> GPA 0, because test_execute passes the requested memory to
>>> prepare_vm. This causes overlapping memory regions and the
>>> test crashes. For simplicity just move MEM_GPA higher.
>>>
>>> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
>>
>> I am not sure that I understand the issue correctly, is vm_create_default()
>> already reserving low GPAs (around 0x10000000) on some arches or run
>> environments?
>
> It maps the number of pages you pass in the second argument, see
> vm_create.
>
>     if (phy_pages != 0)
>         vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
>                                     0, 0, phy_pages, 0);
>
> In this case:
>
> data->vm = vm_create_default(VCPU_ID, mempages, guest_code);
>
> called here:
>
>     if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
>                     mem_size, slot_runtime)) {
>
> where mempages is mem_size, which is declared as:
>
> uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES;
>
> but actually a better fix is just to pass a small fixed value (e.g. 1024) to vm_create_default,
> since all other regions are added by hand
Yes, but the argument that is passed to vm_create_default() (mem_size
in the case of this test) is not passed as phy_pages to vm_create().
Rather, vm_create_with_vcpus() calculates some upper bound of the extra
memory that is needed to cover that much guest memory (including memory
for its page tables) and passes only that to vm_create().
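Roughly, my mental model of that sizing is the sketch below. It is only
an approximation: the constants are my assumptions for x86-64 (4 KiB
pages, 8-byte PTEs, a small fixed base allocation) and the framework's
actual values may differ per arch and kernel version.

#include <stdint.h>

/*
 * Approximation of the phy_pages value that vm_create_with_vcpus()
 * ends up passing to vm_create() when asked to back "extra_mem_pages"
 * of guest memory (the constants are assumptions, not the real macros).
 */
static uint64_t est_phy_pages(uint64_t extra_mem_pages, uint32_t nr_vcpus)
{
	uint64_t base_pages  = 512;          /* guest code / data / initial tables */
	uint64_t stack_pages = 5 * nr_vcpus; /* per-vCPU stack pages */
	uint64_t ptes_per_pt = 512;          /* 4 KiB page table, 8-byte PTEs */

	/*
	 * Page tables for N pages form a geometric series:
	 * N/512 + N/512^2 + ... which is bounded by 2 * (N/512).
	 */
	uint64_t pt_pages = (extra_mem_pages + stack_pages) / ptes_per_pt * 2;

	return base_pages + stack_pages + pt_pages;
}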
The biggest possible mem_size from memslot_perf_test is 512 MiB + 1 page;
according to my calculations this results in phy_pages of 1029 (~4 MiB)
in the x86-64 case and around 1540 (~6 MiB) in the s390x case (here I am
not sure about the exact number, since s390x has some additional alignment
requirements).
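For reference, plugging the worst case into the sketch above (again with
my assumed x86-64 constants) lands on the same order of magnitude:

	/* 512 MiB + 1 page of guest memory, assuming 4 KiB pages */
	uint64_t extra_mem_pages = (512ULL << 20) / 4096 + 1;  /* 131073 */
	uint64_t phy_pages = est_phy_pages(extra_mem_pages, 1);
	/* 512 + 5 + ((131073 + 5) / 512) * 2 = 1029 pages, i.e. ~4 MiB */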
Both values are well below 256 MiB (0x10000000UL), so I was wondering
under what circumstances these allocations can collide with the test's
own memslots (maybe I am missing something in my analysis).
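To make the non-overlap concrete, here is a purely illustrative check;
the helper is hypothetical, MEM_GPA is the 0x10000000 the test uses
today, and the ~1540 pages is the s390x upper estimate from above:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: do two GPA ranges [start, start + len) intersect? */
static bool gpa_ranges_overlap(uint64_t a_start, uint64_t a_len,
			       uint64_t b_start, uint64_t b_len)
{
	return a_start < b_start + b_len && b_start < a_start + a_len;
}

/*
 * vm_create()'s implicit slot:  ~1540 pages (~6 MiB) starting at GPA 0
 * memslot_perf_test's slots:    starting at MEM_GPA = 0x10000000 (256 MiB)
 *
 * gpa_ranges_overlap(0, 1540 * 4096, 0x10000000, 512ULL << 20) == false,
 * hence my question about where the collision actually comes from.
 */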
>
> Paolo
>
Thanks,
Maciej