Message-ID: <17dcd165-10c2-2153-2914-e610d8e053ea@redhat.com>
Date: Mon, 18 Feb 2019 17:02:51 +0100
From: David Hildenbrand <david@...hat.com>
To: Nitesh Narayan Lal <nitesh@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, pbonzini@...hat.com,
lcapitulino@...hat.com, pagupta@...hat.com, wei.w.wang@...el.com,
yang.zhang.wz@...il.com, riel@...riel.com, mst@...hat.com,
dodgen@...gle.com, konrad.wilk@...cle.com, dhildenb@...hat.com,
aarcange@...hat.com, Alexander Duyck <alexander.duyck@...il.com>
Subject: Re: [RFC][Patch v8 0/7] KVM: Guest Free Page Hinting
On 18.02.19 16:50, Nitesh Narayan Lal wrote:
>
> On 2/16/19 4:40 AM, David Hildenbrand wrote:
>> On 04.02.19 21:18, Nitesh Narayan Lal wrote:
>>
>> Hi Nitesh,
>>
>> I thought again about how s390x handles free page hinting. As that seems
>> to work just fine, I guess sticking to a similar model makes sense.
>>
>>
>> I already explained in this thread how it works on s390x, a short summary:
>>
>> 1. Each VCPU has a buffer of pfns to be reported to the hypervisor. If I
>> am not wrong, it contains 512 entries, so it is exactly 1 page big (see
>> the rough sketch below). This buffer is stored in the hypervisor and
>> works on page granularity.
>>
>> 2. This page buffer is managed via the ESSA instruction. In addition, to
>> synchronize with the guest ("page reused when freeing in the
>> hypervisor"), special bits in the host->guest page table can be
>> set/locked via the ESSA instruction by the guest and similarly accessed
>> by the hypervisor.
>>
>> 3. Once the buffer is full, the guest does a synchronous hypercall,
>> going over all 512 entries and zapping them (== similar to MADV_DONTNEED)
>>
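Just to illustrate the idea (this is not the actual s390x/ESSA code, only a
sketch with invented names): a per-VCPU buffer of 512 PFN entries is exactly
one page, since 512 * 8 bytes = 4096 bytes.

/* Illustrative sketch only; identifiers made up for this mail. */
#define HINT_BUF_ENTRIES 512

struct hint_buffer {
	unsigned long pfns[HINT_BUF_ENTRIES];	/* guest PFNs to report */
	unsigned int nr;			/* entries currently filled */
};
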
>>
>> To mimic that, we
>>
>> 1. Have a static buffer per VCPU in the guest with 512 entries. You
>> basically have that already.
>>
>> 2. On every free, add the page _or_ the page after merging by the buddy
>> (e.g. MAX_ORDER - 1) to the buffer (this is where we could be better
>> than s390x). You basically have that already.
>>
>> 3. If the buffer is full, try to isolate all pages and do a synchronous
>> report to the hypervisor. You have the first part already. The second
>> part would require a change (don't use a separate/global thread to do
>> the hinting, just do it synchronously).
>>
>> 4. Once hinting is done, put back all isolated pages to the buddy (rough
>> sketch below). You basically have that already.
>>
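A rough sketch of the guest-side flow I have in mind (again only a sketch,
all function names here are invented; the real hook would sit in the buddy
freeing path, and the report itself could go via virtio or a bare hypercall
as discussed for 3. below):

/* Placeholders for this sketch; locking/preemption handling omitted. */
void isolate_buffered_pages(struct hint_buffer *buf);
void report_buffer_to_host(struct hint_buffer *buf);
void putback_buffered_pages(struct hint_buffer *buf);

/* Sketch only: called from the buddy freeing path with the PFN of the
 * freed page (or of the merged, e.g. MAX_ORDER - 1, page). */
static void hint_free_page(struct hint_buffer *buf, unsigned long pfn)
{
	buf->pfns[buf->nr++] = pfn;

	if (buf->nr < HINT_BUF_ENTRIES)
		return;

	/*
	 * Buffer is full: take the pages off the buddy so the host cannot
	 * race with reallocation, report them synchronously (comparable to
	 * the ESSA buffer flush), then give them back to the buddy.
	 */
	isolate_buffered_pages(buf);
	report_buffer_to_host(buf);
	putback_buffered_pages(buf);
	buf->nr = 0;
}
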
>>
>> For 3. we can try what you have right now, using virtio. If we detect
>> that's a problem, we can do it similar to what Alexander proposes and
>> just do a bare hypercall. It's just a different way of carrying out the
>> same task.
>>
>>
>> This approach
>> 1. Mimics what s390x does, besides supporting different granularities.
>> To synchronize guest->host we simply take the pages off the buddy.
>>
>> 2. Is basically what Alexander does, however his design limitation is
>> that doing any hinting on smaller granularities will not work because
>> there will be too many synchronous hints. Bad on fragmented guests.
>>
>> 3. Does not require any dynamic data structures in the guest.
>>
>> 4. Does not block allocation paths.
>>
>> 5. Blocks on e.g. every 512th free. It seems to work on s390x, so why
>> shouldn't it work for us? We have to measure.
>>
>> 6. We are free to decide which granularity we report.
>>
>> 7. Potentially works even if the guest memory is fragmented (few
>> MAX_ORDER - 1 pages).
>>
>> It would be worth a try. My feeling is that a synchronous report after
>> e.g. 512 frees should be acceptable, as it seems to be acceptable on
>> s390x (it is basically always enabled there, and nobody complains).
>
> The reason I like the current approach of reporting via a separate kernel
> thread is that it doesn't block any regular allocation/freeing code path
> in any way.
Well, that is partially true. The work has to be done "somewhere", so
once you kick a separate kernel thread, it can easily be scheduled on
the very same VCPU in the very near future. So depending on the user,
the "hickup" is similarly visible.
Having separate kernel threads seems to raise other questions that are not
easy to answer (do we need dynamic data structures, how do we size them,
how many threads do we want with a big number of VCPUs?) and that seem to
be avoidable by keeping it simple and not having separate threads.
Initially I also thought that separate threads were the natural thing to
do, but now I have the feeling that they tend to overcomplicate the
problem. (And I don't want to repeat myself, but on s390x it seems to work
just fine this way, if we want to mimic that.) Especially as long as we
don't know whether doing a hypercall every X frees is really a problem.
>>
>> We would have to play with how to enable/disable reporting and when to
>> not report because it's not worth it in the guest (e.g. low on memory).
>>
>>
>> Do you think something like this would be easy to change/implement and
>> measure?
>
> I can do that as I figure out a real-world guest workload with which the
> two approaches can be compared.
--
Thanks,
David / dhildenb