Date:   Mon, 18 Feb 2019 18:41:16 +0100
From:   David Hildenbrand <>
To:     Alexander Duyck <>
Cc:     "Michael S. Tsirkin" <>,
        Nitesh Narayan Lal <>,
        kvm list <>,
        LKML <>,
        Paolo Bonzini <>,
        Yang Zhang <>,
        Rik van Riel <>,
        Konrad Rzeszutek Wilk <>,
        Andrea Arcangeli <>
Subject: Re: [RFC][Patch v8 0/7] KVM: Guest Free Page Hinting

On 18.02.19 18:31, Alexander Duyck wrote:
> On Mon, Feb 18, 2019 at 8:59 AM David Hildenbrand <> wrote:
>> On 18.02.19 17:49, Michael S. Tsirkin wrote:
>>> On Sat, Feb 16, 2019 at 10:40:15AM +0100, David Hildenbrand wrote:
>>>> It would be worth a try. My feeling is that a synchronous report after
>>>> e.g. 512 frees should be acceptable, as it seems to be acceptable on
>>>> s390x. (basically always enabled, nobody complains).
>>> What slips under the radar on an arch like s390 might
>>> raise issues for a popular arch like x86. My fear would be
>>> if it's only a problem e.g. for realtime. Then you get
>>> a condition that's very hard to trigger and affects
>>> worst case latencies.
>> Realtime should never use free page hinting. Just like it should never
>> use ballooning. Just like it should pin all pages in the hypervisor.
>>> But really what business has something that is supposedly
>>> an optimization blocking a VCPU? We are just freeing up
>>> lots of memory why is it a good idea to slow that
>>> process down?
>> I first want to know that it is a problem before we declare it a
>> problem. I provided an example (s390x) where it does not seem to be a
>> problem. One hypercall ~every 512 frees. As simple as it can get.
>> Not trying to deny that it could be a problem on x86, but then I assume
>> it is only a problem in specific setups.
>> I would much rather prefer a simple solution that can eventually be
>> disabled in selected setups than a complicated solution that tries to fit
>> all possible setups. Realtime is one of the examples where such stuff is
>> to be disabled either way.
>> Optimization of space comes with a price (here: execution time).
> One thing to keep in mind though is that if you are already having to
> pull pages in and out of swap on the host in order to be able to provide
> enough memory for the guests the free page hinting should be a
> significant win in terms of performance.

Indeed. And also we are in a virtualized environment already, we can
have any kind of sudden hiccups. (again, realtime has special
requirements on the setup)

Side note: I like your approach because it is simple. I don't like your
approach because it cannot deal with fragmented memory. And that can
happen easily.

The idea I described here can similarly be an extension of your
approach, merging in the "batched reporting" Nitesh proposed, so we can
report on something < MAX_ORDER, similar to s390x. In the end it boils
down to reporting via hypercall vs. reporting via virtio. The main point
is that it is synchronous and batched. (and that we properly take care
of the race between host freeing and guest allocation)

> So far with my patch set that hints at the PMD level w/ THP enabled I
> am not really seeing that much overhead for the hypercalls. The bigger
> piece that is eating up CPU time is all the page faults and page
> zeroing that is going on as we are cycling the memory in and out of
> the guest. Some of that could probably be resolved by using MADV_FREE,
> but if we are under actual memory pressure I suspect it would behave
> similarly to MADV_DONTNEED.

MADV_FREE is certainly the better thing to do for hinting in my opinion.
It should result in even less overhead. Thanks for the comment about the
hypercall overhead.



David / dhildenb
