Message-ID: <20190207150204.7b305de7@doriath>
Date: Thu, 7 Feb 2019 15:02:04 -0500
From: Luiz Capitulino <lcapitulino@...hat.com>
To: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
Cc: Alexander Duyck <alexander.duyck@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
rkrcmar@...hat.com, x86@...nel.org, mingo@...hat.com, bp@...en8.de,
hpa@...or.com, pbonzini@...hat.com, tglx@...utronix.de,
akpm@...ux-foundation.org
Subject: Re: [RFC PATCH 3/4] kvm: Add guest side support for free memory
hints
On Thu, 07 Feb 2019 10:44:11 -0800
Alexander Duyck <alexander.h.duyck@...ux.intel.com> wrote:
> On Thu, 2019-02-07 at 13:21 -0500, Luiz Capitulino wrote:
> > On Mon, 04 Feb 2019 10:15:52 -0800
> > Alexander Duyck <alexander.duyck@...il.com> wrote:
> >
> > > From: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
> > >
> > > Add guest support for providing free memory hints to the KVM hypervisor for
> > > freed pages of huge TLB size or larger. I am restricting the size to
> > > huge TLB order and larger because the hypercalls are too expensive to
> > > perform one per 4K page. Using the huge TLB order became the obvious
> > > choice as it allows us to avoid fragmentation of higher order memory
> > > on the host.
> > >
> > > I have limited the functionality so that it doesn't work when page
> > > poisoning is enabled. I did this because a write to the page after doing an
> > > MADV_DONTNEED would effectively negate the hint, so hinting would just be
> > > wasting cycles.
> > >
> > > Signed-off-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
> > > ---
> > > arch/x86/include/asm/page.h | 13 +++++++++++++
> > > arch/x86/kernel/kvm.c | 23 +++++++++++++++++++++++
> > > 2 files changed, 36 insertions(+)
> > >
> > > diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
> > > index 7555b48803a8..4487ad7a3385 100644
> > > --- a/arch/x86/include/asm/page.h
> > > +++ b/arch/x86/include/asm/page.h
> > > @@ -18,6 +18,19 @@
> > >
> > > struct page;
> > >
> > > +#ifdef CONFIG_KVM_GUEST
> > > +#include <linux/jump_label.h>
> > > +extern struct static_key_false pv_free_page_hint_enabled;
> > > +
> > > +#define HAVE_ARCH_FREE_PAGE
> > > +void __arch_free_page(struct page *page, unsigned int order);
> > > +static inline void arch_free_page(struct page *page, unsigned int order)
> > > +{
> > > +	if (static_branch_unlikely(&pv_free_page_hint_enabled))
> > > +		__arch_free_page(page, order);
> > > +}
> > > +#endif
> > > +
> > > #include <linux/range.h>
> > > extern struct range pfn_mapped[];
> > > extern int nr_pfn_mapped;
> > > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > > index 5c93a65ee1e5..09c91641c36c 100644
> > > --- a/arch/x86/kernel/kvm.c
> > > +++ b/arch/x86/kernel/kvm.c
> > > @@ -48,6 +48,7 @@
> > > #include <asm/tlb.h>
> > >
> > > static int kvmapf = 1;
> > > +DEFINE_STATIC_KEY_FALSE(pv_free_page_hint_enabled);
> > >
> > > static int __init parse_no_kvmapf(char *arg)
> > > {
> > > @@ -648,6 +649,15 @@ static void __init kvm_guest_init(void)
> > >  	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
> > >  		apic_set_eoi_write(kvm_guest_apic_eoi_write);
> > >
> > > +	/*
> > > +	 * The free page hinting doesn't add much value if page poisoning
> > > +	 * is enabled. So we only enable the feature if page poisoning is
> > > +	 * not present.
> > > +	 */
> > > +	if (!page_poisoning_enabled() &&
> > > +	    kvm_para_has_feature(KVM_FEATURE_PV_UNUSED_PAGE_HINT))
> > > +		static_branch_enable(&pv_free_page_hint_enabled);
> > > +
> > >  #ifdef CONFIG_SMP
> > >  	smp_ops.smp_prepare_cpus = kvm_smp_prepare_cpus;
> > >  	smp_ops.smp_prepare_boot_cpu = kvm_smp_prepare_boot_cpu;
> > > @@ -762,6 +772,19 @@ static __init int kvm_setup_pv_tlb_flush(void)
> > > }
> > > arch_initcall(kvm_setup_pv_tlb_flush);
> > >
> > > +void __arch_free_page(struct page *page, unsigned int order)
> > > +{
> > > +	/*
> > > +	 * Limit hints to blocks no smaller than pageblock in
> > > +	 * size to limit the cost for the hypercalls.
> > > +	 */
> > > +	if (order < KVM_PV_UNUSED_PAGE_HINT_MIN_ORDER)
> > > +		return;
> > > +
> > > +	kvm_hypercall2(KVM_HC_UNUSED_PAGE_HINT, page_to_phys(page),
> > > +		       PAGE_SIZE << order);
> >
> > Does this mean that the vCPU executing this will get stuck
> > here for the duration of the hypercall? Isn't that too long,
> > considering that the zone lock is taken and madvise in the
> > host blocks on semaphores?
>
> I'm pretty sure the zone lock isn't held when this is called. The lock
> isn't acquired until later in the path. This gets executed just before
> the page poisoning call, which would take time as well since it has to
> memset an entire page. This function is called as part of
> free_pages_prepare; the zone lock isn't acquired until we call into
> free_one_page, or at a few spots just before calling __free_one_page.
Yeah, you're right of course! I think I mixed up __arch_free_page()
and __free_one_page()... the free_pages() code path won't take any
locks up to the point of calling __arch_free_page(). Sorry for the noise.
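
Just to make sure I have the ordering right this time, my mental model
of the free path is roughly the following (a simplified sketch, not the
actual mm/page_alloc.c code; names and structure are abbreviated):

	/*
	 * Sketch of the ordering only: arch_free_page() -- and therefore
	 * the hinting hypercall -- runs from free_pages_prepare(), before
	 * the zone lock is taken for the buddy merge.
	 */
	static void free_path_sketch(struct page *page, unsigned int order)
	{
		/* no zone lock held here */
		arch_free_page(page, order);	/* hinting hypercall */
		/* the page poisoning hook runs around here as well */

		/* only now does the free path take the zone lock */
		spin_lock(&page_zone(page)->lock);
		/* __free_one_page() merges the page into the buddy lists */
		spin_unlock(&page_zone(page)->lock);
	}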
> My other function in patch 4, which does this from inside of
> __free_one_page, does have to release the zone lock since it is taken
> there.
I haven't checked that one yet; I'll let you know if I have comments.
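
If it follows the pattern I'd expect, the general shape is probably
something like this (a rough sketch of hinting from under the zone
lock, not your actual patch 4 code):

	/*
	 * Rough sketch only, not the patch 4 code: hinting from inside
	 * __free_one_page() means the zone lock has to be dropped around
	 * the (potentially slow) hypercall and re-acquired afterwards.
	 */
	static void hint_under_zone_lock_sketch(struct zone *zone,
						struct page *page,
						unsigned int order)
	{
		if (order < KVM_PV_UNUSED_PAGE_HINT_MIN_ORDER)
			return;

		spin_unlock(&zone->lock);
		kvm_hypercall2(KVM_HC_UNUSED_PAGE_HINT, page_to_phys(page),
			       PAGE_SIZE << order);
		spin_lock(&zone->lock);
	}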