Message-ID: <20180806173959.GF1967@redhat.com>
Date:   Mon, 6 Aug 2018 13:39:59 -0400
From:   Andrea Arcangeli <aarcange@...hat.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     Xiao Guangrong <guangrong.xiao@...il.com>,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        rkrcmar@...hat.com, Vitaly Kuznetsov <vkuznets@...hat.com>,
        Junaid Shahid <junaids@...gle.com>,
        Xiao Guangrong <xiaoguangrong@...cent.com>
Subject: Re: [PATCH] KVM: try __get_user_pages_fast even if not in atomic
 context

Hello,

On Mon, Aug 06, 2018 at 01:44:49PM +0200, Paolo Bonzini wrote:
> On 06/08/2018 09:51, Xiao Guangrong wrote:
> > 
> > 
> > On 07/27/2018 11:46 PM, Paolo Bonzini wrote:
> >> We are currently cutting hva_to_pfn_fast short if we do not want an
> >> immediate exit, which is represented by !async && !atomic.  However,
> >> this is unnecessary, and __get_user_pages_fast is *much* faster
> >> because the regular get_user_pages takes pmd_lock/pte_lock.
> >> In fact, when many CPUs take a nested vmexit at the same time
> >> the contention on those locks is visible, and this patch removes
> >> about 25% (compared to 4.18) from vmexit.flat on a 16 vCPU
> >> nested guest.
> >>
> > 
> > Nice improvement.
> > 
> > Then after that, we will unconditionally try hva_to_pfn_fast(), does
> > it hurt the case that the mappings in the host's page tables have not
> > been present yet?
> 
> I don't think so, because that's quite slow anyway.

There will be a minimal impact, but it's worth it.

The reason it's worth it is that we shouldn't be calling
get_user_pages_unlocked in hva_to_pfn_slow when we could instead pass
FOLL_HWPOISON to get_user_pages_fast.

And get_user_pages_fast is really just __get_user_pages_fast +
get_user_pages_unlocked, with only one difference (see below).

Reviewed-by: Andrea Arcangeli <aarcange@...hat.com>

> 
> > Can we apply this tech to other places using gup or even squash it
> > into  get_user_pages()?
> 
> That may make sense.  Andrea, do you have an idea?

About further improvements: looking at commit
5b65c4677a57a1d4414212f9995aa0e46a21ff80, it may be worth adding a new
gup variant, __get_user_pages_fast_irq_enabled, to make our slow path
"__get_user_pages_fast_irq_enabled + get_user_pages_unlocked" really
as fast as get_user_pages_fast (which we can't call in the atomic
case, and which can't take the foll flags; making it take the foll
flags would also make it somewhat slower by adding branches).

If I understand the commit header correctly, "Before" refers to when
get_user_pages_fast was calling __get_user_pages_fast, and "After" is
the optimized version that uses local_irq_disable/enable instead of
local_irq_save/restore.

So we'd need to call a new __get_user_pages_fast_irq_enabled instead
of __get_user_pages_fast, one that would only be safe to call when
irqs are enabled. That is always the case for KVM, including the
atomic case (KVM's atomic case is atomic only because of a spinlock,
not because irqs are disabled). Such a new method would then also be
safe to call from interrupt context, as long as irqs are enabled at
the call site.

Such a change would also help reduce the minimal impact on the _slow
case. x86 would surely be fine with the generic version, and it's
trivial to implement; I haven't checked the details of other
architectures.

Thanks,
Andrea
