Message-ID: <CAL715WJWPzBqmjeTJ6mZa=dUaF5+MdqaCrk5CEzvcz1X99cm0g@mail.gmail.com>
Date:   Tue, 10 Aug 2021 18:06:51 -0700
From:   Mingwei Zhang <mizhang@...gle.com>
To:     Peter Xu <peterx@...hat.com>
Cc:     Jim Mattson <jmattson@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Sean Christopherson <seanjc@...gle.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Joerg Roedel <joro@...tes.org>, kvm <kvm@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Ben Gardon <bgardon@...gle.com>,
        David Matlack <dmatlack@...gle.com>,
        Jing Zhang <jingzhangos@...gle.com>
Subject: Re: [PATCH v4 3/3] KVM: x86/mmu: Add detailed page size stats

Hi Peter,

Thanks for the comments. Please check my feedback inline.

> > From a functionality point of view, the above patch seems to
> > duplicate mine.
>
> The rmap statistics are mainly for rmap, not huge pages.
>

Got it. I guess the focus of the stats is different.


>
> I have a question: why change to using atomic ops?  Most KVM statistics
> don't seem to use atomics.
>
> AFAIK atomics are expensive, and they get even more expensive when the host is
> bigger (it can easily go into the ten-nanosecond range).  So I have no idea
> whether it's worth pursuing that accuracy.
>
> Thanks,

Yes, regarding the use of 'atomic_t': we previously discussed this
internally with Sean. I think the main reason is that KVM's MMU
currently has several modes. One is the MMU running with TDP enabled
(call it the legacy mode in this scope) and another is the TDP MMU
(mmu/tdp_mmu.c). In the legacy mode, the MMU updates SPTEs under a
write lock, while the TDP MMU uses a read lock. Here is a simple code
snippet to illustrate the situation:

        if (is_tdp_mmu_fault)
                read_lock(&vcpu->kvm->mmu_lock);
        else
                write_lock(&vcpu->kvm->mmu_lock);

So here comes the problem: how do we update the page stats correctly
across all modes? In the read-lock case we definitely have to update
the stats atomically unless we add extra locking (which we don't
want). So we could either 1) add a branch and update the stats
differently for each mode, or 2) use an atomic update in all cases.
In the pre-v1 internal review, Sean suggested just using atomics. I
agree, since adding a branch on such a hot path could end up costing
even more overall, especially with branch mispredictions; a rough
sketch of the two options is below. But I appreciate your feedback as
well.
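
Here is a minimal sketch of the two options (the field name
kvm->stat.pages_2m is just a placeholder for whatever the patch
actually uses, and the counter is assumed to be an atomic64_t):

        /* Option 1: branch on the MMU mode in the hot fault path. */
        if (is_tdp_mmu_fault)
                /* mmu_lock held for read: faults can race, so be atomic. */
                atomic64_inc(&kvm->stat.pages_2m);
        else
                /* mmu_lock held for write: exclusive, a plain update is enough. */
                atomic64_set(&kvm->stat.pages_2m,
                             atomic64_read(&kvm->stat.pages_2m) + 1);

        /* Option 2 (what we went with): unconditionally do the atomic
         * increment and skip the branch. */
        atomic64_inc(&kvm->stat.pages_2m);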

Regarding the pursuit of accuracy, I think there are several reasons.
The most critical one I know of is that we need to ensure dirty
logging works correctly, i.e., when dirty logging is enabled, all huge
pages (both 2MB and 1GB) _are_ gone; a rough sketch of that kind of
check is below. Hope that clarifies it a little bit?
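
For example (the stat names here are hypothetical, just to show the
idea), a test could assert that once dirty logging is enabled on all
memslots, the large-page counters have drained to zero, which only
holds if the counters are updated without races:

        /* Once dirty logging is enabled everywhere, every huge mapping
         * should have been split, so the 2MB/1GB counters must be zero. */
        WARN_ON(atomic64_read(&kvm->stat.pages_2m) ||
                atomic64_read(&kvm->stat.pages_1g));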

Thanks.
-Mingwei


>
> --
> Peter Xu
>
