Message-ID: <YRMKPd2ZarXCX6vm@t490s>
Date: Tue, 10 Aug 2021 19:22:37 -0400
From: Peter Xu <peterx@...hat.com>
To: Mingwei Zhang <mizhang@...gle.com>
Cc: Jim Mattson <jmattson@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Joerg Roedel <joro@...tes.org>, kvm <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Ben Gardon <bgardon@...gle.com>,
David Matlack <dmatlack@...gle.com>,
Jing Zhang <jingzhangos@...gle.com>
Subject: Re: [PATCH v4 3/3] KVM: x86/mmu: Add detailed page size stats
On Mon, Aug 09, 2021 at 05:01:39PM -0700, Mingwei Zhang wrote:
> Hi Paolo,
Hi, Mingwei,
>
> I recently looked at the queued patches and found this patch from
> Peter Xu (CCed), which also adds 'page stats' information to KVM:
>
> https://patchwork.kernel.org/project/kvm/patch/20210625153214.43106-7-peterx@redhat.com/
>
> From a functionality point of view, the above patch seems to
> duplicate mine.
The rmap statistics are mainly about the rmap itself, not huge pages.
> In detail, though, Peter's approach uses debugfs with proper
> locking to traverse the whole rmap and collect the page counts at
> each granularity.
>
> In comparison, mine adds extra code to the low-level SPTE update
> routines and stores aggregated data in kvm->kvm_stats. This data can
> be retrieved through Jing's fd-based API without any lock required,
> but it does not provide fine-grained information such as the number
> of contiguous 4KB/2MB/1GB pages.
>
> So would you mind giving me some feedback on this patch? I would
> really appreciate it.
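
For concreteness, a minimal sketch of what that aggregated-counter
approach could look like; the struct and function names below are
hypothetical, not taken from the actual patch:

#include <linux/atomic.h>

/*
 * One counter per mapping level (4K/2M/1G), bumped from the SPTE
 * update paths whenever a leaf SPTE is installed (count > 0) or
 * zapped (count < 0).  Names here are illustrative only.
 */
struct page_size_stats {
	atomic64_t pages[3];	/* indexed by level - 1: 4K, 2M, 1G */
};

static inline void account_leaf_spte(struct page_size_stats *stats,
				     int level, int count)
{
	/* level follows PG_LEVEL_4K == 1 ... PG_LEVEL_1G == 3 */
	atomic64_add(count, &stats->pages[level - 1]);
}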
I have a question: why change to using atomic ops?  Most KVM
statistics did not use atomics before.
AFAIK atomics are expensive, and they get even more expensive as the
host gets bigger (the cost can easily reach the ten-nanosecond level).
So I have no idea whether it's worth pursuing that accuracy.
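
To make the trade-off concrete, a rough sketch contrasting a plain add
with the atomic variant; the field names here are hypothetical:

#include <linux/atomic.h>
#include <linux/types.h>

struct demo_stats {
	u64 pages_4k_plain;
	atomic64_t pages_4k_atomic;
};

static void demo_update(struct demo_stats *s, int count)
{
	/* Most existing KVM stats: a plain add; concurrent updates can
	 * be lost, which has been tolerated for statistics so far. */
	s->pages_4k_plain += count;

	/* The atomic variant: a LOCK-prefixed read-modify-write on
	 * x86; the cache line bounces between CPUs, and the latency
	 * grows with the size of the host. */
	atomic64_add(count, &s->pages_4k_atomic);
}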
Thanks,
--
Peter Xu