Message-ID: <YY4amnb1kcBEVw3E@xz-m1.local>
Date: Fri, 12 Nov 2021 15:41:14 +0800
From: Peter Xu <peterx@...hat.com>
To: Mina Almasry <almasrymina@...gle.com>
Cc: David Hildenbrand <david@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
"Paul E . McKenney" <paulmckrcu@...com>,
Yu Zhao <yuzhao@...gle.com>, Jonathan Corbet <corbet@....net>,
Andrew Morton <akpm@...ux-foundation.org>,
Ivan Teterevkov <ivan.teterevkov@...anix.com>,
Florian Schmidt <florian.schmidt@...anix.com>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v4] mm: Add PM_HUGE_THP_MAPPING to /proc/pid/pagemap
On Wed, Nov 10, 2021 at 09:42:25AM -0800, Mina Almasry wrote:
> Sorry, yes I should update the commit message with this info. The
> issues with smaps are:
> 1. Performance: I've pinged our network service folks to obtain a
> rough perf comparison but I haven't been able to get one. I can try to
> get a performance measurement myself but Peter seems to be also seeing
> this.
No, I was not seeing any real issues in my environment, but I remember people
complaining about it: smaps needs to walk the whole memory of the process, so
if a program is interested in only a small portion of that memory, it will
still be slow because smaps walks all of it anyway.
> 2. smaps output is human readable and a bit convoluted for userspace to parse.
IMHO this is not a major issue. AFAIK lots of programs still parse
human-readable output like smaps to get solid numbers. It's just that it will
indeed be a perf issue when only a part of the memory is of interest.
Could we consider exporting a new smaps interface that:
1. allows specifying a range of memory, and,
2. exposes the information in "struct mem_size_stats" in binary format?
(We may want to replace "unsigned long" with "u64", and also add some
versioning or a "size" field to the struct, though; that seems doable.)
I'm wondering whether this could be helpful in even more scenarios.
--
Peter Xu