Message-ID: <20160810150221.GA23703@pc.thejh.net>
Date: Wed, 10 Aug 2016 17:02:21 +0200
From: Jann Horn <jann@...jh.net>
To: Robert Foss <robert.foss@...labora.com>
Cc: Sonny Rao <sonnyrao@...omium.org>, akpm@...ux-foundation.org,
keescook@...omium.org, viro@...iv.linux.org.uk,
gorcunov@...nvz.org, john.stultz@...aro.org,
plaguedbypenguins@...il.com, mguzik@...hat.com,
adobriyan@...il.com, jdanis@...gle.com, calvinowens@...com,
mhocko@...e.com, koct9i@...il.com, vbabka@...e.cz,
n-horiguchi@...jp.nec.com, kirill.shutemov@...ux.intel.com,
ldufour@...ux.vnet.ibm.com, hannes@...xchg.org,
linux-kernel@...r.kernel.org, Ben Zhang <benzh@...omium.org>,
Bryan Freed <bfreed@...omium.org>,
Filipe Brandenburger <filbranden@...omium.org>
Subject: Re: [PACTH v1] mm, proc: Implement /proc/<pid>/totmaps
On Wed, Aug 10, 2016 at 10:16:45AM -0400, Robert Foss wrote:
>
>
> On 2016-08-09 06:30 PM, Jann Horn wrote:
> >On Tue, Aug 09, 2016 at 05:01:44PM -0400, Robert Foss wrote:
> >>On 2016-08-09 03:24 PM, Jann Horn wrote:
> >>>On Tue, Aug 09, 2016 at 12:05:43PM -0400, robert.foss@...labora.com wrote:
> >>>>+	down_read(&mm->mmap_sem);
> >>>>+	hold_task_mempolicy(priv);
> >>>>+
> >>>>+	for (vma = mm->mmap; vma != priv->tail_vma; vma = vma->vm_next) {
> >>>>+		struct mem_size_stats mss;
> >>>>+		struct mm_walk smaps_walk = {
> >>>>+			.pmd_entry = smaps_pte_range,
> >>>>+			.mm = vma->vm_mm,
> >>>>+			.private = &mss,
> >>>>+		};
> >>>>+
> >>>>+		if (vma->vm_mm && !is_vm_hugetlb_page(vma)) {
> >>>>+			memset(&mss, 0, sizeof(mss));
> >>>>+			walk_page_vma(vma, &smaps_walk);
> >>>>+			add_smaps_sum(&mss, mss_sum);
> >>>>+		}
> >>>>+	}
> >>>
> >>>Errrr... what? You accumulate values from mem_size_stats items into a
> >>>struct mss_sum that is associated with the struct file? So when you
> >>>read the file the second time, you get the old values plus the new ones?
> >>>And when you read the file in parallel, you get inconsistent values?
> >>>
> >>>For most files in procfs, the behavior is that you can just call
> >>>pread(fd, buf, sizeof(buf), 0) on the same fd again and again, giving
> >>>you the current values every time, without mutating state. I strongly
> >>>recommend that you get rid of priv->mss and just accumulate the state
> >>>in a local variable (maybe one on the stack).
> >>
> >>So a simple "static struct mem_size_stats" in totmaps_proc_show() would be a
> >>better solution?
> >
> >Er, why "static"? Are you trying to create shared state between different
> >readers for some reason?
> >
>
> I think I'm a bit confused now, how are you suggesting that I replace
> priv->mss?
Like this:
static int totmaps_proc_show(struct seq_file *m, void *data)
{
	struct proc_maps_private *priv = m->private;
	struct mm_struct *mm;
	struct vm_area_struct *vma;
	struct mem_size_stats mss_sum;

	memset(&mss_sum, 0, sizeof(mss_sum));
	[...]
	for (vma = mm->mmap; vma != priv->tail_vma; vma = vma->vm_next) {
		struct mem_size_stats mss;
		struct mm_walk smaps_walk = {
			.pmd_entry = smaps_pte_range,
			.mm = vma->vm_mm,
			.private = &mss,
		};

		if (vma->vm_mm && !is_vm_hugetlb_page(vma)) {
			memset(&mss, 0, sizeof(mss));
			walk_page_vma(vma, &smaps_walk);
			add_smaps_sum(&mss, &mss_sum);
		}
	}
	seq_printf(m,
		   "Rss: %8lu kB\n"
		   "Pss: %8lu kB\n"
		   "Shared_Clean: %8lu kB\n"
		   [...],
		   mss_sum.resident >> 10,
		   (unsigned long)(mss_sum.pss >> (10 + PSS_SHIFT)),
		   mss_sum.shared_clean >> 10,
		   [...]);
	[...]
}