Message-ID: <CAL1p7m6rp3LOX=B_UdcvFMdOAYtPNudQtYyFLi=Jv0dTUxsjyA@mail.gmail.com>
Date: Tue, 30 Apr 2019 17:01:21 -0400
From: Joel Savitz <jsavitz@...hat.com>
To: Alexey Dobriyan <adobriyan@...il.com>
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
Ram Pai <linuxram@...ibm.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Huang Ying <ying.huang@...el.com>,
Sandeep Patil <sspatil@...roid.com>,
Rafael Aquini <aquini@...hat.com>,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v2] fs/proc: add VmTaskSize field to /proc/$$/status
Good point, Alexey.
Expect v3 shortly.
Best,
Joel Savitz
On Sat, Apr 27, 2019 at 5:45 PM Alexey Dobriyan <adobriyan@...il.com> wrote:
>
> On Fri, Apr 26, 2019 at 03:02:08PM -0400, Joel Savitz wrote:
> > In the mainline kernel, there is no quick mechanism to get the virtual
> > memory size of the current process from userspace.
> >
> > Despite the current state of affairs, this information is available to the
> > user through several means, one being a linear search of the entire address
> > space. This is an inefficient use of CPU cycles.
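
For reference, that walk amounts to something like the sketch below (my
illustration, not the exact libhugetlbfs code; MAP_FIXED_NOREPLACE,
Linux 4.17+, keeps the probe from clobbering live mappings):

#define _GNU_SOURCE
#include <errno.h>
#include <sys/mman.h>
#include <unistd.h>

/*
 * O(n) probe: walk the address space one page at a time until mmap()
 * rejects the address outright. MAP_FIXED_NOREPLACE fails with EEXIST
 * on occupied addresses instead of replacing them, and with ENOMEM
 * once the requested range lies beyond TASK_SIZE. On a 47-bit address
 * space this is billions of iterations.
 */
static unsigned long find_task_size_linear(void)
{
	unsigned long page = sysconf(_SC_PAGESIZE);
	unsigned long addr;

	for (addr = page; ; addr += page) {
		void *p = mmap((void *)addr, page, PROT_NONE,
			       MAP_PRIVATE | MAP_ANONYMOUS |
			       MAP_FIXED_NOREPLACE, -1, 0);

		if (p != MAP_FAILED) {
			munmap(p, page);
			continue;
		}
		if (errno == ENOMEM)
			return addr;	/* first address past TASK_SIZE */
		/* EEXIST/EPERM: address valid but unusable, keep walking */
	}
}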
>
> You can test only a few known per-arch values. Linear search is a
> self-inflicted wound.
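
Something like the following, I take it — probe a short list of known
boundaries from the top down (candidate values and the slack below each
boundary are illustrative; a real list would carry the exact per-arch
TASK_SIZE values):

#define _GNU_SOURCE
#include <errno.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

static const unsigned long candidates[] = {
	0x0000800000000000UL,	/* 128 TiB: 47-bit (x86_64, ppc64, ...) */
	0x0000010000000000UL,	/* 1 TiB: illustrative 40-bit layout */
	0x00000000c0000000UL,	/* 3 GiB: classic 32-bit split */
};

/* O(1): at most a handful of probes instead of a per-page walk. */
static unsigned long guess_task_size(void)
{
	unsigned long page = sysconf(_SC_PAGESIZE);
	size_t i;

	for (i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
		/* Probe just below the boundary, leaving slack for a
		 * guard page at the very top of the address space. */
		void *want = (void *)(candidates[i] - 2 * page);
		void *p = mmap(want, page, PROT_NONE,
			       MAP_PRIVATE | MAP_ANONYMOUS |
			       MAP_FIXED_NOREPLACE, -1, 0);

		if (p != MAP_FAILED)
			munmap(p, page);
		if (p != MAP_FAILED || errno == EEXIST)
			return candidates[i];	/* boundary is reachable */
	}
	return 0;	/* no candidate matched */
}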
>
> prctl(2) is a more natural place and will also be arch-neutral.
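
If it went the prctl(2) route, the userspace side might look like this
(PR_GET_TASK_SIZE is hypothetical — no such option is allocated today):

#include <stdio.h>
#include <sys/prctl.h>

/* Hypothetical option number, for illustration only. */
#define PR_GET_TASK_SIZE	1000

int main(void)
{
	unsigned long task_size = 0;

	/*
	 * The kernel side would copy the caller's TASK_SIZE into the
	 * buffer passed in arg2; the remaining arguments are unused.
	 */
	if (prctl(PR_GET_TASK_SIZE, (unsigned long)&task_size, 0, 0, 0))
		return 1;
	printf("TASK_SIZE: %lu bytes\n", task_size);
	return 0;
}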
>
> > A component of the libhugetlbfs test suite does exactly this, and as
> > systems' address spaces grow beyond 32 bits, this method becomes
> > exceedingly slow.
>
> > For example, on a ppc64le system with a 47-bit address space, the
> > linear search caused the test to hang for an indeterminate amount of
> > time: I let it run for roughly 10-20 minutes, stepped away, and on
> > returning killed the test and patched it to use this new mechanism.
> > Re-running the new version of the test on a kernel with this patch,
> > the previously bottlenecking codepath completed nearly instantaneously.
> >
> > This patch enabled me to upgrade an O(n) codepath to O(1) in an
> > architecture-independent manner.
>
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/proc/task_mmu.c
> > @@ -74,7 +74,9 @@ void task_mem(struct seq_file *m, struct mm_struct *mm)
> >  	seq_put_decimal_ull_width(m,
> >  		    " kB\nVmPTE:\t", mm_pgtables_bytes(mm) >> 10, 8);
> >  	SEQ_PUT_DEC(" kB\nVmSwap:\t", swap);
> > -	seq_puts(m, " kB\n");
> > +	seq_put_decimal_ull_width(m,
> > +		    " kB\nVmTaskSize:\t", TASK_SIZE >> 10, 8);
> > +	seq_puts(m, " kB\n");
>
> All fields in this file are related to the task. A new field related
> to "current" will stick out like an eyesore.