Message-ID: <CAJrd-UvttDDSL=q1RXC6Z+jvZAGsN2iM8C8xOSrpJFdLb0e-3g@mail.gmail.com>
Date: Tue, 17 Mar 2020 12:04:46 +0900
From: Jaewon Kim <jaewon31.kim@...il.com>
To: Leon Romanovsky <leon@...nel.org>
Cc: Jaewon Kim <jaewon31.kim@...sung.com>,
Vlastimil Babka <vbabka@...e.cz>, adobriyan@...il.com,
Andrew Morton <akpm@...ux-foundation.org>,
Laura Abbott <labbott@...hat.com>,
Sumit Semwal <sumit.semwal@...aro.org>, minchan@...nel.org,
ngupta@...are.org, sergey.senozhatsky.work@...il.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Linux API <linux-api@...r.kernel.org>
Subject: Re: [RFC PATCH 0/3] meminfo: introduce extra meminfo
On Mon, Mar 16, 2020 at 5:32 PM, Leon Romanovsky <leon@...nel.org> wrote:
>
> On Mon, Mar 16, 2020 at 01:07:08PM +0900, Jaewon Kim wrote:
> >
> >
> > On Mar 14, 2020 02:48, Leon Romanovsky wrote:
> > > On Fri, Mar 13, 2020 at 04:19:36PM +0100, Vlastimil Babka wrote:
> > >> +CC linux-api, please include in future versions as well
> > >>
> > >> On 3/11/20 4:44 AM, Jaewon Kim wrote:
> > >>> /proc/meminfo or show_free_areas does not show the full system-wide
> > >>> memory usage. There can be a large amount of hidden memory, especially
> > >>> on embedded Android systems, because they usually have HW IP blocks
> > >>> which have no internal memory and instead use common DRAM.
> > >>>
> > >>> On Android systems, most of this hidden memory consists of vmalloc
> > >>> pages, ION system heap memory, graphics memory, and memory for
> > >>> DRAM-based compressed swap storage. It may be shown in other nodes,
> > >>> but it seems useful if /proc/meminfo shows all of this extra memory
> > >>> information, and show_mem also needs to print it in OOM situations.
> > >>>
> > >>> Fortunately, vmalloc pages are already shown since commit 97105f0ab7b8
> > >>> ("mm: vmalloc: show number of vmalloc pages in /proc/meminfo"). Swap
> > >>> memory using zsmalloc can be seen through vmstat since commit
> > >>> 91537fee0013 ("mm: add NR_ZSMALLOC to vmstat"), but not in /proc/meminfo.
> > >>>
> > >>> The memory usage of a specific driver can vary, so showing that usage
> > >>> through upstream meminfo.c is not easy. To print the extra memory usage
> > >>> of a driver, introduce the following APIs. Each driver needs to maintain
> > >>> its count as an atomic_long_t.
> > >>>
> > >>> int register_extra_meminfo(atomic_long_t *val, int shift,
> > >>>                            const char *name);
> > >>> int unregister_extra_meminfo(atomic_long_t *val);
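> > >>>
> > >>> For illustration, a minimal sketch of how a driver might use these
> > >>> APIs (the mydrv_* names are hypothetical, and the meaning of 'shift'
> > >>> as a scale factor for the displayed value is an assumption; only
> > >>> register_extra_meminfo/unregister_extra_meminfo come from this patch):
> > >>>
> > >>> #include <linux/atomic.h>
> > >>> #include <linux/gfp.h>
> > >>> #include <linux/module.h>
> > >>>
> > >>> static atomic_long_t mydrv_pages = ATOMIC_LONG_INIT(0);
> > >>>
> > >>> static struct page *mydrv_get_pages(unsigned int order)
> > >>> {
> > >>> 	/* Count every page taken from the page allocator. */
> > >>> 	struct page *page = alloc_pages(GFP_KERNEL, order);
> > >>>
> > >>> 	if (page)
> > >>> 		atomic_long_add(1 << order, &mydrv_pages);
> > >>> 	return page;
> > >>> }
> > >>>
> > >>> static void mydrv_put_pages(struct page *page, unsigned int order)
> > >>> {
> > >>> 	atomic_long_sub(1 << order, &mydrv_pages);
> > >>> 	__free_pages(page, order);
> > >>> }
> > >>>
> > >>> static int __init mydrv_init(void)
> > >>> {
> > >>> 	/* The counter holds pages; 'shift' is assumed here to scale
> > >>> 	 * the raw count for display (see the patch for its exact
> > >>> 	 * semantics). */
> > >>> 	return register_extra_meminfo(&mydrv_pages, PAGE_SHIFT,
> > >>> 				      "MyDriver");
> > >>> }
> > >>>
> > >>> static void __exit mydrv_exit(void)
> > >>> {
> > >>> 	unregister_extra_meminfo(&mydrv_pages);
> > >>> }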
> > >>>
> > >>> Currently register ION system heap allocator and zsmalloc pages.
> > >>> Additionally tested on local graphics driver.
> > >>>
> > >>> e.g.) cat /proc/meminfo | tail -3
> > >>> IonSystemHeap: 242620 kB
> > >>> ZsPages: 203860 kB
> > >>> GraphicDriver: 196576 kB
> > >>>
> > >>> e.g.) show_mem on OOM
> > >>> <6>[ 420.856428] Mem-Info:
> > >>> <6>[ 420.856433] IonSystemHeap:32813kB ZsPages:44114kB GraphicDriver:13091kB
> > >>> <6>[ 420.856450] active_anon:957205 inactive_anon:159383 isolated_anon:0
> > >> I like the idea and its dynamic nature, so that drivers not present
> > >> wouldn't add lots of useless zeroes to the output.
> > >> It also simplifies the decision of "what is important enough to need
> > >> its own meminfo entry".
> > >>
> > >> The suggestion of hunting for per-driver /sys files would only work if
> > >> there was a common name for such files, so one can find(1) them easily.
> > >> It also doesn't work for the oom/failed-alloc warning output.
> > > Of course there is a need to have a stable name for such output; this
> > > is why the driver core should be responsible for that and not driver
> > > authors.
> > >
> > > The use case I had in mind is slightly different from looking after OOM.
> > >
> > > I'm interested in optimizing the memory footprint of our drivers to
> > > allow better scaling in SR-IOV mode, where one device creates many
> > > separate copies of itself. Those copies can easily take gigabytes of
> > > RAM due to the need to optimize for high-performance networking.
> > > Sometimes the amount of memory, and not the HW, is what actually
> > > limits the scale factor.
> > >
> > > So I would imagine this feature being used as an aid for driver
> > > developers and not for runtime decisions.
> > >
> > > My 2-cents.
> > >
> > > Thanks
> > >
> > >
> > Thank you for your comment.
> > My idea, I think, may help each driver developer see their memory usage.
> > But I'd like to see overall memory usage through one node.
>
> It is more than enough :).
>
> >
> > Let me know if you have more comment.
> > I am planning to move my logic to a new node, /proc/meminfo_extra, in v2.
>
> Can you please help me understand what that file will look like once
> many drivers start to use this interface? Will I see multiple
> lines?
>
> Something like:
> driver1 ....
> driver2 ....
> driver3 ....
> ...
> driver1000 ....
>
> How can we extend it to support subsystem core code?
I do not have a plan to support subsystem core code.
I just want /proc/meminfo_extra to show the size of memory taken through
the alloc_pages APIs rather than slab sizes. It is meant to expose hidden
huge memory.
I think most drivers do not need to register their size with
/proc/meminfo_extra, because drivers usually use the slab APIs rather
than the alloc_pages APIs, and /proc/slabinfo already covers slab sizes
in detail.
As candidates for /proc/meminfo_extra, I expect only a few drivers that
use huge memory, like over 100 MB, obtained from the alloc_pages APIs.
As you say, if there is a static node on /sys for each driver, it may
be used for all drivers. A sysfs class may be a better way to show
categorized sums. But /proc/meminfo_extra can be another way to show
that hidden huge memory. I mean, your idea and my idea are not exclusive.
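As an illustration of how the new node might look (the entries and sizes
are hypothetical, reusing the drivers from the meminfo example above):

e.g.) cat /proc/meminfo_extra
IonSystemHeap:   242620 kB
ZsPages:         203860 kB
GraphicDriver:   196576 kB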
Thank you
>
> Thanks
>
> >
> > Thank you
> > Jaewon Kim