Date:   Fri, 13 Mar 2020 19:48:27 +0200
From:   Leon Romanovsky <leon@...nel.org>
To:     Vlastimil Babka <vbabka@...e.cz>
Cc:     Jaewon Kim <jaewon31.kim@...sung.com>, adobriyan@...il.com,
        akpm@...ux-foundation.org, labbott@...hat.com,
        sumit.semwal@...aro.org, minchan@...nel.org, ngupta@...are.org,
        sergey.senozhatsky.work@...il.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, jaewon31.kim@...il.com,
        Linux API <linux-api@...r.kernel.org>
Subject: Re: [RFC PATCH 0/3] meminfo: introduce extra meminfo

On Fri, Mar 13, 2020 at 04:19:36PM +0100, Vlastimil Babka wrote:
> +CC linux-api, please include in future versions as well
>
> On 3/11/20 4:44 AM, Jaewon Kim wrote:
> > /proc/meminfo and show_free_areas do not show the full system-wide
> > memory usage. There seems to be a lot of hidden memory, especially on
> > embedded Android systems, because such systems usually have HW IP
> > blocks that have no internal memory and instead use the common DRAM.
> >
> > On Android systems, most of that hidden memory seems to be vmalloc
> > pages, ION system heap memory, graphics memory, and memory for
> > DRAM-based compressed swap storage. Some of it may be visible through
> > other nodes, but it would be useful if /proc/meminfo showed all of
> > this extra memory information in one place. show_mem also needs to
> > print this info in an OOM situation.
> >
> > Fortunately, vmalloc pages are already shown since commit 97105f0ab7b8
> > ("mm: vmalloc: show number of vmalloc pages in /proc/meminfo"). Swap
> > memory using zsmalloc can be seen through vmstat since commit
> > 91537fee0013 ("mm: add NR_ZSMALLOC to vmstat"), but not in
> > /proc/meminfo.
> >
> > The memory usage of a specific driver can vary widely, so exposing it
> > through the upstream meminfo.c is not easy. To print the extra memory
> > usage of a driver, introduce the following APIs. Each driver keeps its
> > count in an atomic_long_t.
> >
> > int register_extra_meminfo(atomic_long_t *val, int shift,
> > 			   const char *name);
> > int unregister_extra_meminfo(atomic_long_t *val);
> >
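(As a rough illustration of how a driver might use the proposed API;
this is a minimal hypothetical sketch, where the driver name, the
page-based counter, and the assumption that the shift argument converts
the raw counter value to kB are mine, not taken from the patch set:)

    #include <linux/atomic.h>
    #include <linux/mm.h>
    #include <linux/module.h>

    /* Pages currently held by this hypothetical driver. */
    static atomic_long_t mydrv_pages = ATOMIC_LONG_INIT(0);

    static int __init mydrv_init(void)
    {
    	/* Assumed: shift converts the counter to kB (pages -> kB). */
    	return register_extra_meminfo(&mydrv_pages, PAGE_SHIFT - 10,
    				      "MyDriver");
    }

    static void __exit mydrv_exit(void)
    {
    	unregister_extra_meminfo(&mydrv_pages);
    }

    module_init(mydrv_init);
    module_exit(mydrv_exit);

    /* On each allocation/free the driver would then do, e.g.: */
    /*   atomic_long_add(nr_pages, &mydrv_pages); */
    /*   atomic_long_sub(nr_pages, &mydrv_pages); */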
> > This series currently registers the ION system heap allocator and
> > zsmalloc pages. It was additionally tested on a local graphics driver.
> >
> > e.g.) cat /proc/meminfo | tail -3
> > IonSystemHeap:    242620 kB
> > ZsPages:          203860 kB
> > GraphicDriver:    196576 kB
> >
> > e.g.) show_mem on OOM
> > <6>[  420.856428]  Mem-Info:
> > <6>[  420.856433]  IonSystemHeap:32813kB ZsPages:44114kB GraphicDriver:13091kB
> > <6>[  420.856450]  active_anon:957205 inactive_anon:159383 isolated_anon:0
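(One plausible shape for the machinery behind this output; this is my
sketch, not the actual patch, and the registry layout, the omitted list
locking, and the kB conversion via shift are assumptions:)

    struct extra_meminfo {
    	struct list_head list;
    	atomic_long_t *val;
    	int shift;	/* assumed: value << shift yields kB */
    	const char *name;
    };

    static LIST_HEAD(extra_meminfo_list);

    /* Walked from the /proc/meminfo and show_mem() paths;
     * list locking omitted for brevity. */
    static void show_extra_meminfo(void)
    {
    	struct extra_meminfo *em;

    	list_for_each_entry(em, &extra_meminfo_list, list)
    		printk(KERN_CONT "%s:%ldkB ", em->name,
    		       atomic_long_read(em->val) << em->shift);
    }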
>
> I like the idea and the dynamic nature of this: drivers that are not
> present won't add lots of useless zeroes to the output.
> It also simplifies the decision of "what is important enough to need
> its own meminfo entry".
>
> The suggestion of hunting for per-driver /sys files would only work if
> there were a common name for such files so one could find(1) them easily.
> It also doesn't work for the OOM/failed-alloc warning output.

Of course there is a need for a stable name for such output; this is why
the driver core should be responsible for it, and not driver authors.

The use case I had in mind is slightly different from chasing OOMs.

I'm interested in optimizing the memory footprint of our drivers to
allow better scaling in SR-IOV mode, where one device creates many
separate copies of itself. Those copies can easily take gigabytes of RAM
because they are tuned for high-performance networking. Sometimes it is
the amount of memory, and not the HW, that actually limits the scale
factor.

So I would imagine this feature being used as an aid for driver
developers and not for runtime decisions.

My 2 cents.

Thanks
