Message-Id: <20200323080503.6224-1-jaewon31.kim@samsung.com>
Date:   Mon, 23 Mar 2020 17:05:00 +0900
From:   Jaewon Kim <jaewon31.kim@...sung.com>
To:     gregkh@...uxfoundation.org, leon@...nel.org, vbabka@...e.cz,
        adobriyan@...il.com, akpm@...ux-foundation.org, labbott@...hat.com,
        sumit.semwal@...aro.org, minchan@...nel.org, ngupta@...are.org,
        sergey.senozhatsky.work@...il.com, kasong@...hat.com,
        bhe@...hat.com
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        jaewon31.kim@...il.com, linux-api@...r.kernel.org,
        kexec@...ts.infradead.org, Jaewon Kim <jaewon31.kim@...sung.com>
Subject: [RFC PATCH v2 0/3] meminfo_extra: introduce meminfo extra

/proc/meminfo and show_free_areas do not show the full system-wide
memory usage because the memory statistics do not track all allocations.
There can be a large amount of hidden memory, especially on embedded
systems, because some HW IPs in the system use common DRAM instead of
internal memory: their device drivers request large amounts of memory
directly from the page allocator with alloc_pages.

On Android systems, most of this hidden memory appears to be vmalloc
pages, ION system heap memory, graphics memory, and memory for DRAM
based compressed swap storage. Some of it may be visible through other
nodes, but it seems useful for /proc/meminfo_extra to show all of this
extra memory information in one place. show_mem should also print the
info in an OOM situation.

Fortunately, vmalloc pages are already shown since commit 97105f0ab7b8
("mm: vmalloc: show number of vmalloc pages in /proc/meminfo"). Swap
memory using zsmalloc can be seen through vmstat since commit
91537fee0013 ("mm: add NR_ZSMALLOC to vmstat"), but not in
/proc/meminfo.

The memory usage of a specific driver varies from system to system, so
showing it through the upstream meminfo.c is not easy. To print the
extra memory usage of a driver, introduce the following APIs. Each
driver keeps its count in an atomic_long_t.

int register_meminfo_extra(atomic_long_t *val, int shift,
			   const char *name);
int unregister_meminfo_extra(atomic_long_t *val);
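
For illustration only, here is a minimal sketch of how a driver might
use these APIs. The driver name, functions, and counter below are
hypothetical, and the shift argument is assumed to scale the stored
count to bytes (PAGE_SHIFT for a page counter); see the actual patches
for the exact semantics.

#include <linux/atomic.h>
#include <linux/gfp.h>
#include <linux/mm.h>	/* register_meminfo_extra() added by this series */

/* Hypothetical per-driver page counter. */
static atomic_long_t mydrv_pages = ATOMIC_LONG_INIT(0);

static int mydrv_init(void)
{
	/* Assumed to appear in /proc/meminfo_extra as "MyDriver: ... kB". */
	return register_meminfo_extra(&mydrv_pages, PAGE_SHIFT, "MyDriver");
}

static struct page *mydrv_alloc(unsigned int order)
{
	struct page *page = alloc_pages(GFP_KERNEL, order);

	if (page)
		atomic_long_add(1 << order, &mydrv_pages);
	return page;
}

static void mydrv_free(struct page *page, unsigned int order)
{
	atomic_long_sub(1 << order, &mydrv_pages);
	__free_pages(page, order);
}

static void mydrv_exit(void)
{
	unregister_meminfo_extra(&mydrv_pages);
}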

This series registers the ION system heap allocator and zsmalloc pages.
It was additionally tested with a local graphics driver.

e.g.) cat /proc/meminfo_extra | tail -3
IonSystemHeap:    242620 kB
ZsPages:          203860 kB
GraphicDriver:    196576 kB

e.g.) show_mem on OOM
<6>[  420.856428]  Mem-Info:
<6>[  420.856433]  IonSystemHeap:32813kB ZsPages:44114kB GraphicDriver:13091kB
<6>[  420.856450]  active_anon:957205 inactive_anon:159383 isolated_anon:0

---
v2: move to /proc/meminfo_extra, and use RCU
v1: print the info in /proc/meminfo
The discussion on the v1 patch was not resolved. There seems to be
agreement on showing the memory usage, but no consensus on how to
present the information. Another suggestion was to use a separate file
under /sys/ for each driver.
---

Jaewon Kim (3):
  meminfo_extra: introduce meminfo extra
  mm: zsmalloc: include zs page size in meminfo extra
  android: ion: include system heap size in meminfo extra

 drivers/staging/android/ion/ion.c             |   2 +
 drivers/staging/android/ion/ion.h             |   1 +
 drivers/staging/android/ion/ion_system_heap.c |   2 +
 fs/proc/Makefile                              |   1 +
 fs/proc/meminfo_extra.c                       | 123 ++++++++++++++++++++++++++
 include/linux/mm.h                            |   4 +
 mm/page_alloc.c                               |   1 +
 mm/zsmalloc.c                                 |   2 +
 8 files changed, 136 insertions(+)
 create mode 100644 fs/proc/meminfo_extra.c

-- 
2.13.7
