Message-Id: <20220429133552.33768-1-zhengqi.arch@bytedance.com>
Date:   Fri, 29 Apr 2022 21:35:34 +0800
From:   Qi Zheng <zhengqi.arch@...edance.com>
To:     akpm@...ux-foundation.org, tglx@...utronix.de,
        kirill.shutemov@...ux.intel.com, mika.penttila@...tfour.com,
        david@...hat.com, jgg@...dia.com, tj@...nel.org, dennis@...nel.org,
        ming.lei@...hat.com
Cc:     linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, songmuchun@...edance.com,
        zhouchengming@...edance.com, Qi Zheng <zhengqi.arch@...edance.com>
Subject: [RFC PATCH 00/18] Try to free user PTE page table pages

Hi,

This patch series tries to free user PTE page table pages when no one is
using them.

The beginning of this story is that some malloc libraries (e.g. jemalloc or
tcmalloc) usually allocate a large amount of virtual address space with mmap()
and do not unmap it. When they want to free physical memory, they use
madvise(MADV_DONTNEED). But madvise() does not free the page tables, so a
process that touches an enormous virtual address space can end up with a large
number of page tables.

The following figures are from a memory usage snapshot of a process that
actually ran on our server:

        VIRT:  55t
        RES:   590g
        VmPTE: 110g

As we can see, the PTE page tables take up 110g, while RES is 590g. In
theory, the process only needs about 1.2g of PTE page tables to map that
physical memory. The reason the PTE page tables occupy so much memory is that
madvise(MADV_DONTNEED) only clears the PTEs and frees the physical memory, but
does not free the PTE page table pages. So we can free those empty PTE page
tables to save memory. In the above case, we could save about 108g in the best
case, and the larger the difference between VIRT and RES, the more memory we
save.
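
For reference, a back-of-the-envelope check of the 1.2g figure (assuming 4K
base pages on x86_64, where each 4K PTE page holds 512 8-byte PTEs and so
maps 2M of memory):

        590g RES / 2M mapped per PTE page  ~= 302,080 PTE pages
        302,080 PTE pages * 4K each        ~= 1.2g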

In this patch series, we add a pte_ref field to the struct page of a page
table page to track how many users a user PTE page table has. Similar to the
page refcount mechanism, a user of a PTE page table should hold a refcount on
it before accessing it. The user PTE page table page may be freed when the
last refcount is dropped.

Different from the idea of my earlier patchset[1], the pte_ref becomes a
struct percpu_ref, and we switch it to atomic mode only in cases such as
MADV_DONTNEED and MADV_FREE that may clear user PTE page table entries, and
then release the user PTE page table page when we see that pte_ref is 0. The
advantage of this is that there is basically no performance overhead in percpu
mode, but we can still free the empty PTE page tables. In addition, the code
of this patchset is much simpler and more portable than the earlier
patchset[1].
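
To make the scheme concrete, here is a minimal sketch of both sides of it,
written directly against the existing percpu_ref API. This is not the code
from the series (the real helpers are pte_tryget()/pte_put() and
try_to_free_user_pte(), see the patch list below), and free_user_pte_page()
is a hypothetical stand-in:

        #include <linux/mm_types.h>
        #include <linux/percpu-refcount.h>

        /* Hypothetical stand-in for the actual freeing path. */
        static void free_user_pte_page(struct page *pte_page)
        {
        }

        /*
         * Fast paths (page faults, GUP, ...) stay in percpu mode, where
         * tryget/put are roughly a this_cpu_inc()/dec().
         */
        static bool pte_user_get(struct percpu_ref *pte_ref)
        {
                return percpu_ref_tryget(pte_ref);
        }

        static void pte_user_put(struct percpu_ref *pte_ref)
        {
                percpu_ref_put(pte_ref);
        }

        /*
         * Zap paths (MADV_DONTNEED/MADV_FREE) switch to atomic mode so
         * that the counter value becomes observable, then free the PTE
         * page table page if it turns out to be unused.
         */
        static void try_to_free_user_pte_sketch(struct percpu_ref *pte_ref,
                                                struct page *pte_page)
        {
                percpu_ref_switch_to_atomic_sync(pte_ref);

                if (percpu_ref_is_zero(pte_ref)) {
                        free_user_pte_page(pte_page);
                        return;
                }

                /* Still in use: go back to the cheap percpu mode. */
                percpu_ref_switch_to_percpu(pte_ref);
        }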

Testing:

The following pseudocode shows the effect of the optimization:

        mmap 50G
        while (1) {
                for (; i < 1024 * 25; i++) {
                        touch 2M memory
                        madvise MADV_DONTNEED 2M
                }
        }
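
For reference, a minimal userspace version of this test could look roughly
like the following (a sketch of the loop above, not the exact reproducer used
for the numbers below):

        #include <string.h>
        #include <sys/mman.h>

        int main(void)
        {
                size_t chunk = 2UL << 20;        /* 2M per iteration */
                size_t nr    = 1024 * 25;        /* 25600 * 2M = 50G */
                char *p = mmap(NULL, nr * chunk, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                if (p == MAP_FAILED)
                        return 1;

                for (;;) {
                        for (size_t i = 0; i < nr; i++) {
                                char *c = p + i * chunk;

                                memset(c, 1, chunk);              /* touch 2M */
                                madvise(c, chunk, MADV_DONTNEED); /* drop 2M of RSS */
                        }
                }
        }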

As we can see, the memory usage of VmPTE is reduced:

                        before                          after
VIRT                   50.0 GB                        50.0 GB
RES                     3.1 MB                         3.1 MB
VmPTE                102640 kB                          96 kB

I have also tested stability with LTP[2] for several weeks and have not seen
any crash so far.

This series is based on v5.18-rc2.

Comments and suggestions are welcome.

Thanks,
Qi.

[1] https://patchwork.kernel.org/project/linux-mm/cover/20211110105428.32458-1-zhengqi.arch@bytedance.com/
[2] https://github.com/linux-test-project/ltp

Qi Zheng (18):
  x86/mm/encrypt: add the missing pte_unmap() call
  percpu_ref: make ref stable after percpu_ref_switch_to_atomic_sync()
    returns
  percpu_ref: make percpu_ref_switch_lock per percpu_ref
  mm: convert to use ptep_clear() in pte_clear_not_present_full()
  mm: split the related definitions of pte_offset_map_lock() into
    pgtable.h
  mm: introduce CONFIG_FREE_USER_PTE
  mm: add pte_to_page() helper
  mm: introduce percpu_ref for user PTE page table page
  pte_ref: add pte_tryget() and {__,}pte_put() helper
  mm: add pte_tryget_map{_lock}() helper
  mm: convert to use pte_tryget_map_lock()
  mm: convert to use pte_tryget_map()
  mm: add try_to_free_user_pte() helper
  mm: use try_to_free_user_pte() in MADV_DONTNEED case
  mm: use try_to_free_user_pte() in MADV_FREE case
  pte_ref: add track_pte_{set, clear}() helper
  x86/mm: add x86_64 support for pte_ref
  Documentation: add document for pte_ref

 Documentation/vm/index.rst         |   1 +
 Documentation/vm/pte_ref.rst       | 210 ++++++++++++++++++++++++++
 arch/x86/Kconfig                   |   1 +
 arch/x86/include/asm/pgtable.h     |   7 +-
 arch/x86/mm/mem_encrypt_identity.c |  10 +-
 fs/proc/task_mmu.c                 |  16 +-
 fs/userfaultfd.c                   |  10 +-
 include/linux/mm.h                 | 162 ++------------------
 include/linux/mm_types.h           |   1 +
 include/linux/percpu-refcount.h    |   6 +-
 include/linux/pgtable.h            | 196 +++++++++++++++++++++++-
 include/linux/pte_ref.h            |  73 +++++++++
 include/linux/rmap.h               |   2 +
 include/linux/swapops.h            |   4 +-
 kernel/events/core.c               |   5 +-
 lib/percpu-refcount.c              |  86 +++++++----
 mm/Kconfig                         |  10 ++
 mm/Makefile                        |   2 +-
 mm/damon/vaddr.c                   |  30 ++--
 mm/debug_vm_pgtable.c              |   2 +-
 mm/filemap.c                       |   4 +-
 mm/gup.c                           |  20 ++-
 mm/hmm.c                           |   9 +-
 mm/huge_memory.c                   |   4 +-
 mm/internal.h                      |   3 +-
 mm/khugepaged.c                    |  18 ++-
 mm/ksm.c                           |   4 +-
 mm/madvise.c                       |  35 +++--
 mm/memcontrol.c                    |   8 +-
 mm/memory-failure.c                |  15 +-
 mm/memory.c                        | 187 +++++++++++++++--------
 mm/mempolicy.c                     |   4 +-
 mm/migrate.c                       |   8 +-
 mm/migrate_device.c                |  22 ++-
 mm/mincore.c                       |   5 +-
 mm/mlock.c                         |   5 +-
 mm/mprotect.c                      |   4 +-
 mm/mremap.c                        |  10 +-
 mm/oom_kill.c                      |   3 +-
 mm/page_table_check.c              |   2 +-
 mm/page_vma_mapped.c               |  59 +++++++-
 mm/pagewalk.c                      |   6 +-
 mm/pte_ref.c                       | 230 +++++++++++++++++++++++++++++
 mm/rmap.c                          |   9 ++
 mm/swap_state.c                    |   4 +-
 mm/swapfile.c                      |  18 ++-
 mm/userfaultfd.c                   |  11 +-
 mm/vmalloc.c                       |   2 +-
 48 files changed, 1203 insertions(+), 340 deletions(-)
 create mode 100644 Documentation/vm/pte_ref.rst
 create mode 100644 include/linux/pte_ref.h
 create mode 100644 mm/pte_ref.c

-- 
2.20.1
