Message-ID: <CAJD7tkZcZ2mzEAv5x5TQk8of9A7w2p_fY3dGJAM29sXPvS7_RA@mail.gmail.com>
Date: Tue, 31 Jan 2023 11:49:22 -0800
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Alistair Popple <apopple@...dia.com>
Cc: linux-mm@...ck.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org, jgg@...dia.com, jhubbard@...dia.com,
tjmercier@...gle.com, hannes@...xchg.org, surenb@...gle.com,
mkoutny@...e.com, daniel@...ll.ch
Subject: Re: [RFC PATCH 00/19] mm: Introduce a cgroup to limit the amount of
locked and pinned memory
On Tue, Jan 31, 2023 at 3:24 AM Alistair Popple <apopple@...dia.com> wrote:
>
>
> Yosry Ahmed <yosryahmed@...gle.com> writes:
>
> > On Mon, Jan 30, 2023 at 5:07 PM Alistair Popple <apopple@...dia.com> wrote:
> >>
> >>
> >> Yosry Ahmed <yosryahmed@...gle.com> writes:
> >>
> >> > On Mon, Jan 23, 2023 at 9:43 PM Alistair Popple <apopple@...dia.com> wrote:
> >> >>
> >> >> Having large amounts of unmovable or unreclaimable memory in a system
> >> >> can lead to system instability because it increases the likelihood of
> >> >> encountering out-of-memory conditions. Therefore it is desirable to
> >> >> limit the amount of memory users can lock or pin.
> >> >>
> >> >> From userspace such limits can be enforced by setting
> >> >> RLIMIT_MEMLOCK. However there is no standard method that drivers and
> >> >> other in-kernel users can use to check and enforce this limit.
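> >> >>
> >> >> For illustration, the userspace side of this is just the standard
> >> >> rlimit API (minimal sketch):
> >> >>
> >> >>   #include <sys/resource.h>
> >> >>
> >> >>   int raise_memlock_limit(void)
> >> >>   {
> >> >>           struct rlimit rlim;
> >> >>
> >> >>           if (getrlimit(RLIMIT_MEMLOCK, &rlim))
> >> >>                   return -1;
> >> >>
> >> >>           /* Raise the soft limit up to the hard limit. */
> >> >>           rlim.rlim_cur = rlim.rlim_max;
> >> >>           return setrlimit(RLIMIT_MEMLOCK, &rlim);
> >> >>   }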
> >> >>
> >> >> This has led to a large number of inconsistencies in how limits are
> >> >> enforced. For example some drivers will use mm->locked_vm while others
> >> >> will use mm->pinned_vm or user->locked_vm. It is therefore possible to
> >> >> have up to three times RLIMIT_MEMLOCK pinned.
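> >> >>
> >> >> For example, two of the open-coded patterns look roughly like this
> >> >> (simplified):
> >> >>
> >> >>   /* Some drivers charge the atomic pin counter... */
> >> >>   atomic64_add(npages, &mm->pinned_vm);
> >> >>
> >> >>   /* ...while others charge locked_vm under the mmap lock. */
> >> >>   mmap_write_lock(mm);
> >> >>   mm->locked_vm += npages;
> >> >>   mmap_write_unlock(mm);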
> >> >>
> >> >> Having pinned memory limited per-task also makes it easy for users to
> >> >> exceed the limit. For example, memory that drivers pin with
> >> >> pin_user_pages() tends to remain pinned after fork. To deal with this
> >> >> and other issues this series introduces a cgroup for tracking and
> >> >> limiting the number of pages pinned or locked by tasks in the group.
> >> >>
> >> >> However the existing behaviour with regard to the rlimit needs to be
> >> >> maintained. Therefore the lesser of the two limits is
> >> >> enforced. Furthermore, CAP_IPC_LOCK usually allows the rlimit to be
> >> >> bypassed, but no such bypass is allowed for the cgroup limit.
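> >> >>
> >> >> Roughly speaking the enforcement behaves like the following sketch
> >> >> (illustrative only, the helper names here are made up):
> >> >>
> >> >>   /* The cgroup limit always applies... */
> >> >>   if (pins_try_charge(current, npages))
> >> >>           return -ENOMEM;
> >> >>
> >> >>   /* ...whereas CAP_IPC_LOCK bypasses only the rlimit. */
> >> >>   if (!capable(CAP_IPC_LOCK) &&
> >> >>       locked + npages > rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT) {
> >> >>           pins_uncharge(current, npages);
> >> >>           return -ENOMEM;
> >> >>   }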
> >> >>
> >> >> The first part of this series converts existing drivers which
> >> >> open-code the use of locked_vm/pinned_vm over to a common interface
> >> >> which manages the refcounts of the associated task/mm/user
> >> >> structs. This ensures accounting of pages is consistent and makes it
> >> >> easier to add charging of the cgroup.
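> >> >>
> >> >> A typical driver conversion then ends up looking roughly like this
> >> >> (simplified sketch; see the individual patches for the exact
> >> >> interface):
> >> >>
> >> >>   struct vm_account vm_account;
> >> >>
> >> >>   /* Takes references on the task/mm/user structs used for
> >> >>    * accounting. */
> >> >>   vm_account_init_current(&vm_account);
> >> >>
> >> >>   if (vm_account_pinned(&vm_account, npages))
> >> >>           goto err_unpin;
> >> >>   ...
> >> >>   vm_unaccount_pinned(&vm_account, npages);
> >> >>   vm_account_release(&vm_account);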
> >> >>
> >> >> The second part of the series adds the cgroup and converts core mm
> >> >> code such as mlock over to charging the cgroup before finally
> >> >> introducing some selftests.
> >> >
> >> >
> >> > I didn't go through the entire series, so apologies if this was
> >> > mentioned somewhere, but do you mind elaborating on why this is added
> >> > as a separate cgroup controller rather than an extension of the memory
> >> > cgroup controller?
> >>
> >> One of my early prototypes actually did add this to the memcg
> >> controller. However pinned pages fall under their own limit, and we
> >> wanted to always account pages to the cgroup of the task using the
> >> driver rather than, say, folio_memcg(). So adding it to memcg didn't
> >> seem to have much benefit, as we didn't end up using any of the
> >> infrastructure provided by memcg. Hence I thought it was clearer to
> >> just add it as its own controller.
> >
> > To clarify, you account and limit pinned memory based on the cgroup of
> > the process pinning the pages, not based on the cgroup that the pages
> > are actually charged to? Is my understanding correct?
>
> That's correct.
Interesting.
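
In pseudo-code, the distinction (with hypothetical helpers, just to
spell it out):

  /* This series: charge the cgroup of the task doing the pinning... */
  charge(task_cgroup(current), npages);

  /* ...not the cgroup the page's memory is already charged to. */
  charge(folio_memcg(folio), npages);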
>
> > IOW, you limit the amount of memory that processes in a cgroup can
> > pin, not the amount of memory charged to a cgroup that can be pinned?
>
> Right, that's a good clarification which I might steal and add to the
> cover letter.
Feel free to :)
Please also clarify this in the code/docs. Glancing through the
patches, I was asking myself multiple times why this is not
"memory.pinned.[current/max]" or similar.
>
> >>
> >> - Alistair
> >>
> >> >>
> >> >>
> >> >> As I don't have access to systems with all the various devices, I
> >> >> haven't been able to test all the driver changes. Any help there
> >> >> would be appreciated.
> >> >>
> >> >> Alistair Popple (19):
> >> >> mm: Introduce vm_account
> >> >> drivers/vhost: Convert to use vm_account
> >> >> drivers/vdpa: Convert vdpa to use the new vm_account structure
> >> >> infiniband/umem: Convert to use vm_account
> >> >> RDMA/siw: Convert to use vm_account
> >> >> RDMA/usnic: convert to use vm_account
> >> >> vfio/type1: Charge pinned pages to pinned_vm instead of locked_vm
> >> >> vfio/spapr_tce: Convert accounting to pinned_vm
> >> >> io_uring: convert to use vm_account
> >> >> net: skb: Switch to using vm_account
> >> >> xdp: convert to use vm_account
> >> >> kvm/book3s_64_vio: Convert account_locked_vm() to vm_account_pinned()
> >> >> fpga: dfl: afu: convert to use vm_account
> >> >> mm: Introduce a cgroup for pinned memory
> >> >> mm/util: Extend vm_account to charge pages against the pin cgroup
> >> >> mm/util: Refactor account_locked_vm
> >> >> mm: Convert mmap and mlock to use account_locked_vm
> >> >> mm/mmap: Charge locked memory to pins cgroup
> >> >> selftests/vm: Add pins-cgroup selftest for mlock/mmap
> >> >>
> >> >> MAINTAINERS | 8 +-
> >> >> arch/powerpc/kvm/book3s_64_vio.c | 10 +-
> >> >> arch/powerpc/mm/book3s64/iommu_api.c | 29 +--
> >> >> drivers/fpga/dfl-afu-dma-region.c | 11 +-
> >> >> drivers/fpga/dfl-afu.h | 1 +-
> >> >> drivers/infiniband/core/umem.c | 16 +-
> >> >> drivers/infiniband/core/umem_odp.c | 6 +-
> >> >> drivers/infiniband/hw/usnic/usnic_uiom.c | 13 +-
> >> >> drivers/infiniband/hw/usnic/usnic_uiom.h | 1 +-
> >> >> drivers/infiniband/sw/siw/siw.h | 2 +-
> >> >> drivers/infiniband/sw/siw/siw_mem.c | 20 +--
> >> >> drivers/infiniband/sw/siw/siw_verbs.c | 15 +-
> >> >> drivers/vdpa/vdpa_user/vduse_dev.c | 20 +--
> >> >> drivers/vfio/vfio_iommu_spapr_tce.c | 15 +-
> >> >> drivers/vfio/vfio_iommu_type1.c | 59 +----
> >> >> drivers/vhost/vdpa.c | 9 +-
> >> >> drivers/vhost/vhost.c | 2 +-
> >> >> drivers/vhost/vhost.h | 1 +-
> >> >> include/linux/cgroup.h | 20 ++-
> >> >> include/linux/cgroup_subsys.h | 4 +-
> >> >> include/linux/io_uring_types.h | 3 +-
> >> >> include/linux/kvm_host.h | 1 +-
> >> >> include/linux/mm.h | 5 +-
> >> >> include/linux/mm_types.h | 88 ++++++++-
> >> >> include/linux/skbuff.h | 6 +-
> >> >> include/net/sock.h | 2 +-
> >> >> include/net/xdp_sock.h | 2 +-
> >> >> include/rdma/ib_umem.h | 1 +-
> >> >> io_uring/io_uring.c | 20 +--
> >> >> io_uring/notif.c | 4 +-
> >> >> io_uring/notif.h | 10 +-
> >> >> io_uring/rsrc.c | 38 +---
> >> >> io_uring/rsrc.h | 9 +-
> >> >> mm/Kconfig | 11 +-
> >> >> mm/Makefile | 1 +-
> >> >> mm/internal.h | 2 +-
> >> >> mm/mlock.c | 76 +------
> >> >> mm/mmap.c | 76 +++----
> >> >> mm/mremap.c | 54 +++--
> >> >> mm/pins_cgroup.c | 273 ++++++++++++++++++++++++-
> >> >> mm/secretmem.c | 6 +-
> >> >> mm/util.c | 196 +++++++++++++++--
> >> >> net/core/skbuff.c | 47 +---
> >> >> net/rds/message.c | 9 +-
> >> >> net/xdp/xdp_umem.c | 38 +--
> >> >> tools/testing/selftests/vm/Makefile | 1 +-
> >> >> tools/testing/selftests/vm/pins-cgroup.c | 271 ++++++++++++++++++++++++-
> >> >> virt/kvm/kvm_main.c | 3 +-
> >> >> 48 files changed, 1114 insertions(+), 401 deletions(-)
> >> >> create mode 100644 mm/pins_cgroup.c
> >> >> create mode 100644 tools/testing/selftests/vm/pins-cgroup.c
> >> >>
> >> >> base-commit: 2241ab53cbb5cdb08a6b2d4688feb13971058f65
> >> >> --
> >> >> git-series 0.9.1
> >> >>
> >>
>