Message-ID: <20231019225346.1822282-1-roman.gushchin@linux.dev>
Date: Thu, 19 Oct 2023 15:53:40 -0700
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>,
Dennis Zhou <dennis@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Naresh Kamboju <naresh.kamboju@...aro.org>,
Roman Gushchin <roman.gushchin@...ux.dev>
Subject: [PATCH v5 0/6] mm: improve performance of accounted kernel memory allocations
This patchset improves the performance of accounted kernel memory allocations
by ~30%, as measured by a micro-benchmark [1]. The benchmark is very
straightforward: 1M kmalloc() allocations of 64 bytes each.

Below are the results with kernel memory accounting disabled, with the original
(unpatched) kernel, and with this patchset applied. Times are in microseconds
for 1M allocations (the minimum over 100 runs):
|             | Kmem disabled | Original | Patched | Delta  |
|-------------+---------------+----------+---------+--------|
| User cgroup |         29764 |    84548 |   59078 | -30.0% |
| Root cgroup |         29742 |    48342 |   31501 | -34.8% |
As we can see, the patchset removes most of the overhead when there is no
actual accounting (the task belongs to the root memory cgroup) and almost
halves the accounting overhead otherwise; the breakdown below makes this
concrete.
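
Treating the kmem-disabled run as the baseline, the accounting overhead implied
by the table works out as follows:

  User cgroup: 84548 - 29764 = 54784 us before vs. 59078 - 29764 = 29314 us
               after (~46% less overhead)
  Root cgroup: 48342 - 29742 = 18600 us before vs. 31501 - 29742 = 1759 us
               after (~91% less overhead)
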
The main idea is to get rid of unnecessary memcg-to-objcg conversions and to
switch to scope-based protection of objcgs, which eliminates the extra
operations on objcg reference counters under an RCU read lock. More details
are provided in the individual commit descriptions.
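
To illustrate the general idea, here is a minimal userspace model of the
difference between taking an objcg reference for every allocation and relying
on a scope in which the pointer is known to stay valid. The names and
structure are purely illustrative and are not the kernel implementation:

/* Illustrative userspace model only, not kernel code. */
#include <stdatomic.h>
#include <stdio.h>

struct objcg {
	atomic_int refcnt;
};

static struct objcg the_objcg = { .refcnt = 1 };
static unsigned long charged_bytes;

/* Stand-in for the real accounting work done per allocation. */
static void objcg_charge(struct objcg *objcg, unsigned long size)
{
	(void)objcg;
	charged_bytes += size;
}

/* Old pattern: every allocation bumps and drops the objcg refcount. */
static void charge_with_refcount(unsigned long nr_allocs)
{
	unsigned long i;

	for (i = 0; i < nr_allocs; i++) {
		struct objcg *objcg = &the_objcg;

		atomic_fetch_add(&objcg->refcnt, 1);	/* "get" per allocation */
		objcg_charge(objcg, 64);
		atomic_fetch_sub(&objcg->refcnt, 1);	/* "put" per allocation */
	}
}

/*
 * New pattern: the caller obtains the pointer once for the whole scope in
 * which it is guaranteed to stay valid (in the kernel this guarantee comes
 * from the task context), so no per-allocation refcount operations are
 * needed on the hot path.
 */
static void charge_with_scope(unsigned long nr_allocs)
{
	struct objcg *objcg = &the_objcg;
	unsigned long i;

	for (i = 0; i < nr_allocs; i++)
		objcg_charge(objcg, 64);
}

int main(void)
{
	charge_with_refcount(1000000);
	charge_with_scope(1000000);
	printf("charged %lu bytes, final refcnt %d\n",
	       charged_bytes, atomic_load(&the_objcg.refcnt));
	return 0;
}
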
v5:
- fixed another refcnt bug spotted by Vlastimil
- small refactoring of current_obj_cgroup()
- added a patch for get_obj_cgroup() refactoring
v4:
- fixed a bug spotted by Vlastimil
- cosmetic changes, per Vlastimil
v3:
- fixed a bug spotted by Shakeel
- added some comments, per Shakeel
v2:
- fixed a bug discovered by Naresh Kamboju
- code changes asked by Johannes (added comments, open-coded bit ops)
- merged in a couple of small fixes
v1:
- made the objcg update fully lockless
- fixed !CONFIG_MMU build issues
rfc:
https://lwn.net/Articles/945722/
--
[1]:
static int memory_alloc_test(struct seq_file *m, void *v)
{
	unsigned long i, j;
	void **ptrs;
	ktime_t start, end;
	s64 delta, min_delta = LLONG_MAX;

	ptrs = kvmalloc(sizeof(void *) * 1000000, GFP_KERNEL);
	if (!ptrs)
		return -ENOMEM;

	/* 100 runs of 1M accounted 64-byte allocations; keep the fastest run */
	for (j = 0; j < 100; j++) {
		start = ktime_get();
		for (i = 0; i < 1000000; i++)
			ptrs[i] = kmalloc(64, GFP_KERNEL_ACCOUNT);
		end = ktime_get();

		delta = ktime_us_delta(end, start);
		if (delta < min_delta)
			min_delta = delta;

		for (i = 0; i < 1000000; i++)
			kfree(ptrs[i]);
	}

	kvfree(ptrs);
	seq_printf(m, "%lld us\n", min_delta);

	return 0;
}
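
The posting doesn't show how this function is wired up; one straightforward
way to expose a seq_file show function like this is via procfs, for example
(hypothetical glue code, not part of the patchset):

static int __init memory_alloc_test_init(void)
{
	/* needs <linux/proc_fs.h>; reading the file runs the benchmark */
	if (!proc_create_single("memory_alloc_test", 0444, NULL,
				memory_alloc_test))
		return -ENOMEM;
	return 0;
}
late_initcall(memory_alloc_test_init);

Reading /proc/memory_alloc_test then prints the minimum time measured.
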
--
Signed-off-by: Roman Gushchin (Cruise) <roman.gushchin@...ux.dev>
Roman Gushchin (6):
mm: kmem: optimize get_obj_cgroup_from_current()
mm: kmem: add direct objcg pointer to task_struct
mm: kmem: make memcg keep a reference to the original objcg
mm: kmem: scoped objcg protection
percpu: scoped objcg protection
mm: kmem: reimplement get_obj_cgroup_from_current()
include/linux/memcontrol.h | 28 +++++-
include/linux/sched.h | 4 +
include/linux/sched/mm.h | 4 +
mm/memcontrol.c | 187 +++++++++++++++++++++++++++++++------
mm/percpu.c | 8 +-
mm/slab.h | 15 +--
6 files changed, 204 insertions(+), 42 deletions(-)
--
2.42.0