Message-Id: <20230208161654.99556-1-ryncsn@gmail.com>
Date: Thu, 9 Feb 2023 00:16:52 +0800
From: Kairui Song <ryncsn@...il.com>
To: Johannes Weiner <hannes@...xchg.org>,
Suren Baghdasaryan <surenb@...gle.com>
Cc: Chengming Zhou <zhouchengming@...edance.com>,
Michal Koutný <mkoutny@...e.com>,
Tejun Heo <tj@...nel.org>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org, Kairui Song <ryncsn@...il.com>
Subject: [PATCH 0/2] sched/psi: Optimize PSI iteration
Hi all,

Patch 1/2 simplifies cgroup psi retrieval (cgroup_psi()). I didn't see a
measurable performance change from this alone (a rough before/after
sketch follows).
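
For reference, here is a minimal before/after sketch of what the
simplification amounts to. This is only my illustration inferred from the
diffstat (include/linux/psi.h plus a small kernel/cgroup/cgroup.c hunk),
not the actual diff, and it assumes cgroup_psi() currently special-cases
the root cgroup:

/* Before (my reading of include/linux/psi.h): branch on the root cgroup. */
static inline struct psi_group *cgroup_psi(struct cgroup *cgrp)
{
	return cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi;
}

/*
 * After (sketch): if cgroup setup points the root cgroup's ->psi at
 * &psi_system once at boot, the helper becomes a plain pointer load.
 */
static inline struct psi_group *cgroup_psi(struct cgroup *cgrp)
{
	return cgrp->psi;
}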

Patch 2/2 removes the cache. I noticed it has the same hierarchy as the
cgroup it belongs to, so I wondered whether a cache is worth keeping just
to simplify the branch, and whether we can keep the branch simplification
while minimizing the memory footprint in another way. It seems this is
doable (a rough sketch of the idea follows).
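
Roughly, the idea looks like the following. This is only a sketch based
on my reading of kernel/sched/psi.c, not the actual patch, and the
psi_group_change() arguments are illustrative:

	/* Before (sketch): follow the parent pointer cached in each psi_group. */
	for (group = task_psi_group(task); group; group = group->parent)
		psi_group_change(group, cpu, clear, set, now, true);

	/*
	 * After (sketch): walk the cgroup tree directly and derive each
	 * level's psi_group from the cgroup, so psi_group no longer needs
	 * to duplicate the hierarchy with its own parent pointer.
	 */
	for (cgrp = task_dfl_cgroup(task); cgrp; cgrp = cgroup_parent(cgrp))
		psi_group_change(cgroup_psi(cgrp), cpu, clear, set, now, true);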

After the two patches, I see a measurable performance gain using
mmtests/perfpipe (average of 100 runs, ops/sec, higher is better):

KVM guest on an i7-9700:
         psi=0    root cgroup    5 levels of cgroup
Before:  59221    55352          47821
After:   60100    56036          50884

KVM guest on a Ryzen 9 5900HX:
         psi=0    root cgroup    5 levels of cgroup
Before:  144566   138919         128888
After:   145812   139580         133514

Kairui Song (2):
  sched/psi: simplify cgroup psi retrieving
  sched/psi: iterate through cgroups directly

 include/linux/psi.h       |  2 +-
 include/linux/psi_types.h |  1 -
 kernel/cgroup/cgroup.c    |  7 +++++-
 kernel/sched/psi.c        | 45 ++++++++++++++++++++++++++++-----------
 4 files changed, 39 insertions(+), 16 deletions(-)
--
2.39.1