Message-ID: <20200117171331.GA17179@blackbody.suse.cz>
Date: Fri, 17 Jan 2020 18:13:31 +0100
From: Michal Koutný <mkoutny@...e.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Michal Hocko <mhocko@...nel.org>,
Christopher Lameter <cl@...ux.com>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org
Subject: Re: SLUB: purpose of sysfs events on cache creation/removal
Hello.
On Thu, Jan 09, 2020 at 11:44:15AM -0800, Andrew Morton <akpm@...ux-foundation.org> wrote:
> I looked at it - there wasn't really any compelling followup.
FTR, I noticed udevd consuming non-negligible CPU time while I was doing
some cgroup stress testing. Even extrapolating to less artificial
situations, the udev events seem to cause needless wakeups of udevd.
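For reference, the events can also be watched directly. A minimal
cross-check, assuming SLUB is the active allocator and its cache
kobjects show up with SUBSYSTEM=slab:

    # Run in another terminal while the test below executes; prints the
    # kernel "add"/"remove" uevents emitted for slab caches.
    udevadm monitor --kernel --subsystem-match=slab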
I used the simple script below:

cat >measure.sh <<'EOD'
# Repeatedly create a memory cgroup, migrate the shell into it and back,
# and remove it -- roughly 40 seconds worth of iterations per period.
sample() {
    local n=$(echo|awk "END {print int(40/$1)}")
    for i in $(seq $n) ; do
        mkdir /sys/fs/cgroup/memory/grp1
        echo 0 >/sys/fs/cgroup/memory/grp1/cgroup.procs
        /usr/bin/sleep $1
        echo 0 >/sys/fs/cgroup/memory/cgroup.procs
        rmdir /sys/fs/cgroup/memory/grp1
    done
}

# Sweep the period and report wall time vs. udevd CPU usage.
for d in 0.004 0.008 0.016 0.032 0.064 0.128 0.256 0.5 1 ; do
    echo 0 >/sys/fs/cgroup/cpuacct/system.slice/systemd-udevd.service/cpuacct.usage
    ( time sample $d ) 2>&1 | grep real
    echo -n "udev "
    cat /sys/fs/cgroup/cpuacct/system.slice/systemd-udevd.service/cpuacct.usage
done
EOD
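To reproduce, run as root (this assumes bash for the `time' keyword and,
as the paths above imply, a cgroup v1 hierarchy with separate memory and
cpuacct mounts plus udevd running as systemd-udevd.service):

    bash ./measure.sh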
and I drew the following ballpark conclusion:
~1.7% CPU time at 1 event/s -> ~60 events/s eat 100% of one CPU
(One event is one mkdir/migrate/rmdir sequence. The numbers are from a
dummy test VM, so take them with a grain of salt.)
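Spelled out, the extrapolation is plain linear scaling of the measured
cost:

    1 event/s ≈ 1.7% of one CPU
    => 100% / 1.7% ≈ 59 events/s, i.e. ~60 events/s saturate one CPU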
> If this change should be pursued then can we please have a formal
> resend?
Who's supposed to do that?
Regards,
Michal