Message-Id: <20230405022702.753323-5-mcgrof@kernel.org>
Date: Tue, 4 Apr 2023 19:27:00 -0700
From: Luis Chamberlain <mcgrof@...nel.org>
To: david@...hat.com, patches@...ts.linux.dev,
linux-modules@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, pmladek@...e.com,
petr.pavlu@...e.com, prarit@...hat.com,
torvalds@...ux-foundation.org, gregkh@...uxfoundation.org,
rafael@...nel.org
Cc: christophe.leroy@...roup.eu, tglx@...utronix.de,
peterz@...radead.org, song@...nel.org, rppt@...nel.org,
dave@...olabs.net, willy@...radead.org, vbabka@...e.cz,
mhocko@...e.com, dave.hansen@...ux.intel.com,
colin.i.king@...il.com, jim.cromie@...il.com,
catalin.marinas@....com, jbaron@...mai.com,
rick.p.edgecombe@...el.com, mcgrof@...nel.org
Subject: [PATCH v2 4/6] module: avoid allocation if module is already present and ready

load_module() will allocate a struct module before even checking
whether the module is already loaded. This can create unnecessary memory
pressure, since we can easily check whether the module is already
present early on, using the copy of the module information from
userspace, after we've validated it a bit.
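
For context, the relevant flow is roughly the following (an
illustrative sketch of the call order in load_module(), not verbatim
code):

	load_module()
	  early_mod_check()      /* this patch adds the duplicate check here */
	  layout_and_allocate()  /* allocates struct module and module memory */
	  add_unformed_module()  /* pre-existing duplicate check, post-allocation */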

This can only be an issue if a system is getting hammered with
userspace module loading requests. Note that there are two ways to load
modules: kernel module auto-loading (in-kernel request_module() calls)
and modprobe calls from userspace. The auto-loading is in-kernel, but
it pings back to userspace to just call modprobe. We already have a way
to restrict the number of concurrent kernel auto-loads in a given time
window, however that does not stop a system from issuing tons of system
calls to load a module, and so the races still exist. Userspace itself
*is* supposed to check whether a module is present before loading it,
but we're observing situations where tons of copies of the same module
are in effect being loaded. Some of these are acknowledged in-kernel
bugs, such as the ACPI frequency modules, for which we already have
fixes merged or in progress, but we can also help a bit more on the
module side to avoid such dramatic situations. All that is just memory
being allocated only to be thrown away.
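
To illustrate the userspace side: a loader can scan /proc/modules
before issuing the load. Below is a minimal, hypothetical C helper
showing that check (modprobe already does the equivalent; this is not
code from this series):

	#include <stdio.h>
	#include <string.h>

	/* Return 1 if a module named 'name' is listed in /proc/modules. */
	static int module_is_loaded(const char *name)
	{
		char line[4096];
		size_t len = strlen(name);
		FILE *f = fopen("/proc/modules", "r");
		int found = 0;

		if (!f)
			return 0;
		while (fgets(line, sizeof(line), f)) {
			/* Each line starts with "<name> <size> ...". */
			if (!strncmp(line, name, len) && line[len] == ' ') {
				found = 1;
				break;
			}
		}
		fclose(f);
		return found;
	}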

To avoid memory pressure in such pathological cases, put a stopgap in
place: we now check for the module being present *before* allocation,
in addition to the existing check right before we add it to the system.
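
The early check uses module_patient_check_exists(), introduced earlier
in this series (patch 3/6). Roughly, its semantics are as follows; this
is an illustrative sketch, not the verbatim implementation, and it
assumes module_mutex is held by the caller:

	static int module_patient_check_exists(const char *name)
	{
		struct module *old;
		int err;

		/* Look at all modules, including still-unformed ones. */
		old = find_module_all(name, strlen(name), true);
		if (old == NULL)
			return 0;

		if (old->state == MODULE_STATE_COMING ||
		    old->state == MODULE_STATE_UNFORMED) {
			/*
			 * A load of the same module is already in flight:
			 * wait for it to either become live or fail, then
			 * look again.
			 */
			mutex_unlock(&module_mutex);
			err = wait_event_interruptible(module_wq,
						       finished_loading(name));
			mutex_lock(&module_mutex);
			if (err)
				return err;
			old = find_module_all(name, strlen(name), true);
			if (old == NULL)
				return 0;
		}

		/* A copy already exists or just became live. */
		return -EEXIST;
	}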

On an 8-vCPU, 8 GiB RAM system using kdevops, running the Linux kernel
selftests kmod test 0008 (kmod.sh -t 0008), I see savings in *peak*
memory consumption of up to ~84 MiB. With the new stress-ng module test
I see a 145 MiB difference in peak memory consumption with 100 ops. The
stress-ng module ops test can be pretty pathological -- it is not
realistic -- however it was what finally allowed us to reproduce issues
which had only been reported on systems with over 400 CPUs [0], using
just 100 ops on an 8-vCPU, 8 GiB RAM system.
This can be observed and visualized below. The time it takes to run the
test is also not affected.

The kmod test 0008:

The gnuplot y-range is set to 400000 KiB (~390 MiB) - 580000 KiB
(~566 MiB) given the tests peak around that range.
cat kmod.plot
set term dumb
set output fileout
set yrange [400000:580000]
plot filein with linespoints title "Memory usage (KiB)"
Before:
root@...d ~ # /data/linux-next/tools/testing/selftests/kmod/kmod.sh -t 0008
root@...d ~ # free -k -s 1 -c 40 | grep Mem | awk '{print $3}' > log-0008-before.txt ^C
root@...d ~ # sort -n -r log-0008-before.txt | head -1
528732
So ~516.33 MiB
After:
root@...d ~ # /data/linux-next/tools/testing/selftests/kmod/kmod.sh -t 0008
root@...d ~ # free -k -s 1 -c 40 | grep Mem | awk '{print $3}' > log-0008-after.txt ^C
root@...d ~ # sort -n -r log-0008-after.txt | head -1
442516
So ~432.14 MiB
That's about ~84 MiB in savings in the worst case. The graphs:
root@...d ~ # gnuplot -e "filein='log-0008-before.txt'; fileout='graph-0008-before.txt'" kmod.plot
root@...d ~ # gnuplot -e "filein='log-0008-after.txt'; fileout='graph-0008-after.txt'" kmod.plot
root@...d ~ # cat graph-0008-before.txt
580000 +-----------------------------------------------------------------+
| + + + + + + + |
560000 |-+ Memory usage (KiB) ***A***-|
| |
540000 |-+ +-|
| |
| *A *AA*AA*A*AA *A*AA A*A*A *AA*A*AA*A A |
520000 |-+A*A*AA *AA*A *A*AA*A*AA *A*A A *A+-|
|*A |
500000 |-+ +-|
| |
480000 |-+ +-|
| |
460000 |-+ +-|
| |
| |
440000 |-+ +-|
| |
420000 |-+ +-|
| + + + + + + + |
400000 +-----------------------------------------------------------------+
0 5 10 15 20 25 30 35 40
root@...d ~ # cat graph-0008-after.txt
580000 +-----------------------------------------------------------------+
| + + + + + + + |
560000 |-+ Memory usage (KiB) ***A***-|
| |
540000 |-+ +-|
| |
| |
520000 |-+ +-|
| |
500000 |-+ +-|
| |
480000 |-+ +-|
| |
460000 |-+ +-|
| |
| *A *A*A |
440000 |-+A*A*AA*A A A*A*AA A*A*AA*A*AA*A*AA*A*AA*AA*A*AA*A*AA-|
|*A *A*AA*A |
420000 |-+ +-|
| + + + + + + + |
400000 +-----------------------------------------------------------------+
0 5 10 15 20 25 30 35 40

The stress-ng module tests:

This is the workload used to try to reproduce the vmap issues reported
by David:
echo 0 > /proc/sys/vm/oom_dump_tasks
./stress-ng --module 100 --module-name xfs
Prior to this commit:
root@...d ~ # free -k -s 1 -c 40 | grep Mem | awk '{print $3}' > baseline-stress-ng.txt
root@...d ~ # sort -n -r baseline-stress-ng.txt | head -1
5046456
After this commit:
root@...d ~ # free -k -s 1 -c 40 | grep Mem | awk '{print $3}' > after-stress-ng.txt
root@...d ~ # sort -n -r after-stress-ng.txt | head -1
4896972
5046456 - 4896972 = 149484 KiB, i.e. ~145.98 MiB.

So with stress-ng this commit saves about 145 MiB in peak memory
consumption when using 100 ops, the workload that reproduced the vmap
issue reported.
cat kmod-simple-stress-ng.plot
set term dumb
set output fileout
set yrange [4700000:5070000]
plot filein with linespoints title "Memory usage (KiB)"
root@...d ~ # gnuplot -e "filein='baseline-stress-ng.txt'; fileout='graph-stress-ng-before.txt'" kmod-simple-stress-ng.plot
root@...d ~ # gnuplot -e "filein='after-stress-ng.txt'; fileout='graph-stress-ng-after.txt'" kmod-simple-stress-ng.plot
root@...d ~ # cat graph-stress-ng-before.txt
+---------------------------------------------------------------+
5.05e+06 |-+ + A + + + + + + +-|
| * Memory usage (KiB) ***A*** |
| * A |
5e+06 |-+ ** ** +-|
| ** * * A |
4.95e+06 |-+ * * A * A* +-|
| * * A A * * * * A |
| * * * * * * *A * * * A * |
4.9e+06 |-+ * * * A*A * A*AA*A A *A **A **A*A *+-|
| A A*A A * A * * A A * A * ** |
| * ** ** * * * * * * * |
4.85e+06 |-+ A A A ** * * ** *-|
| * * * * ** * |
| * A * * * * |
4.8e+06 |-+ * * * A A-|
| * * * |
4.75e+06 |-+ * * * +-|
| * ** |
| * + + + + + + ** + |
4.7e+06 +---------------------------------------------------------------+
0 5 10 15 20 25 30 35 40
root@...d ~ # cat graph-stress-ng-after.txt
+---------------------------------------------------------------+
5.05e+06 |-+ + + + + + + + +-|
| Memory usage (KiB) ***A*** |
| |
5e+06 |-+ +-|
| |
4.95e+06 |-+ +-|
| |
| |
4.9e+06 |-+ *AA +-|
| A*AA*A*A A A*AA*AA*A*AA*A A A A*A *AA*A*A A A*AA*AA |
| * * ** * * * ** * *** * |
4.85e+06 |-+* *** * * * * *** A * * +-|
| * A * * ** * * A * * |
| * * * * ** * * |
4.8e+06 |-+* * * A * * * +-|
| * * * A * * |
4.75e+06 |-* * * * * +-|
| * * * * * |
| * + * *+ + + + + * *+ |
4.7e+06 +---------------------------------------------------------------+
0 5 10 15 20 25 30 35 40
[0] https://lkml.kernel.org/r/20221013180518.217405-1-david@redhat.com
Reported-by: David Hildenbrand <david@...hat.com>
Signed-off-by: Luis Chamberlain <mcgrof@...nel.org>
---
 kernel/module/main.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/module/main.c b/kernel/module/main.c
index 8f382580195b..137fd9292dc0 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -2797,7 +2797,11 @@ static int early_mod_check(struct load_info *info, int flags)
 	if (err)
 		return err;
 
-	return 0;
+	mutex_lock(&module_mutex);
+	err = module_patient_check_exists(info->mod->name);
+	mutex_unlock(&module_mutex);
+
+	return err;
 }
 
 /*
--
2.39.2