Message-ID: <Yc0yskk0m2bePLu6@dev0025.ash9.facebook.com>
Date: Wed, 29 Dec 2021 20:16:50 -0800
From: David Vernet <void@...ifault.com>
To: live-patching@...r.kernel.org, linux-kernel@...r.kernel.org,
jpoimboe@...hat.com, pmladek@...e.com, jikos@...nel.org,
mbenes@...e.cz, joe.lawrence@...hat.com
Cc: linux-modules@...r.kernel.org, mcgrof@...nel.org, jeyu@...nel.org,
bpf@...r.kernel.org, ast@...nel.org, daniel@...earbox.net,
andrii@...nel.org, kafai@...com, songliubraving@...com, yhs@...com,
john.fastabend@...il.com, kpsingh@...nel.org,
netdev@...r.kernel.org, memxor@...il.com
Subject: Re: [PATCH] livepatch: Avoid CPU hogging with cond_resched

Adding modules + BPF list and maintainers to this thread. A short sketch of
the symbol lookup path being patched is included below the quoted patch, for
reference.

David Vernet <void@...ifault.com> wrote on Wed [2021-Dec-29 13:56:47 -0800]:
> When initializing a 'struct klp_object' in klp_init_object_loaded(), and
> performing relocations in klp_resolve_symbols(), klp_find_object_symbol()
> is invoked to look up the address of a symbol in an already-loaded module
> (or vmlinux). This, in turn, calls kallsyms_on_each_symbol() or
> module_kallsyms_on_each_symbol() to find the address of the symbol that is
> being patched.
>
> It turns out that symbol lookups often take up the most CPU time when
> enabling and disabling a patch, and may hog the CPU and cause other tasks
> on that CPU's runqueue to starve -- even in paths where interrupts are
> enabled. For example, under certain workloads, enabling a KLP patch with
> many objects or functions may cause ksoftirqd to be starved, and thus for
> softirq processing to be backlogged and delayed. This may end up causing TCP
> retransmits on the host where the KLP patch is being applied, and in general,
> may delay any softirqs serviced by ksoftirqd while the patch is being applied.
>
> To ensure that kallsyms_on_each_symbol() does not end up hogging the
> CPU, this patch adds a call to cond_resched() in kallsyms_on_each_symbol()
> and module_kallsyms_on_each_symbol(), which are invoked when doing a symbol
> lookup in vmlinux and a module respectively. Without this patch, if a
> live-patch is applied on a 36-core Intel host with heavy TCP traffic, a
> ~10x spike is observed in TCP retransmits while the patch is being applied.
> Additionally, collecting sched events with perf indicates that ksoftirqd is
> awakened ~1.3 seconds before it's eventually scheduled. With the patch, no
> increase in TCP retransmit events is observed, and ksoftirqd is scheduled
> shortly after it's awakened.
>
> Signed-off-by: David Vernet <void@...ifault.com>
> ---
>  kernel/kallsyms.c | 1 +
>  kernel/module.c   | 2 ++
>  2 files changed, 3 insertions(+)
>
> diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
> index 0ba87982d017..2a9afe484aec 100644
> --- a/kernel/kallsyms.c
> +++ b/kernel/kallsyms.c
> @@ -223,6 +223,7 @@ int kallsyms_on_each_symbol(int (*fn)(void *, const char *, struct module *,
>  		ret = fn(data, namebuf, NULL, kallsyms_sym_address(i));
>  		if (ret != 0)
>  			return ret;
> +		cond_resched();
>  	}
>  	return 0;
>  }
> diff --git a/kernel/module.c b/kernel/module.c
> index 40ec9a030eec..c96160f7f3f5 100644
> --- a/kernel/module.c
> +++ b/kernel/module.c
> @@ -4462,6 +4462,8 @@ int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
>  				 mod, kallsyms_symbol_value(sym));
>  			if (ret != 0)
>  				goto out;
> +
> +			cond_resched();
>  		}
>  	}
>  out:
> --
> 2.30.2
>
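To expand on the first paragraph above: the path in question is
klp_find_object_symbol() -> kallsyms_on_each_symbol() /
module_kallsyms_on_each_symbol(). Each of those walks every entry in the
respective symbol table and invokes a comparison callback per entry, with no
scheduling point in between, so a patch that resolves many symbols can keep a
CPU busy for a long stretch. Below is a minimal sketch of the vmlinux side
with the proposed change applied; it is simplified from kernel/kallsyms.c
(the symbol-name expansion step is elided), not the literal source.

/* Walk every vmlinux symbol, invoking fn() until it returns non-zero. */
int kallsyms_on_each_symbol(int (*fn)(void *, const char *, struct module *,
                                      unsigned long),
                            void *data)
{
	char namebuf[KSYM_NAME_LEN];
	unsigned int i;
	int ret;

	for (i = 0; i < kallsyms_num_syms; i++) {
		/* ... decompress the i-th symbol name into namebuf ... */
		ret = fn(data, namebuf, NULL, kallsyms_sym_address(i));
		if (ret != 0)
			return ret;
		/*
		 * New in this patch: yield between symbols so the walk
		 * cannot starve other runnable tasks (e.g. ksoftirqd).
		 */
		cond_resched();
	}
	return 0;
}

The module variant is the same pattern nested one level deeper (an outer loop
over each loaded module's symbol table), hence the second cond_resched() in
kernel/module.c. Since cond_resched() only reschedules when another task
actually needs the CPU, the cost in the common case should be negligible.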