Message-ID: <Ydf3MBet/B+lUdRv@alley>
Date: Fri, 7 Jan 2022 09:17:52 +0100
From: Petr Mladek <pmladek@...e.com>
To: Song Liu <song@...nel.org>
Cc: void@...ifault.com, live-patching@...r.kernel.org,
open list <linux-kernel@...r.kernel.org>, jpoimboe@...hat.com,
jikos@...nel.org, mbenes@...e.cz, joe.lawrence@...hat.com
Subject: Re: [PATCH] livepatch: Avoid CPU hogging with cond_resched
On Thu 2022-01-06 16:21:18, Song Liu wrote:
> On Wed, Dec 29, 2021 at 1:57 PM David Vernet <void@...ifault.com> wrote:
> >
> > When initializing a 'struct klp_object' in klp_init_object_loaded(), and
> > performing relocations in klp_resolve_symbols(), klp_find_object_symbol()
> > is invoked to look up the address of a symbol in an already-loaded module
> > (or vmlinux). This, in turn, calls kallsyms_on_each_symbol() or
> > module_kallsyms_on_each_symbol() to find the address of the symbol that is
> > being patched.
> >
> > It turns out that symbol lookups often take up the most CPU time when
> > enabling and disabling a patch, and may hog the CPU and cause other tasks
> > on that CPU's runqueue to starve -- even in paths where interrupts are
> > enabled. For example, under certain workloads, enabling a KLP patch with
> > many objects or functions may cause ksoftirqd to be starved, and thus for
> > interrupts to be backlogged and delayed. This may end up causing TCP
> > retransmits on the host where the KLP patch is being applied, and in
> > general, may cause any interrupts serviced by ksoftirqd to be delayed while
> > the patch is being applied.
> >
> > So as to ensure that kallsyms_on_each_symbol() does not end up hogging the
> > CPU, this patch adds a call to cond_resched() in kallsyms_on_each_symbol()
> > and module_kallsyms_on_each_symbol(), which are invoked when doing a symbol
> > lookup in vmlinux and a module respectively. Without this patch, if a
> > live-patch is applied on a 36-core Intel host with heavy TCP traffic, a
> > ~10x spike is observed in TCP retransmits while the patch is being applied.
> > Additionally, collecting sched events with perf indicates that ksoftirqd is
> > awakened ~1.3 seconds before it's eventually scheduled. With the patch, no
> > increase in TCP retransmit events is observed, and ksoftirqd is scheduled
> > shortly after it's awakened.
> >
> > Signed-off-by: David Vernet <void@...ifault.com>
>
> Acked-by: Song Liu <song@...nel.org>
>
> PS: Do we observe livepatch takes a longer time to load after this change?
> (I believe longer time shouldn't be a problem at all. Just curious.)
It should depend on the load of the system and on the number of patched
symbols. The module is typically loaded by a process with a normal
priority.
The commit message talks about a 1.3 second delay of ksoftirqd. In
principle, the change means that this 1.3 sec of single-CPU time is now
interleaved with other tasks scheduled on the same CPU. I would
expect that it prolongs the load by just a couple of seconds in
the described use case.
Note that the change has an effect only with voluntary preemption,
where cond_resched() is a real scheduling point. Well, that is
typically used on servers, where livepatching makes sense.
Best Regards,
Petr