Message-ID: <alpine.LSU.2.21.2201051045540.12365@pobox.suse.cz>
Date: Wed, 5 Jan 2022 11:17:23 +0100 (CET)
From: Miroslav Benes <mbenes@...e.cz>
To: David Vernet <void@...ifault.com>
cc: live-patching@...r.kernel.org, linux-kernel@...r.kernel.org,
jpoimboe@...hat.com, pmladek@...e.com, jikos@...nel.org,
joe.lawrence@...hat.com
Subject: Re: [PATCH] livepatch: Avoid CPU hogging with cond_resched
On Wed, 29 Dec 2021, David Vernet wrote:
> When initializing a 'struct klp_object' in klp_init_object_loaded(), and
> performing relocations in klp_resolve_symbols(), klp_find_object_symbol()
> is invoked to look up the address of a symbol in an already-loaded module
> (or vmlinux). This, in turn, calls kallsyms_on_each_symbol() or
> module_kallsyms_on_each_symbol() to find the address of the symbol that is
> being patched.
>
> It turns out that symbol lookups often take up the most CPU time when
> enabling and disabling a patch, and may hog the CPU and cause other tasks
> on that CPU's runqueue to starve -- even in paths where interrupts are
> enabled. For example, under certain workloads, enabling a KLP patch with
> many objects or functions may cause ksoftirqd to be starved, and thus for
> softirq processing to be backlogged and delayed. This may end up causing TCP
> retransmits on the host where the KLP patch is being applied, and in
> general, may cause any softirqs serviced by ksoftirqd to be delayed while
> the patch is being applied.
>
> So as to ensure that kallsyms_on_each_symbol() does not end up hogging the
> CPU, this patch adds a call to cond_resched() in kallsyms_on_each_symbol()
> and module_kallsyms_on_each_symbol(), which are invoked when doing a symbol
> lookup in vmlinux and a module respectively. Without this patch, if a
> live-patch is applied on a 36-core Intel host with heavy TCP traffic, a
> ~10x spike is observed in TCP retransmits while the patch is being applied.
> Additionally, collecting sched events with perf indicates that ksoftirqd is
> awakened ~1.3 seconds before it's eventually scheduled. With the patch, no
> increase in TCP retransmit events is observed, and ksoftirqd is scheduled
> shortly after it's awakened.
>
> Signed-off-by: David Vernet <void@...ifault.com>
Acked-by: Miroslav Benes <mbenes@...e.cz>

I had ideas similar to those Petr mentioned elsewhere, but I also have no
strong opinion about it, especially since livepatch is the only user of
said interface.
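
For readers landing on this thread later: the change itself is small. A
minimal sketch of its shape, going from memory of the per-symbol loop in
kernel/kallsyms.c (names and signatures may differ slightly across kernel
versions):

	int kallsyms_on_each_symbol(int (*fn)(void *, const char *,
					      struct module *, unsigned long),
				    void *data)
	{
		char namebuf[KSYM_NAME_LEN];
		unsigned long i;
		unsigned int off;
		int ret;

		for (i = 0, off = 0; i < kallsyms_num_syms; i++) {
			off = kallsyms_expand_symbol(off, namebuf,
						     ARRAY_SIZE(namebuf));
			ret = fn(data, namebuf, NULL, kallsyms_sym_address(i));
			if (ret != 0)
				return ret;
			/*
			 * Yield the CPU between symbols so that walking
			 * all of kallsyms cannot starve other tasks on
			 * this runqueue. Safe here because the walk runs
			 * in process context with interrupts enabled.
			 */
			cond_resched();
		}
		return 0;
	}

module_kallsyms_on_each_symbol() gets the same treatment in its own
per-symbol loop.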
M