Date: Mon, 15 Apr 2024 14:28:00 +0300
From: Vladimir Kondratiev <vladimir.kondratiev@...ileye.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Vladimir Kondratiev <vladimir.kondratiev@...ileye.com>,
	Crystal Wood <crwood@...hat.com>,
	Vincent Whitchurch <vincent.whitchurch@...s.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Andreas Gruenbacher <agruenba@...hat.com>,
	linux-kernel@...r.kernel.org
Subject: [RFC PATCH] genirq: don't disable BH for PREEMPT_RT

With PREEMPT_RT, both BH and IRQs are handled in threads.
BH is served by ksoftirqd, which runs as SCHED_OTHER, while IRQ threads
run as SCHED_FIFO with some priority (default is 50).
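
For reference, a simplified sketch of where that priority comes from;
it roughly mirrors setup_irq_thread() in kernel/irq/manage.c (not
verbatim kernel code):

	/*
	 * Sketch only: the forced-threaded IRQ handler runs in a
	 * kthread switched to SCHED_FIFO, priority MAX_RT_PRIO/2
	 * (i.e. 50), while ksoftirqd stays SCHED_OTHER.
	 */
	static void irq_thread_set_prio_sketch(struct task_struct *t)
	{
		struct sched_param param = {
			.sched_priority = MAX_RT_PRIO / 2,	/* 50 */
		};

		sched_setscheduler_nocheck(t, SCHED_FIFO, &param);
	}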

On the test platform the following was observed:

- the ksoftirqd thread is running, servicing softirqs
- a hard IRQ is received by the CPU. The default handler wakes up the
  IRQ thread and exits
- in the IRQ thread, force_irqthreads is defined as (true) for
  PREEMPT_RT, thus the handler is irq_forced_thread_fn
- before calling the IRQ handler, irq_forced_thread_fn calls
  local_bh_disable(); it in turn acquires local_lock(&softirq_ctrl.lock)
  (see the sketch after this list)
- softirq_ctrl.lock is still owned by the ksoftirqd thread, so the slow
  path is taken (rt_spin_lock_slowlock), causing a context switch to
  ksoftirqd (with properly adjusted priority) and waiting for it to
  release the lock. Then the context switches back to the IRQ thread
- as a result, the IRQ handler is invoked with an observed delay of
  several hundred microseconds on a 2 GHz platform
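
For context, a simplified sketch of the PREEMPT_RT local_bh_disable()
path that leads to the contention above; it loosely paraphrases
__local_bh_disable_ip() in kernel/softirq.c (names and details
abbreviated, not verbatim kernel code):

	/*
	 * Sketch only: on PREEMPT_RT the outermost local_bh_disable()
	 * of a task takes the per-CPU softirq_ctrl.lock local_lock.
	 * If ksoftirqd already holds it, the caller (here: the IRQ
	 * thread) blocks on the lock and the rtmutex slow path boosts
	 * ksoftirqd until it releases the lock.
	 */
	static void local_bh_disable_rt_sketch(void)
	{
		if (!current->softirq_disable_cnt)
			local_lock(&softirq_ctrl.lock);

		/* per-task/per-CPU disable counting elided */
	}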

This causes a failure to meet the real-time latency requirements.

What is the reason for disabling BH under PREEMPT_RT? It looks more
reasonable to let the scheduler run threads according to their
priorities; the IRQ thread will then preempt BHs and run to completion.

An experiment was conducted with this approach; smooth execution was
observed with no spikes in IRQ latency.

I am likely missing corner cases with this approach, so this is a
request for comments. Input is welcome.

Signed-off-by: Vladimir Kondratiev <vladimir.kondratiev@...ileye.com>
---
 kernel/irq/manage.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index bf9ae8a8686f..d7189ac37fa8 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1191,17 +1191,19 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
 {
 	irqreturn_t ret;
 
-	local_bh_disable();
-	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
+		local_bh_disable();
 		local_irq_disable();
+	}
 	ret = action->thread_fn(action->irq, action->dev_id);
 	if (ret == IRQ_HANDLED)
 		atomic_inc(&desc->threads_handled);
 
 	irq_finalize_oneshot(desc, action);
-	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
 		local_irq_enable();
-	local_bh_enable();
+		local_bh_enable();
+	}
 	return ret;
 }
 
-- 
2.37.3

