Message-ID: <20200330161534.5d4e7174@gandalf.local.home>
Date:   Mon, 30 Mar 2020 16:15:34 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Pavel Machek <pavel@...x.de>
Cc:     ben.hutchings@...ethink.co.uk, Chris.Paterson2@...esas.com,
        bigeasy@...utronix.de, LKML <linux-kernel@...r.kernel.org>,
        linux-rt-users <linux-rt-users@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Carsten Emde <C.Emde@...dl.org>,
        John Kacur <jkacur@...hat.com>,
        Julia Cartwright <julia@...com>,
        Daniel Wagner <wagi@...om.org>,
        Tom Zanussi <zanussi@...nel.org>,
        "Srivatsa S. Bhat" <srivatsa@...il.mit.edu>
Subject: Re: 4.19.106-rt44 -- boot problems with irqwork: push most work
 into softirq context

On Sun, 22 Mar 2020 00:00:28 +0100
Pavel Machek <pavel@...x.de> wrote:

> Hi!
> 
> > > > > Does this patch help?    
> > > > 
> > > > I don't think so. It also failed, and the failure looks
> > > > identical to me.
> > > > 
> > > > https://gitlab.com/cip-project/cip-kernel/linux-cip/tree/ci/pavel/linux-cip
> > > > https://lava.ciplatform.org/scheduler/job/13110
> > > >   
> > > 
> > > Can you send me a patch that shows the difference between the revert
> > > that you say works and the upstream v4.19-rt tree? (Let me know which
> > > version of v4.19-rt you are basing it on.)
> > 
> > I was using -rt44, and yes, I can probably generate better diffs.
> > 
> > But I think I found it through code review: how does this look to you?
> > I applied it on top of your fix and am testing; two successes so far.
> 
> And I'd recommend some kind of cleanup on top. The code is really
> "interesting" and we don't want to have two copies. Totally untested.
> 
> Looking at the code, it could probably be cleaned up further.
> 
> Signed-off-by: Pavel Machek <pavel@...x.de>
> 
> Best regards,
> 								Pavel

I applied this patch; does it work for you? It's slightly different from
yours, as I thought only the condition needed to be saved, not the lists
themselves.

-- Steve

Index: stable-rt.git/kernel/irq_work.c
===================================================================
--- stable-rt.git.orig/kernel/irq_work.c	2020-03-30 15:11:13.849875145 -0400
+++ stable-rt.git/kernel/irq_work.c	2020-03-30 15:18:54.365242025 -0400
@@ -70,6 +70,12 @@ static void __irq_work_queue_local(struc
 		arch_irq_work_raise();
 }
 
+static inline bool use_lazy_list(struct irq_work *work)
+{
+	return (IS_ENABLED(CONFIG_PREEMPT_RT_FULL) && !(work->flags & IRQ_WORK_HARD_IRQ))
+		|| (work->flags & IRQ_WORK_LAZY);
+}
+
 /* Enqueue the irq work @work on the current CPU */
 bool irq_work_queue(struct irq_work *work)
 {
@@ -81,11 +87,10 @@ bool irq_work_queue(struct irq_work *wor
 
 	/* Queue the entry and raise the IPI if needed. */
 	preempt_disable();
-	if (IS_ENABLED(CONFIG_PREEMPT_RT_FULL) && !(work->flags & IRQ_WORK_HARD_IRQ))
+	if (use_lazy_list(work))
 		list = this_cpu_ptr(&lazy_list);
 	else
 		list = this_cpu_ptr(&raised_list);
-
 	__irq_work_queue_local(work, list);
 	preempt_enable();
 
@@ -106,7 +111,6 @@ bool irq_work_queue_on(struct irq_work *
 
 #else /* CONFIG_SMP: */
 	struct llist_head *list;
-	bool lazy_work, realtime = IS_ENABLED(CONFIG_PREEMPT_RT_FULL);
 
 	/* All work should have been flushed before going offline */
 	WARN_ON_ONCE(cpu_is_offline(cpu));
@@ -116,10 +120,7 @@ bool irq_work_queue_on(struct irq_work *
 		return false;
 
 	preempt_disable();
-
-	lazy_work = work->flags & IRQ_WORK_LAZY;
-
-	if (lazy_work || (realtime && !(work->flags & IRQ_WORK_HARD_IRQ)))
+	if (use_lazy_list(work))
 		list = &per_cpu(lazy_list, cpu);
 	else
 		list = &per_cpu(raised_list, cpu);
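
For anyone reading along, here is a minimal userspace sketch of the
use_lazy_list() predicate in isolation. The flag bit positions and the
IS_ENABLED(CONFIG_PREEMPT_RT_FULL) test are stubbed with assumed values;
the real definitions live in include/linux/irq_work.h and depend on the
kernel configuration, so this only illustrates the dispatch logic:

/*
 * Standalone sketch of the predicate above. Flag bits and the
 * IS_ENABLED(CONFIG_PREEMPT_RT_FULL) check are assumptions here,
 * not the kernel's actual definitions.
 */
#include <stdbool.h>
#include <stdio.h>

#define IRQ_WORK_LAZY		(1UL << 2)	/* assumed bit positions */
#define IRQ_WORK_HARD_IRQ	(1UL << 3)

static const bool preempt_rt_full = true;	/* stands in for IS_ENABLED() */

struct irq_work {
	unsigned long flags;
};

/* Same condition as in the patch: on RT, everything not explicitly
 * marked hard-IRQ goes to the lazy list, as does anything marked LAZY. */
static bool use_lazy_list(struct irq_work *work)
{
	return (preempt_rt_full && !(work->flags & IRQ_WORK_HARD_IRQ))
		|| (work->flags & IRQ_WORK_LAZY);
}

int main(void)
{
	struct irq_work hard  = { .flags = IRQ_WORK_HARD_IRQ };
	struct irq_work lazy  = { .flags = IRQ_WORK_LAZY };
	struct irq_work plain = { .flags = 0 };

	/* With RT enabled, plain work lands on the lazy list too. */
	printf("hard=%d lazy=%d plain=%d\n",
	       (int)use_lazy_list(&hard),
	       (int)use_lazy_list(&lazy),
	       (int)use_lazy_list(&plain));
	return 0;
}

With the condition factored into a single helper, irq_work_queue() and
irq_work_queue_on() can no longer drift apart, which addresses the
"two copies" concern raised above.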
