Message-ID: <20221003232033.3404802-2-jstultz@google.com>
Date: Mon, 3 Oct 2022 23:20:31 +0000
From: John Stultz <jstultz@...gle.com>
To: LKML <linux-kernel@...r.kernel.org>
Cc: John Stultz <jstultz@...gle.com>, John Dias <joaodias@...gle.com>,
	"Connor O'Brien" <connoro@...gle.com>, Rick Yiu <rickyiu@...gle.com>,
	John Kacur <jkacur@...hat.com>, Qais Yousef <qais.yousef@....com>,
	Chris Redpath <chris.redpath@....com>,
	Abhijeet Dharmapurikar <adharmap@...cinc.com>,
	Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
	Juri Lelli <juri.lelli@...hat.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Heiko Carstens <hca@...ux.ibm.com>, Vasily Gorbik <gor@...ux.ibm.com>,
	kernel-team@...roid.com, kernel test robot <lkp@...el.com>
Subject: [RFC PATCH v4 1/3] softirq: Add generic accessor to percpu softirq_pending data

In a previous iteration of this patch series, I was checking:
  per_cpu(irq_stat, cpu).__softirq_pending
which resulted in build errors on s390.

This patch tries to create a generic accessor to this percpu
softirq_pending data.

This interface is inherently racy, as it reads percpu data without
a lock. However, being able to peek at the softirq pending data
allows us to make better decisions about rt task placement vs just
ignoring it.

On s390 this call returns 0, which may not be ideal but results in
no functional change from what we do now.

TODO: Heiko suggested changing s390 to use a proper per-cpu irqstat
variable instead. Feedback or suggestions for a better approach here
would be welcome!

Cc: John Dias <joaodias@...gle.com>
Cc: Connor O'Brien <connoro@...gle.com>
Cc: Rick Yiu <rickyiu@...gle.com>
Cc: John Kacur <jkacur@...hat.com>
Cc: Qais Yousef <qais.yousef@....com>
Cc: Chris Redpath <chris.redpath@....com>
Cc: Abhijeet Dharmapurikar <adharmap@...cinc.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Juri Lelli <juri.lelli@...hat.com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Heiko Carstens <hca@...ux.ibm.com>
Cc: Vasily Gorbik <gor@...ux.ibm.com>
Cc: kernel-team@...roid.com
Reported-by: kernel test robot <lkp@...el.com>
Signed-off-by: John Stultz <jstultz@...gle.com>
---
 arch/s390/include/asm/hardirq.h |  6 ++++++
 include/linux/interrupt.h       | 11 +++++++++++
 2 files changed, 17 insertions(+)

diff --git a/arch/s390/include/asm/hardirq.h b/arch/s390/include/asm/hardirq.h
index 58668ffb5488..cd9cc11588ab 100644
--- a/arch/s390/include/asm/hardirq.h
+++ b/arch/s390/include/asm/hardirq.h
@@ -16,6 +16,12 @@
 #define local_softirq_pending() (S390_lowcore.softirq_pending)
 #define set_softirq_pending(x) (S390_lowcore.softirq_pending = (x))
 #define or_softirq_pending(x) (S390_lowcore.softirq_pending |= (x))
+/*
+ * Not sure what the right thing is here for s390,
+ * but returning 0 will result in no logical change
+ * from what happens now
+ */
+#define __cpu_softirq_pending(x) (0)
 
 #define __ARCH_IRQ_STAT
 #define __ARCH_IRQ_EXIT_IRQS_DISABLED

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index a92bce40b04b..a749a8663841 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -527,6 +527,17 @@ DECLARE_STATIC_KEY_FALSE(force_irqthreads_key);
 #define set_softirq_pending(x) (__this_cpu_write(local_softirq_pending_ref, (x)))
 #define or_softirq_pending(x) (__this_cpu_or(local_softirq_pending_ref, (x)))
 
+/**
+ * __cpu_softirq_pending() - Checks to see if softirq is pending on a cpu
+ *
+ * This helper is inherently racy, as we're accessing per-cpu data w/o locks.
+ * But peeking at the flag can still be useful when deciding where to place a
+ * task.
+ */
+static inline u32 __cpu_softirq_pending(int cpu)
+{
+	return (u32)per_cpu(local_softirq_pending_ref, cpu);
+}
 #endif /* local_softirq_pending */
 
 /* Some architectures might implement lazy enabling/disabling of
-- 
2.38.0.rc1.362.ged0d419d3c-goog
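To illustrate the kind of placement decision the commit message alludes to, here is a
minimal sketch of a scheduler-side caller that prefers CPUs without pending softirq
work. It is not part of this patch or series; the function name and calling context
are hypothetical, and only __cpu_softirq_pending() comes from the patch above.

/*
 * Hypothetical illustration only (not part of this patch): a placement
 * helper that skips CPUs which currently have softirqs pending, using
 * the racy-but-cheap __cpu_softirq_pending() peek added above.
 */
#include <linux/cpumask.h>
#include <linux/interrupt.h>

static int pick_cpu_preferring_no_softirq(const struct cpumask *candidates)
{
	int cpu;

	for_each_cpu(cpu, candidates) {
		/* Racy read: the value may be stale by the time we act on it. */
		if (!__cpu_softirq_pending(cpu))
			return cpu;
	}

	/* All candidates have softirq work pending; fall back to the first. */
	return cpumask_first(candidates);
}

Because the read is unlocked, the chosen CPU may start softirq processing immediately
afterwards, so any caller like this would have to treat the result as a best-effort
hint rather than a guarantee, which matches the "inherently racy" caveat in the
commit message.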