>From wagi@monom.org Fri Feb 19 00:51:49 2016
From: Daniel Wagner <wagi@monom.org>
To: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org
Cc: Marcelo Tosatti, Paolo Bonzini, "Paul E. McKenney", Paul Gortmaker,
    "Peter Zijlstra (Intel)", Thomas Gleixner, Steven Rostedt, Boqun Feng,
    Daniel Wagner
Subject: [PATCH tip v8 0/5] Simple wait queue support
Date: Fri, 19 Feb 2016 09:46:36 +0100
Message-Id: <1455871601-27484-1-git-send-email-wagi@monom.org>

From: Daniel Wagner <wagi@monom.org>

Hi,

This version of the series got some compile-time testing. I tried to
weed out the incompatible-pointer-types warnings in mainline. In the
last version I included two patches to fix those warnings. In the
meantime these patches are either already in mainline or on their way. I
found a couple more of them and sent out fixes. Overall it looks quite
good on that front, so I don't think -Werror=incompatible-pointer-types
will be a big disaster. Famous last words...

I was able to build alpha, arm32, arm64, cris10, frv, ia64, mips32,
mips64, m68k, ppc64, s390, sparc32, sparc64, um_x86_64 and x86_64 with
the toolchains I got. There are a couple of missing architectures
though, such as avr32, blackfin, cris32, m32r, parisc, parisc64, ppc32,
sh32, sh64, tile and xtensa, for which I have toolchains but they are
broken in some way beyond easy fixing.
These patches are against

  tip/sched/core 3223d052b79eb9b620d170584c417d60a8bfd649

also available as git tree:

  git://git.kernel.org/pub/scm/linux/kernel/git/wagi/linux.git tip-swait

cheers,
daniel

changes since v7:
 - dropped random fixes for incompatible-pointer-types warnings
 - sent out patches for fixing incompatible-pointer-types warnings

changes since v6:
 - fixed a couple of incompatible-pointer-types errors
 - fixed a missing KVM ARM wait -> swait update

changes since v5:
 - unconditionally add -Werror=incompatible-pointer-types
 - updated KVM statistics in commit message
 - rebased on tip/sched/core
 - added Acked-by from PeterZ

changes since v4:
 - replaced patch #2, which tried to force the compiler to exit with an
   error by using compile-time assertion type check macros. Instead use
   -Werror=incompatible-pointer-types to tell the compiler to barf
   loudly.
 - fixed wrong API usage in patch 4, as reported by Boqun.

changes since v3:
 - rebased on tip/sched/core (the KVM bits have changed slightly)
 - added compile-time type check assertion
 - added non-lazy version of swake_up_locked()

changes since v2:
 - rebased again on tip/master. The patches apply cleanly on v4.3-rc6
   too.
 - fixed up mips
 - reordered patches to avoid a lockdep warning when bisecting.
 - removed unnecessary initialization of rsp->rda in rcu_init_one().

changes since v1 (PATCH v0):
 - rebased and fixed some typos found by cross building for s390, ARM
   and powerpc. For some unknown reason I didn't catch them last time.
 - dropped the completion patches because it is not clear yet how to
   handle complete_all() calls in hard-irq/atomic contexts and
   swake_up_all().

changes since v0 (RFC v0):
 - promoted the series from RFC to PATCH state
 - fixed a few fallouts found by build-all and some cross compilers
   such as ARM, PowerPC and s390.
 - added the simple waitqueue transformation for KVM from -rt,
   including some numbers requested by Paolo.
 - added a commit message to PeterZ's patch. Hope he likes it.
[I got the numbering wrong in v1, so instead of 'PATCH v1' you find it
as the 'PATCH v0' series]

v7: https://lkml.org/lkml/2016/1/29/305
v6: https://lkml.org/lkml/2016/1/28/462
v5: https://lkml.org/lkml/2015/11/30/318
v4: https://lwn.net/Articles/665655/
v3: https://lwn.net/Articles/661415/
v2: https://lwn.net/Articles/660628/
v1: https://lwn.net/Articles/656942/
v0: https://lwn.net/Articles/653586/

Daniel Wagner (2):
  kbuild: Add option to turn incompatible pointer check into error
  rcu: Do not call rcu_nocb_gp_cleanup() while holding rnp->lock

Marcelo Tosatti (1):
  KVM: use simple waitqueue for vcpu->wq

Paul Gortmaker (1):
  rcu: use simple wait queues where possible in rcutree

Peter Zijlstra (Intel) (1):
  wait.[ch]: Introduce the simple waitqueue (swait) implementation

 Makefile                            |   3 +
 arch/arm/kvm/arm.c                  |   8 +-
 arch/arm/kvm/psci.c                 |   4 +-
 arch/mips/kvm/mips.c                |   8 +-
 arch/powerpc/include/asm/kvm_host.h |   4 +-
 arch/powerpc/kvm/book3s_hv.c        |  23 +++--
 arch/s390/include/asm/kvm_host.h    |   2 +-
 arch/s390/kvm/interrupt.c           |   4 +-
 arch/x86/kvm/lapic.c                |   6 +-
 include/linux/kvm_host.h            |   5 +-
 include/linux/swait.h               | 172 ++++++++++++++++++++++++++++++++++++
 kernel/rcu/tree.c                   |  24 ++---
 kernel/rcu/tree.h                   |  12 +--
 kernel/rcu/tree_plugin.h            |  32 ++++---
 kernel/sched/Makefile               |   2 +-
 kernel/sched/swait.c                | 123 ++++++++++++++++++++++++++
 virt/kvm/async_pf.c                 |   4 +-
 virt/kvm/kvm_main.c                 |  17 ++--
 18 files changed, 382 insertions(+), 71 deletions(-)
 create mode 100644 include/linux/swait.h
 create mode 100644 kernel/sched/swait.c

-- 
2.5.0
>From wagi@monom.org Fri Feb 19 00:51:51 2016
From: Daniel Wagner <wagi@monom.org>
To: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org
Cc: Marcelo Tosatti, Paolo Bonzini, "Paul E. McKenney", Paul Gortmaker,
    "Peter Zijlstra (Intel)", Thomas Gleixner, Steven Rostedt, Boqun Feng,
    Daniel Wagner
Subject: [PATCH v8 1/5] wait.[ch]: Introduce the simple waitqueue (swait)
 implementation
Date: Fri, 19 Feb 2016 09:46:37 +0100
Message-Id: <1455871601-27484-2-git-send-email-wagi@monom.org>
In-Reply-To: <1455871601-27484-1-git-send-email-wagi@monom.org>
References: <1455871601-27484-1-git-send-email-wagi@monom.org>

From: "Peter Zijlstra (Intel)"

The existing wait queue support has support for custom wake-up
callbacks, wake flags, a wake key (passed to the callback) and exclusive
flags that allow wakers to be tagged as exclusive, for limiting the
number of wakers.

In a lot of cases, none of these features are used, and hence we can
benefit from a slimmed-down version that lowers memory overhead and
reduces runtime overhead.

The concept originated in -rt, where waitqueues are a constant source of
trouble, as we cannot convert the head lock to a raw spinlock due to
fancy and long-lasting callbacks.

With the removal of custom callbacks, we can use a raw lock for queue
list manipulations, hence allowing the simple wait support to be used
in -rt.
Signed-off-by: Daniel Wagner
Acked-by: Peter Zijlstra (Intel)
Originally-by: Thomas Gleixner
Cc: Paul Gortmaker
[Patch is from PeterZ, which is based on Thomas' version. Commit message
is written by Paul G.
Daniel:
 - Fixed some compile issues.
 - Added non-lazy implementation of swake_up_locked() as suggested by
   Boqun Feng.]
---
 include/linux/swait.h | 172 ++++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/Makefile |   2 +-
 kernel/sched/swait.c  | 123 ++++++++++++++++++++++++++++++++++++
 3 files changed, 296 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/swait.h
 create mode 100644 kernel/sched/swait.c

diff --git a/include/linux/swait.h b/include/linux/swait.h
new file mode 100644
index 0000000..c1f9c62
--- /dev/null
+++ b/include/linux/swait.h
@@ -0,0 +1,172 @@
+#ifndef _LINUX_SWAIT_H
+#define _LINUX_SWAIT_H
+
+#include <linux/list.h>
+#include <linux/stddef.h>
+#include <linux/spinlock.h>
+#include <asm/current.h>
+
+/*
+ * Simple wait queues
+ *
+ * While these are very similar to the other/complex wait queues (wait.h) the
+ * most important difference is that the simple waitqueue allows for
+ * deterministic behaviour -- IOW it has strictly bounded IRQ and lock hold
+ * times.
+ *
+ * In order to make this so, we had to drop a fair number of features of the
+ * other waitqueue code; notably:
+ *
+ *  - mixing INTERRUPTIBLE and UNINTERRUPTIBLE sleeps on the same waitqueue;
+ *    all wakeups are TASK_NORMAL in order to avoid O(n) lookups for the right
+ *    sleeper state.
+ *
+ *  - the exclusive mode; because this requires preserving the list order
+ *    and this is hard.
+ *
+ *  - custom wake functions; because you cannot give any guarantees about
+ *    random code.
+ *
+ * As a side effect of this; the data structures are slimmer.
+ *
+ * One would recommend using this wait queue where possible.
+ */
+
+struct task_struct;
+
+struct swait_queue_head {
+	raw_spinlock_t		lock;
+	struct list_head	task_list;
+};
+
+struct swait_queue {
+	struct task_struct	*task;
+	struct list_head	task_list;
+};
+
+#define __SWAITQUEUE_INITIALIZER(name) {			\
+	.task		= current,				\
+	.task_list	= LIST_HEAD_INIT((name).task_list),	\
+}
+
+#define DECLARE_SWAITQUEUE(name)				\
+	struct swait_queue name = __SWAITQUEUE_INITIALIZER(name)
+
+#define __SWAIT_QUEUE_HEAD_INITIALIZER(name) {			\
+	.lock		= __RAW_SPIN_LOCK_UNLOCKED(name.lock),	\
+	.task_list	= LIST_HEAD_INIT((name).task_list),	\
+}
+
+#define DECLARE_SWAIT_QUEUE_HEAD(name)				\
+	struct swait_queue_head name = __SWAIT_QUEUE_HEAD_INITIALIZER(name)
+
+extern void __init_swait_queue_head(struct swait_queue_head *q, const char *name,
+				    struct lock_class_key *key);
+
+#define init_swait_queue_head(q)				\
+	do {							\
+		static struct lock_class_key __key;		\
+		__init_swait_queue_head((q), #q, &__key);	\
+	} while (0)
+
+#ifdef CONFIG_LOCKDEP
+# define __SWAIT_QUEUE_HEAD_INIT_ONSTACK(name)			\
+	({ init_swait_queue_head(&name); name; })
+# define DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(name)			\
+	struct swait_queue_head name = __SWAIT_QUEUE_HEAD_INIT_ONSTACK(name)
+#else
+# define DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(name)			\
+	DECLARE_SWAIT_QUEUE_HEAD(name)
+#endif
+
+static inline int swait_active(struct swait_queue_head *q)
+{
+	return !list_empty(&q->task_list);
+}
+
+extern void swake_up(struct swait_queue_head *q);
+extern void swake_up_all(struct swait_queue_head *q);
+extern void swake_up_locked(struct swait_queue_head *q);
+
+extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
+extern void prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait, int state);
+extern long prepare_to_swait_event(struct swait_queue_head *q, struct swait_queue *wait, int state);
+
+extern void __finish_swait(struct swait_queue_head *q, struct swait_queue *wait);
+extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait);
+
+/* as per ___wait_event() but for swait, therefore "exclusive == 0" */
+#define ___swait_event(wq, condition, state, ret, cmd)			\
+({									\
+	struct swait_queue __wait;					\
+	long __ret = ret;						\
+									\
+	INIT_LIST_HEAD(&__wait.task_list);				\
+	for (;;) {							\
+		long __int = prepare_to_swait_event(&wq, &__wait, state);\
+									\
+		if (condition)						\
+			break;						\
+									\
+		if (___wait_is_interruptible(state) && __int) {		\
+			__ret = __int;					\
+			break;						\
+		}							\
+									\
+		cmd;							\
+	}								\
+	finish_swait(&wq, &__wait);					\
+	__ret;								\
+})
+
+#define __swait_event(wq, condition)					\
+	(void)___swait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0,	\
+			    schedule())
+
+#define swait_event(wq, condition)					\
+do {									\
+	if (condition)							\
+		break;							\
+	__swait_event(wq, condition);					\
+} while (0)
+
+#define __swait_event_timeout(wq, condition, timeout)			\
+	___swait_event(wq, ___wait_cond_timeout(condition),		\
+		       TASK_UNINTERRUPTIBLE, timeout,			\
+		       __ret = schedule_timeout(__ret))
+
+#define swait_event_timeout(wq, condition, timeout)			\
+({									\
+	long __ret = timeout;						\
+	if (!___wait_cond_timeout(condition))				\
+		__ret = __swait_event_timeout(wq, condition, timeout);	\
+	__ret;								\
+})
+
+#define __swait_event_interruptible(wq, condition)			\
+	___swait_event(wq, condition, TASK_INTERRUPTIBLE, 0,		\
+		       schedule())
+
+#define swait_event_interruptible(wq, condition)			\
+({									\
+	int __ret = 0;							\
+	if (!(condition))						\
+		__ret = __swait_event_interruptible(wq, condition);	\
+	__ret;								\
+})
+
+#define __swait_event_interruptible_timeout(wq, condition, timeout)	\
+	___swait_event(wq, ___wait_cond_timeout(condition),		\
+		       TASK_INTERRUPTIBLE, timeout,			\
+		       __ret = schedule_timeout(__ret))
+
+#define swait_event_interruptible_timeout(wq, condition, timeout)	\
+({									\
+	long __ret = timeout;						\
+	if (!___wait_cond_timeout(condition))				\
+		__ret = __swait_event_interruptible_timeout(wq,		\
+						condition, timeout);	\
+	__ret;								\
+})
+
+#endif /* _LINUX_SWAIT_H */
diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index 6768797..7d4cba2 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -13,7 +13,7 @@ endif
 obj-y += core.o loadavg.o clock.o cputime.o
 obj-y += idle_task.o fair.o rt.o deadline.o stop_task.o
-obj-y += wait.o completion.o idle.o
+obj-y += wait.o swait.o completion.o idle.o
 obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o
 obj-$(CONFIG_SCHED_AUTOGROUP) += auto_group.o
 obj-$(CONFIG_SCHEDSTATS) += stats.o
diff --git a/kernel/sched/swait.c b/kernel/sched/swait.c
new file mode 100644
index 0000000..82f0dff
--- /dev/null
+++ b/kernel/sched/swait.c
@@ -0,0 +1,123 @@
+#include <linux/sched.h>
+#include <linux/swait.h>
+
+void __init_swait_queue_head(struct swait_queue_head *q, const char *name,
+			     struct lock_class_key *key)
+{
+	raw_spin_lock_init(&q->lock);
+	lockdep_set_class_and_name(&q->lock, key, name);
+	INIT_LIST_HEAD(&q->task_list);
+}
+EXPORT_SYMBOL(__init_swait_queue_head);
+
+/*
+ * The thing about the wake_up_state() return value; I think we can ignore it.
+ *
+ * If for some reason it would return 0, that means the previously waiting
+ * task is already running, so it will observe condition true (or has already).
+ */
+void swake_up_locked(struct swait_queue_head *q)
+{
+	struct swait_queue *curr;
+
+	if (list_empty(&q->task_list))
+		return;
+
+	curr = list_first_entry(&q->task_list, typeof(*curr), task_list);
+	wake_up_process(curr->task);
+	list_del_init(&curr->task_list);
+}
+EXPORT_SYMBOL(swake_up_locked);
+
+void swake_up(struct swait_queue_head *q)
+{
+	unsigned long flags;
+
+	if (!swait_active(q))
+		return;
+
+	raw_spin_lock_irqsave(&q->lock, flags);
+	swake_up_locked(q);
+	raw_spin_unlock_irqrestore(&q->lock, flags);
+}
+EXPORT_SYMBOL(swake_up);
+
+/*
+ * Does not allow usage from IRQ disabled, since we must be able to
+ * release IRQs to guarantee bounded hold time.
+ */
+void swake_up_all(struct swait_queue_head *q)
+{
+	struct swait_queue *curr;
+	LIST_HEAD(tmp);
+
+	if (!swait_active(q))
+		return;
+
+	raw_spin_lock_irq(&q->lock);
+	list_splice_init(&q->task_list, &tmp);
+	while (!list_empty(&tmp)) {
+		curr = list_first_entry(&tmp, typeof(*curr), task_list);
+
+		wake_up_state(curr->task, TASK_NORMAL);
+		list_del_init(&curr->task_list);
+
+		if (list_empty(&tmp))
+			break;
+
+		raw_spin_unlock_irq(&q->lock);
+		raw_spin_lock_irq(&q->lock);
+	}
+	raw_spin_unlock_irq(&q->lock);
+}
+EXPORT_SYMBOL(swake_up_all);
+
+void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait)
+{
+	wait->task = current;
+	if (list_empty(&wait->task_list))
+		list_add(&wait->task_list, &q->task_list);
+}
+
+void prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait, int state)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&q->lock, flags);
+	__prepare_to_swait(q, wait);
+	set_current_state(state);
+	raw_spin_unlock_irqrestore(&q->lock, flags);
+}
+EXPORT_SYMBOL(prepare_to_swait);
+
+long prepare_to_swait_event(struct swait_queue_head *q, struct swait_queue *wait, int state)
+{
+	if (signal_pending_state(state, current))
+		return -ERESTARTSYS;
+
+	prepare_to_swait(q, wait, state);
+
+	return 0;
+}
+EXPORT_SYMBOL(prepare_to_swait_event);
+
+void __finish_swait(struct swait_queue_head *q, struct swait_queue *wait)
+{
+	__set_current_state(TASK_RUNNING);
+	if (!list_empty(&wait->task_list))
+		list_del_init(&wait->task_list);
+}
+
+void finish_swait(struct swait_queue_head *q, struct swait_queue *wait)
+{
+	unsigned long flags;
+
+	__set_current_state(TASK_RUNNING);
+
+	if (!list_empty_careful(&wait->task_list)) {
+		raw_spin_lock_irqsave(&q->lock, flags);
+		list_del_init(&wait->task_list);
+		raw_spin_unlock_irqrestore(&q->lock, flags);
+	}
+}
+EXPORT_SYMBOL(finish_swait);
-- 
2.5.0
>From wagi@monom.org Fri Feb 19 00:51:50 2016
From: Daniel Wagner <wagi@monom.org>
To: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org
Cc: Marcelo Tosatti, Paolo Bonzini, "Paul E. McKenney", Paul Gortmaker,
    "Peter Zijlstra (Intel)", Thomas Gleixner, Steven Rostedt, Boqun Feng,
    Daniel Wagner
Subject: [PATCH v8 2/5] kbuild: Add option to turn incompatible pointer
 check into error
Date: Fri, 19 Feb 2016 09:46:38 +0100
Message-Id: <1455871601-27484-3-git-send-email-wagi@monom.org>
In-Reply-To: <1455871601-27484-1-git-send-email-wagi@monom.org>
References: <1455871601-27484-1-git-send-email-wagi@monom.org>

From: Daniel Wagner <wagi@monom.org>

With the introduction of the simple wait API we have two very similar
APIs in the kernel, e.g. wake_up() and swake_up() are only one character
apart. Although the compiler will happily warn about the wrong usage, it
keeps on going and even links the kernel. Thomas and Peter would rather
see such misuses reported as errors early on.

In a first attempt we tried to wrap all swait and wait calls into macros
with a compile-time type assertion. The result was pretty ugly and was
not able to catch all wrong usages: woken_wake_function(),
autoremove_wake_function() and wake_bit_function() are assigned as
function pointers, and wrapping them in a macro is not possible.
Prefixing them with '_' was not a real option either, because there are
users in the kernel which use them under those names as well.
All in all this attempt looked too intrusive and too ugly. An
alternative is to turn the incompatible pointer type check into an
error, which catches wrong type uses -- and obviously not only the
swait/wait ones. That isn't a bad thing either.

Signed-off-by: Daniel Wagner
Acked-by: Peter Zijlstra (Intel)
Cc: Thomas Gleixner
---
 Makefile | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/Makefile b/Makefile
index 6c1a3c2..4513509 100644
--- a/Makefile
+++ b/Makefile
@@ -773,6 +773,9 @@ KBUILD_CFLAGS += $(call cc-option,-Werror=strict-prototypes)
 
 # Prohibit date/time macros, which would make the build non-deterministic
 KBUILD_CFLAGS += $(call cc-option,-Werror=date-time)
 
+# enforce correct pointer usage
+KBUILD_CFLAGS += $(call cc-option,-Werror=incompatible-pointer-types)
+
 # use the deterministic mode of AR if available
 KBUILD_ARFLAGS := $(call ar-option,D)
 
-- 
2.5.0
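[Editor's note: the effect of the new flag can be reproduced outside the kernel tree with any compiler that understands -Werror=incompatible-pointer-types (gcc is assumed here; the struct and function names are made up for the demo, mimicking the wake_up()/swake_up() mixup):]

```shell
cat > /tmp/swait_mixup.c <<'EOF'
/* Two distinct, incompatible queue head types, as in wait.h vs. swait.h. */
struct swait_queue_head;
struct wait_queue_head;

void swake_up(struct swait_queue_head *q);

/* Passing the "complex" head to the simple-wait API: one character off. */
void broken(struct wait_queue_head *wq)
{
	swake_up(wq);
}
EOF

# Plain compile: only a warning, the object file is still produced.
gcc -c -o /tmp/swait_mixup.o /tmp/swait_mixup.c && echo "warned but built"

# With the flag from this patch, the same mistake stops the build.
gcc -c -Werror=incompatible-pointer-types \
    -o /tmp/swait_mixup.o /tmp/swait_mixup.c || echo "build rejected"
```

The first invocation succeeds despite the type mismatch, which is exactly the "keeps on going and even links the kernel" behaviour the commit message describes; the second turns it into a hard error.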
From: Daniel Wagner
To: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org
Cc: Marcelo Tosatti, Paolo Bonzini, "Paul E. McKenney", Paul Gortmaker, "Peter Zijlstra (Intel)", Thomas Gleixner, Steven Rostedt, Boqun Feng, Daniel Wagner
Subject: [PATCH v8 3/5] KVM: use simple waitqueue for vcpu->wq
Date: Fri, 19 Feb 2016 09:46:39 +0100
Message-Id: <1455871601-27484-4-git-send-email-wagi@monom.org>
In-Reply-To: <1455871601-27484-1-git-send-email-wagi@monom.org>
References: <1455871601-27484-1-git-send-email-wagi@monom.org>

From: Marcelo Tosatti

The problem: On -rt, an emulated LAPIC timer instance has the following path:

1) hard interrupt
2) ksoftirqd is scheduled
3) ksoftirqd wakes up vcpu thread
4) vcpu thread is scheduled

This extra context switch introduces unnecessary latency in the LAPIC path for a KVM guest.

The solution: Allow waking up the vcpu thread from hardirq context, thus avoiding the need for ksoftirqd to be scheduled. Normal waitqueues make use of spinlocks, which on -RT are sleepable locks. Therefore, waking up a waitqueue waiter involves locking a sleeping lock, which is not allowed from hard interrupt context.

cyclictest command line: This patch reduces the average latency in my tests from 14us to 11us.

Daniel writes: Paolo asked for numbers from the kvm-unit-tests/tscdeadline_latency benchmark on mainline. The test was run 1000 times on tip/sched/core 4.4.0-rc8-01134-g0905f04:

  ./x86-run x86/tscdeadline_latency.flat -cpu host

with idle=poll. The test does not seem to deliver really stable numbers, though most of them are smaller.

Paolo writes: "Anything above ~10000 cycles means that the host went to C1 or lower---the number means more or less nothing in that case. The mean shows an improvement indeed."
Before:

                min             max            mean            std
count   1000.000000     1000.000000     1000.000000     1000.000000
mean    5162.596000     2019270.084000  5824.491541     20681.645558
std       75.431231      622607.723969    89.575700      6492.272062
min     4466.000000       23928.000000  5537.926500       585.864966
25%     5163.000000     1613252.750000  5790.132275     16683.745433
50%     5175.000000     2281919.000000  5834.654000     23151.990026
75%     5190.000000     2382865.750000  5861.412950     24148.206168
max     5228.000000     4175158.000000  6254.827300     46481.048691

After:

                min             max            mean            std
count   1000.000000     1000.00000      1000.000000     1000.000000
mean    5143.511000     2076886.10300   5813.312474     21207.357565
std       77.668322      610413.09583     86.541500      6331.915127
min     4427.000000       25103.00000   5529.756600       559.187707
25%     5148.000000     1691272.75000   5784.889825     17473.518244
50%     5160.000000     2308328.50000   5832.025000     23464.837068
75%     5172.000000     2393037.75000   5853.177675     24223.969976
max     5222.000000     3922458.00000   6186.720500     42520.379830

[Patch was originally based on the swait implementation found in the -rt tree. Daniel ported it to mainline's version and gathered the benchmark numbers for the tscdeadline_latency test.]
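The wake-from-hardirq pattern this patch relies on (check swait_active(), then swake_up(), with the queue protected by a non-sleeping lock) can be sketched as a userspace model. The struct fields and function bodies below are illustrative stand-ins so the control flow can be exercised without kernel headers; they are not the kernel implementation.

```c
#include <stdbool.h>

/* Userspace model of a simple wait queue.  In the kernel, the waiter
 * list sits under a raw spinlock, which is why swake_up() is legal
 * from hard interrupt context even on -rt.  Here the queue is reduced
 * to two counters. */
struct swait_queue_head {
    int nr_waiters;   /* stand-in for the list of blocked tasks */
    int wakeups;      /* number of wakeups delivered */
};

static inline void init_swait_queue_head(struct swait_queue_head *q)
{
    q->nr_waiters = 0;
    q->wakeups = 0;
}

static inline bool swait_active(struct swait_queue_head *q)
{
    return q->nr_waiters != 0;
}

/* Like the kernel's swake_up(): wake at most one waiter. */
static inline void swake_up(struct swait_queue_head *q)
{
    if (q->nr_waiters > 0) {
        q->nr_waiters--;
        q->wakeups++;
    }
}

/* Shape of the converted call sites (apic_timer_expired(),
 * kvm_vcpu_kick(), etc.): test for a waiter, then wake it. */
static inline void timer_expired(struct swait_queue_head *vcpu_wq)
{
    if (swait_active(vcpu_wq))
        swake_up(vcpu_wq);
}
```

The point of the conversion is that this sequence may now run directly in the timer interrupt, instead of being deferred to ksoftirqd as the old spinlock-based waitqueue forced on -rt.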
Signed-off-by: Daniel Wagner Acked-by: Peter Zijlstra (Intel) Cc: Marcelo Tosatti Cc: Paolo Bonzini --- arch/arm/kvm/arm.c | 8 ++++---- arch/arm/kvm/psci.c | 4 ++-- arch/mips/kvm/mips.c | 8 ++++---- arch/powerpc/include/asm/kvm_host.h | 4 ++-- arch/powerpc/kvm/book3s_hv.c | 23 +++++++++++------------ arch/s390/include/asm/kvm_host.h | 2 +- arch/s390/kvm/interrupt.c | 4 ++-- arch/x86/kvm/lapic.c | 6 +++--- include/linux/kvm_host.h | 5 +++-- virt/kvm/async_pf.c | 4 ++-- virt/kvm/kvm_main.c | 17 ++++++++--------- 11 files changed, 42 insertions(+), 43 deletions(-) diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c index dda1959..08e49c4 100644 --- a/arch/arm/kvm/arm.c +++ b/arch/arm/kvm/arm.c @@ -506,18 +506,18 @@ static void kvm_arm_resume_guest(struct kvm *kvm) struct kvm_vcpu *vcpu; kvm_for_each_vcpu(i, vcpu, kvm) { - wait_queue_head_t *wq = kvm_arch_vcpu_wq(vcpu); + struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu); vcpu->arch.pause = false; - wake_up_interruptible(wq); + swake_up(wq); } } static void vcpu_sleep(struct kvm_vcpu *vcpu) { - wait_queue_head_t *wq = kvm_arch_vcpu_wq(vcpu); + struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu); - wait_event_interruptible(*wq, ((!vcpu->arch.power_off) && + swait_event_interruptible(*wq, ((!vcpu->arch.power_off) && (!vcpu->arch.pause))); } diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c index a9b3b90..c2b1315 100644 --- a/arch/arm/kvm/psci.c +++ b/arch/arm/kvm/psci.c @@ -70,7 +70,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu) { struct kvm *kvm = source_vcpu->kvm; struct kvm_vcpu *vcpu = NULL; - wait_queue_head_t *wq; + struct swait_queue_head *wq; unsigned long cpu_id; unsigned long context_id; phys_addr_t target_pc; @@ -119,7 +119,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu) smp_mb(); /* Make sure the above is visible */ wq = kvm_arch_vcpu_wq(vcpu); - wake_up_interruptible(wq); + swake_up(wq); return PSCI_RET_SUCCESS; } diff --git a/arch/mips/kvm/mips.c 
b/arch/mips/kvm/mips.c index 8bc3977..341f6a1 100644 --- a/arch/mips/kvm/mips.c +++ b/arch/mips/kvm/mips.c @@ -445,8 +445,8 @@ int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, dvcpu->arch.wait = 0; - if (waitqueue_active(&dvcpu->wq)) - wake_up_interruptible(&dvcpu->wq); + if (swait_active(&dvcpu->wq)) + swake_up(&dvcpu->wq); return 0; } @@ -1174,8 +1174,8 @@ static void kvm_mips_comparecount_func(unsigned long data) kvm_mips_callbacks->queue_timer_int(vcpu); vcpu->arch.wait = 0; - if (waitqueue_active(&vcpu->wq)) - wake_up_interruptible(&vcpu->wq); + if (swait_active(&vcpu->wq)) + swake_up(&vcpu->wq); } /* low level hrtimer wake routine */ diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index 9d08d8c..c98afa5 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -289,7 +289,7 @@ struct kvmppc_vcore { struct list_head runnable_threads; struct list_head preempt_list; spinlock_t lock; - wait_queue_head_t wq; + struct swait_queue_head wq; spinlock_t stoltb_lock; /* protects stolen_tb and preempt_tb */ u64 stolen_tb; u64 preempt_tb; @@ -629,7 +629,7 @@ struct kvm_vcpu_arch { u8 prodded; u32 last_inst; - wait_queue_head_t *wqp; + struct swait_queue_head *wqp; struct kvmppc_vcore *vcore; int ret; int trap; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index baeddb0..f1187bb 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -114,11 +114,11 @@ static bool kvmppc_ipi_thread(int cpu) static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *vcpu) { int cpu; - wait_queue_head_t *wqp; + struct swait_queue_head *wqp; wqp = kvm_arch_vcpu_wq(vcpu); - if (waitqueue_active(wqp)) { - wake_up_interruptible(wqp); + if (swait_active(wqp)) { + swake_up(wqp); ++vcpu->stat.halt_wakeup; } @@ -701,8 +701,8 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) tvcpu->arch.prodded = 1; smp_mb(); if (vcpu->arch.ceded) { - if (waitqueue_active(&vcpu->wq)) { - 
wake_up_interruptible(&vcpu->wq); + if (swait_active(&vcpu->wq)) { + swake_up(&vcpu->wq); vcpu->stat.halt_wakeup++; } } @@ -1459,7 +1459,7 @@ static struct kvmppc_vcore *kvmppc_vcore_create(struct kvm *kvm, int core) INIT_LIST_HEAD(&vcore->runnable_threads); spin_lock_init(&vcore->lock); spin_lock_init(&vcore->stoltb_lock); - init_waitqueue_head(&vcore->wq); + init_swait_queue_head(&vcore->wq); vcore->preempt_tb = TB_NIL; vcore->lpcr = kvm->arch.lpcr; vcore->first_vcpuid = core * threads_per_subcore; @@ -2531,10 +2531,9 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc) { struct kvm_vcpu *vcpu; int do_sleep = 1; + DECLARE_SWAITQUEUE(wait); - DEFINE_WAIT(wait); - - prepare_to_wait(&vc->wq, &wait, TASK_INTERRUPTIBLE); + prepare_to_swait(&vc->wq, &wait, TASK_INTERRUPTIBLE); /* * Check one last time for pending exceptions and ceded state after @@ -2548,7 +2547,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc) } if (!do_sleep) { - finish_wait(&vc->wq, &wait); + finish_swait(&vc->wq, &wait); return; } @@ -2556,7 +2555,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc) trace_kvmppc_vcore_blocked(vc, 0); spin_unlock(&vc->lock); schedule(); - finish_wait(&vc->wq, &wait); + finish_swait(&vc->wq, &wait); spin_lock(&vc->lock); vc->vcore_state = VCORE_INACTIVE; trace_kvmppc_vcore_blocked(vc, 1); @@ -2612,7 +2611,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu) kvmppc_start_thread(vcpu, vc); trace_kvm_guest_enter(vcpu); } else if (vc->vcore_state == VCORE_SLEEPING) { - wake_up(&vc->wq); + swake_up(&vc->wq); } } diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h index 8959ebb..b0c8ad0 100644 --- a/arch/s390/include/asm/kvm_host.h +++ b/arch/s390/include/asm/kvm_host.h @@ -467,7 +467,7 @@ struct kvm_s390_irq_payload { struct kvm_s390_local_interrupt { spinlock_t lock; struct kvm_s390_float_interrupt *float_int; - wait_queue_head_t *wq; + struct swait_queue_head *wq; atomic_t *cpuflags; 
DECLARE_BITMAP(sigp_emerg_pending, KVM_MAX_VCPUS); struct kvm_s390_irq_payload irq; diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c index f88ca72..9ffc732 100644 --- a/arch/s390/kvm/interrupt.c +++ b/arch/s390/kvm/interrupt.c @@ -966,13 +966,13 @@ no_timer: void kvm_s390_vcpu_wakeup(struct kvm_vcpu *vcpu) { - if (waitqueue_active(&vcpu->wq)) { + if (swait_active(&vcpu->wq)) { /* * The vcpu gave up the cpu voluntarily, mark it as a good * yield-candidate. */ vcpu->preempted = true; - wake_up_interruptible(&vcpu->wq); + swake_up(&vcpu->wq); vcpu->stat.halt_wakeup++; } } diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index 36591fa..3a045f3 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -1195,7 +1195,7 @@ static void apic_update_lvtt(struct kvm_lapic *apic) static void apic_timer_expired(struct kvm_lapic *apic) { struct kvm_vcpu *vcpu = apic->vcpu; - wait_queue_head_t *q = &vcpu->wq; + struct swait_queue_head *q = &vcpu->wq; struct kvm_timer *ktimer = &apic->lapic_timer; if (atomic_read(&apic->lapic_timer.pending)) @@ -1204,8 +1204,8 @@ static void apic_timer_expired(struct kvm_lapic *apic) atomic_inc(&apic->lapic_timer.pending); kvm_set_pending_timer(vcpu); - if (waitqueue_active(q)) - wake_up_interruptible(q); + if (swait_active(q)) + swake_up(q); if (apic_lvtt_tscdeadline(apic)) ktimer->expired_tscdeadline = ktimer->tscdeadline; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 861f690..5276fe0 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -25,6 +25,7 @@ #include #include #include +#include #include #include @@ -218,7 +219,7 @@ struct kvm_vcpu { int fpu_active; int guest_fpu_loaded, guest_xcr0_loaded; unsigned char fpu_counter; - wait_queue_head_t wq; + struct swait_queue_head wq; struct pid *pid; int sigset_active; sigset_t sigset; @@ -782,7 +783,7 @@ static inline bool kvm_arch_has_assigned_device(struct kvm *kvm) } #endif -static inline wait_queue_head_t 
*kvm_arch_vcpu_wq(struct kvm_vcpu *vcpu) +static inline struct swait_queue_head *kvm_arch_vcpu_wq(struct kvm_vcpu *vcpu) { #ifdef __KVM_HAVE_ARCH_WQP return vcpu->arch.wqp; diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c index 3531599..73c1a2a 100644 --- a/virt/kvm/async_pf.c +++ b/virt/kvm/async_pf.c @@ -97,8 +97,8 @@ static void async_pf_execute(struct work_struct *work) * This memory barrier pairs with prepare_to_wait's set_current_state() */ smp_mb(); - if (waitqueue_active(&vcpu->wq)) - wake_up_interruptible(&vcpu->wq); + if (swait_active(&vcpu->wq)) + swake_up(&vcpu->wq); mmput(mm); kvm_put_kvm(vcpu->kvm); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index a11cfd2..f8417d0 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -216,8 +216,7 @@ int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id) vcpu->kvm = kvm; vcpu->vcpu_id = id; vcpu->pid = NULL; - vcpu->halt_poll_ns = 0; - init_waitqueue_head(&vcpu->wq); + init_swait_queue_head(&vcpu->wq); kvm_async_pf_vcpu_init(vcpu); vcpu->pre_pcpu = -1; @@ -1990,7 +1989,7 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu) void kvm_vcpu_block(struct kvm_vcpu *vcpu) { ktime_t start, cur; - DEFINE_WAIT(wait); + DECLARE_SWAITQUEUE(wait); bool waited = false; u64 block_ns; @@ -2015,7 +2014,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu) kvm_arch_vcpu_blocking(vcpu); for (;;) { - prepare_to_wait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE); + prepare_to_swait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE); if (kvm_vcpu_check_block(vcpu) < 0) break; @@ -2024,7 +2023,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu) schedule(); } - finish_wait(&vcpu->wq, &wait); + finish_swait(&vcpu->wq, &wait); cur = ktime_get(); kvm_arch_vcpu_unblocking(vcpu); @@ -2056,11 +2055,11 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu) { int me; int cpu = vcpu->cpu; - wait_queue_head_t *wqp; + struct swait_queue_head *wqp; wqp = kvm_arch_vcpu_wq(vcpu); - if (waitqueue_active(wqp)) { - wake_up_interruptible(wqp); + if 
(swait_active(wqp)) { + swake_up(wqp); ++vcpu->stat.halt_wakeup; } @@ -2161,7 +2160,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me) continue; if (vcpu == me) continue; - if (waitqueue_active(&vcpu->wq) && !kvm_arch_vcpu_runnable(vcpu)) + if (swait_active(&vcpu->wq) && !kvm_arch_vcpu_runnable(vcpu)) continue; if (!kvm_vcpu_eligible_for_directed_yield(vcpu)) continue; -- 2.5.0

>From wagi@monom.org Fri Feb 19 00:51:53 2016
From: Daniel Wagner
To: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org
Cc: Marcelo Tosatti, Paolo Bonzini, "Paul E.
McKenney", Paul Gortmaker, "Peter Zijlstra (Intel)", Thomas Gleixner, Steven Rostedt, Boqun Feng, Daniel Wagner
Subject: [PATCH v8 4/5] rcu: Do not call rcu_nocb_gp_cleanup() while holding rnp->lock
Date: Fri, 19 Feb 2016 09:46:40 +0100
Message-Id: <1455871601-27484-5-git-send-email-wagi@monom.org>
In-Reply-To: <1455871601-27484-1-git-send-email-wagi@monom.org>
References: <1455871601-27484-1-git-send-email-wagi@monom.org>

From: Daniel Wagner

rcu_nocb_gp_cleanup() is called while holding rnp->lock. Currently, this is okay because the wake_up_all() in rcu_nocb_gp_cleanup() will not enable the IRQs; lockdep is happy. By switching over to swait this is no longer true: swake_up_all() enables the IRQs while processing the waiters. __do_softirq() can then run and will eventually call rcu_process_callbacks(), which wants to grab rnp->lock. Let's move the rcu_nocb_gp_cleanup() call outside the lock before we switch over to swait. If we were to hold the rnp->lock and use swait, lockdep reports the following: ================================= [ INFO: inconsistent lock state ] 4.2.0-rc5-00025-g9a73ba0 #136 Not tainted --------------------------------- inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
rcu_preempt/8 [HC0[0]:SC0[0]:HE1:SE1] takes: (rcu_node_1){+.?...}, at: [] rcu_gp_kthread+0xb97/0xeb0 {IN-SOFTIRQ-W} state was registered at: [] __lock_acquire+0xd5f/0x21e0 [] lock_acquire+0xdf/0x2b0 [] _raw_spin_lock_irqsave+0x59/0xa0 [] rcu_process_callbacks+0x141/0x3c0 [] __do_softirq+0x14d/0x670 [] irq_exit+0x104/0x110 [] smp_apic_timer_interrupt+0x46/0x60 [] apic_timer_interrupt+0x70/0x80 [] rq_attach_root+0xa6/0x100 [] cpu_attach_domain+0x16d/0x650 [] build_sched_domains+0x942/0xb00 [] sched_init_smp+0x509/0x5c1 [] kernel_init_freeable+0x172/0x28f [] kernel_init+0xe/0xe0 [] ret_from_fork+0x3f/0x70 irq event stamp: 76 hardirqs last enabled at (75): [] _raw_spin_unlock_irq+0x30/0x60 hardirqs last disabled at (76): [] _raw_spin_lock_irq+0x1f/0x90 softirqs last enabled at (0): [] copy_process.part.26+0x602/0x1cf0 softirqs last disabled at (0): [< (null)>] (null) other info that might help us debug this: Possible unsafe locking scenario: CPU0 ---- lock(rcu_node_1); lock(rcu_node_1); *** DEADLOCK *** 1 lock held by rcu_preempt/8: #0: (rcu_node_1){+.?...}, at: [] rcu_gp_kthread+0xb97/0xeb0 stack backtrace: CPU: 0 PID: 8 Comm: rcu_preempt Not tainted 4.2.0-rc5-00025-g9a73ba0 #136 Hardware name: Dell Inc. PowerEdge R820/066N7P, BIOS 2.0.20 01/16/2014 0000000000000000 000000006d7e67d8 ffff881fb081fbd8 ffffffff818379e0 0000000000000000 ffff881fb0812a00 ffff881fb081fc38 ffffffff8110813b 0000000000000000 0000000000000001 ffff881f00000001 ffffffff8102fa4f Call Trace: [] dump_stack+0x4f/0x7b [] print_usage_bug+0x1db/0x1e0 [] ? save_stack_trace+0x2f/0x50 [] mark_lock+0x66d/0x6e0 [] ? check_usage_forwards+0x150/0x150 [] mark_held_locks+0x78/0xa0 [] ? _raw_spin_unlock_irq+0x30/0x60 [] trace_hardirqs_on_caller+0x168/0x220 [] trace_hardirqs_on+0xd/0x10 [] _raw_spin_unlock_irq+0x30/0x60 [] swake_up_all+0xb7/0xe0 [] rcu_gp_kthread+0xab1/0xeb0 [] ? trace_hardirqs_on_caller+0xff/0x220 [] ? _raw_spin_unlock_irq+0x41/0x60 [] ? rcu_barrier+0x20/0x20 [] kthread+0x104/0x120 [] ? 
_raw_spin_unlock_irq+0x30/0x60 [] ? kthread_create_on_node+0x260/0x260 [] ret_from_fork+0x3f/0x70 [] ? kthread_create_on_node+0x260/0x260 Signed-off-by: Daniel Wagner Acked-by: Peter Zijlstra (Intel) Cc: "Paul E. McKenney" Cc: Thomas Gleixner --- kernel/rcu/tree.c | 4 +++- kernel/rcu/tree.h | 3 ++- kernel/rcu/tree_plugin.h | 16 +++++++++++++--- 3 files changed, 18 insertions(+), 5 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index e41dd41..8e8c6ec 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -1614,7 +1614,6 @@ static int rcu_future_gp_cleanup(struct rcu_state *rsp, struct rcu_node *rnp) int needmore; struct rcu_data *rdp = this_cpu_ptr(rsp->rda); - rcu_nocb_gp_cleanup(rsp, rnp); rnp->need_future_gp[c & 0x1] = 0; needmore = rnp->need_future_gp[(c + 1) & 0x1]; trace_rcu_future_gp(rnp, rdp, c, @@ -2010,6 +2009,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp) int nocb = 0; struct rcu_data *rdp; struct rcu_node *rnp = rcu_get_root(rsp); + wait_queue_head_t *sq; WRITE_ONCE(rsp->gp_activity, jiffies); raw_spin_lock_irq_rcu_node(rnp); @@ -2046,7 +2046,9 @@ static void rcu_gp_cleanup(struct rcu_state *rsp) needgp = __note_gp_changes(rsp, rnp, rdp) || needgp; /* smp_mb() provided by prior unlock-lock pair. 
*/ nocb += rcu_future_gp_cleanup(rsp, rnp); + sq = rcu_nocb_gp_get(rnp); raw_spin_unlock_irq(&rnp->lock); + rcu_nocb_gp_cleanup(sq); cond_resched_rcu_qs(); WRITE_ONCE(rsp->gp_activity, jiffies); rcu_gp_slow(rsp, gp_cleanup_delay); diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index 83360b4..10dedfb 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h @@ -621,7 +621,8 @@ static void zero_cpu_stall_ticks(struct rcu_data *rdp); static void increment_cpu_stall_ticks(void); static bool rcu_nocb_cpu_needs_barrier(struct rcu_state *rsp, int cpu); static void rcu_nocb_gp_set(struct rcu_node *rnp, int nrq); -static void rcu_nocb_gp_cleanup(struct rcu_state *rsp, struct rcu_node *rnp); +static wait_queue_head_t *rcu_nocb_gp_get(struct rcu_node *rnp); +static void rcu_nocb_gp_cleanup(wait_queue_head_t *sq); static void rcu_init_one_nocb(struct rcu_node *rnp); static bool __call_rcu_nocb(struct rcu_data *rdp, struct rcu_head *rhp, bool lazy, unsigned long flags); diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h index 9467a8b..631aff6 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -1811,9 +1811,9 @@ early_param("rcu_nocb_poll", parse_rcu_nocb_poll); * Wake up any no-CBs CPUs' kthreads that were waiting on the just-ended * grace period. 
*/ -static void rcu_nocb_gp_cleanup(struct rcu_state *rsp, struct rcu_node *rnp) +static void rcu_nocb_gp_cleanup(wait_queue_head_t *sq) { - wake_up_all(&rnp->nocb_gp_wq[rnp->completed & 0x1]); + wake_up_all(sq); } /* @@ -1829,6 +1829,11 @@ static void rcu_nocb_gp_set(struct rcu_node *rnp, int nrq) rnp->need_future_gp[(rnp->completed + 1) & 0x1] += nrq; } +static wait_queue_head_t *rcu_nocb_gp_get(struct rcu_node *rnp) +{ + return &rnp->nocb_gp_wq[rnp->completed & 0x1]; +} + static void rcu_init_one_nocb(struct rcu_node *rnp) { init_waitqueue_head(&rnp->nocb_gp_wq[0]); @@ -2502,7 +2507,7 @@ static bool rcu_nocb_cpu_needs_barrier(struct rcu_state *rsp, int cpu) return false; } -static void rcu_nocb_gp_cleanup(struct rcu_state *rsp, struct rcu_node *rnp) +static void rcu_nocb_gp_cleanup(wait_queue_head_t *sq) { } @@ -2510,6 +2515,11 @@ static void rcu_nocb_gp_set(struct rcu_node *rnp, int nrq) { } +static wait_queue_head_t *rcu_nocb_gp_get(struct rcu_node *rnp) +{ + return NULL; +} + static void rcu_init_one_nocb(struct rcu_node *rnp) { } -- 2.5.0

>From wagi@monom.org Fri Feb 19 00:51:51 2016
From: Daniel Wagner
To: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org
Cc: Marcelo Tosatti, Paolo Bonzini, "Paul E. McKenney", Paul Gortmaker, "Peter Zijlstra (Intel)", Thomas Gleixner, Steven Rostedt, Boqun Feng, Daniel Wagner
Subject: [PATCH v8 5/5] rcu: use simple wait queues where possible in rcutree
Date: Fri, 19 Feb 2016 09:46:41 +0100
Message-Id: <1455871601-27484-6-git-send-email-wagi@monom.org>
In-Reply-To: <1455871601-27484-1-git-send-email-wagi@monom.org>
References: <1455871601-27484-1-git-send-email-wagi@monom.org>

From: Paul Gortmaker

As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce proper blocking to no-CBs kthreads GP waits") the RCU subsystem started
making use of wait queues. Here we convert all additions of RCU wait queues to use simple wait queues, since they don't need the extra overhead of the full wait queue features. Originally this was done for RT kernels[1], since we would get things like...

BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
in_atomic(): 1, irqs_disabled(): 1, pid: 8, name: rcu_preempt
Pid: 8, comm: rcu_preempt Not tainted
Call Trace:
 [] __might_sleep+0xd0/0xf0
 [] rt_spin_lock+0x24/0x50
 [] __wake_up+0x36/0x70
 [] rcu_gp_kthread+0x4d2/0x680
 [] ? __init_waitqueue_head+0x50/0x50
 [] ? rcu_gp_fqs+0x80/0x80
 [] kthread+0xdb/0xe0
 [] ? finish_task_switch+0x52/0x100
 [] kernel_thread_helper+0x4/0x10
 [] ? __init_kthread_worker+0x60/0x60
 [] ? gs_change+0xb/0xb

...and hence simple wait queues were deployed on RT out of necessity (as simple wait uses a raw lock), but mainline might as well take advantage of the more streamlined support as well.

[1] This is a carry forward of work from v3.10-rt; the original conversion was by Thomas on an earlier -rt version, and Sebastian extended it to additional post-3.10 added RCU waiters; here I've added a commit log and unified the RCU changes into one, and uprev'd it to match mainline RCU.

Signed-off-by: Daniel Wagner
Acked-by: Peter Zijlstra (Intel)
Cc: Paul E.
McKenney Cc: Paul Gortmaker Cc: Thomas Gleixner --- kernel/rcu/tree.c | 22 +++++++++++----------- kernel/rcu/tree.h | 13 +++++++------ kernel/rcu/tree_plugin.h | 26 +++++++++++++------------- 3 files changed, 31 insertions(+), 30 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 8e8c6ec..9fd5b62 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -1634,7 +1634,7 @@ static void rcu_gp_kthread_wake(struct rcu_state *rsp) !READ_ONCE(rsp->gp_flags) || !rsp->gp_kthread) return; - wake_up(&rsp->gp_wq); + swake_up(&rsp->gp_wq); } /* @@ -2009,7 +2009,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp) int nocb = 0; struct rcu_data *rdp; struct rcu_node *rnp = rcu_get_root(rsp); - wait_queue_head_t *sq; + struct swait_queue_head *sq; WRITE_ONCE(rsp->gp_activity, jiffies); raw_spin_lock_irq_rcu_node(rnp); @@ -2094,7 +2094,7 @@ static int __noreturn rcu_gp_kthread(void *arg) READ_ONCE(rsp->gpnum), TPS("reqwait")); rsp->gp_state = RCU_GP_WAIT_GPS; - wait_event_interruptible(rsp->gp_wq, + swait_event_interruptible(rsp->gp_wq, READ_ONCE(rsp->gp_flags) & RCU_GP_FLAG_INIT); rsp->gp_state = RCU_GP_DONE_GPS; @@ -2124,7 +2124,7 @@ static int __noreturn rcu_gp_kthread(void *arg) READ_ONCE(rsp->gpnum), TPS("fqswait")); rsp->gp_state = RCU_GP_WAIT_FQS; - ret = wait_event_interruptible_timeout(rsp->gp_wq, + ret = swait_event_interruptible_timeout(rsp->gp_wq, rcu_gp_fqs_check_wake(rsp, &gf), j); rsp->gp_state = RCU_GP_DOING_FQS; /* Locking provides needed memory barriers. */ @@ -2248,7 +2248,7 @@ static void rcu_report_qs_rsp(struct rcu_state *rsp, unsigned long flags) WARN_ON_ONCE(!rcu_gp_in_progress(rsp)); WRITE_ONCE(rsp->gp_flags, READ_ONCE(rsp->gp_flags) | RCU_GP_FLAG_FQS); raw_spin_unlock_irqrestore(&rcu_get_root(rsp)->lock, flags); - rcu_gp_kthread_wake(rsp); + swake_up(&rsp->gp_wq); /* Memory barrier implied by swake_up() path. 
*/ } /* @@ -2902,7 +2902,7 @@ static void force_quiescent_state(struct rcu_state *rsp) } WRITE_ONCE(rsp->gp_flags, READ_ONCE(rsp->gp_flags) | RCU_GP_FLAG_FQS); raw_spin_unlock_irqrestore(&rnp_old->lock, flags); - rcu_gp_kthread_wake(rsp); + swake_up(&rsp->gp_wq); /* Memory barrier implied by swake_up() path. */ } /* @@ -3531,7 +3531,7 @@ static void __rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp, raw_spin_unlock_irqrestore(&rnp->lock, flags); if (wake) { smp_mb(); /* EGP done before wake_up(). */ - wake_up(&rsp->expedited_wq); + swake_up(&rsp->expedited_wq); } break; } @@ -3782,7 +3782,7 @@ static void synchronize_sched_expedited_wait(struct rcu_state *rsp) jiffies_start = jiffies; for (;;) { - ret = wait_event_interruptible_timeout( + ret = swait_event_timeout( rsp->expedited_wq, sync_rcu_preempt_exp_done(rnp_root), jiffies_stall); @@ -3790,7 +3790,7 @@ static void synchronize_sched_expedited_wait(struct rcu_state *rsp) return; if (ret < 0) { /* Hit a signal, disable CPU stall warnings. */ - wait_event(rsp->expedited_wq, + swait_event(rsp->expedited_wq, sync_rcu_preempt_exp_done(rnp_root)); return; } @@ -4484,8 +4484,8 @@ static void __init rcu_init_one(struct rcu_state *rsp) } } - init_waitqueue_head(&rsp->gp_wq); - init_waitqueue_head(&rsp->expedited_wq); + init_swait_queue_head(&rsp->gp_wq); + init_swait_queue_head(&rsp->expedited_wq); rnp = rsp->level[rcu_num_lvls - 1]; for_each_possible_cpu(i) { while (i > rnp->grphi) diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index 10dedfb..bbd235d 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h @@ -27,6 +27,7 @@ #include #include #include +#include #include /* @@ -243,7 +244,7 @@ struct rcu_node { /* Refused to boost: not sure why, though. */ /* This can happen due to race conditions. */ #ifdef CONFIG_RCU_NOCB_CPU - wait_queue_head_t nocb_gp_wq[2]; + struct swait_queue_head nocb_gp_wq[2]; /* Place for rcu_nocb_kthread() to wait GP. 
*/ #endif /* #ifdef CONFIG_RCU_NOCB_CPU */ int need_future_gp[2]; @@ -399,7 +400,7 @@ struct rcu_data { atomic_long_t nocb_q_count_lazy; /* invocation (all stages). */ struct rcu_head *nocb_follower_head; /* CBs ready to invoke. */ struct rcu_head **nocb_follower_tail; - wait_queue_head_t nocb_wq; /* For nocb kthreads to sleep on. */ + struct swait_queue_head nocb_wq; /* For nocb kthreads to sleep on. */ struct task_struct *nocb_kthread; int nocb_defer_wakeup; /* Defer wakeup of nocb_kthread. */ @@ -478,7 +479,7 @@ struct rcu_state { unsigned long gpnum; /* Current gp number. */ unsigned long completed; /* # of last completed gp. */ struct task_struct *gp_kthread; /* Task for grace periods. */ - wait_queue_head_t gp_wq; /* Where GP task waits. */ + struct swait_queue_head gp_wq; /* Where GP task waits. */ short gp_flags; /* Commands for GP task. */ short gp_state; /* GP kthread sleep state. */ @@ -506,7 +507,7 @@ struct rcu_state { unsigned long expedited_sequence; /* Take a ticket. */ atomic_long_t expedited_normal; /* # fallbacks to normal. */ atomic_t expedited_need_qs; /* # CPUs left to check in. */ - wait_queue_head_t expedited_wq; /* Wait for check-ins. */ + struct swait_queue_head expedited_wq; /* Wait for check-ins. */ int ncpus_snap; /* # CPUs seen last time. 
*/ unsigned long jiffies_force_qs; /* Time at which to invoke */ @@ -621,8 +622,8 @@ static void zero_cpu_stall_ticks(struct rcu_data *rdp); static void increment_cpu_stall_ticks(void); static bool rcu_nocb_cpu_needs_barrier(struct rcu_state *rsp, int cpu); static void rcu_nocb_gp_set(struct rcu_node *rnp, int nrq); -static wait_queue_head_t *rcu_nocb_gp_get(struct rcu_node *rnp); -static void rcu_nocb_gp_cleanup(wait_queue_head_t *sq); +static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp); +static void rcu_nocb_gp_cleanup(struct swait_queue_head *sq); static void rcu_init_one_nocb(struct rcu_node *rnp); static bool __call_rcu_nocb(struct rcu_data *rdp, struct rcu_head *rhp, bool lazy, unsigned long flags); diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h index 631aff6..080bd20 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -1811,9 +1811,9 @@ early_param("rcu_nocb_poll", parse_rcu_nocb_poll); * Wake up any no-CBs CPUs' kthreads that were waiting on the just-ended * grace period. 
*/ -static void rcu_nocb_gp_cleanup(wait_queue_head_t *sq) +static void rcu_nocb_gp_cleanup(struct swait_queue_head *sq) { - wake_up_all(sq); + swake_up_all(sq); } /* @@ -1829,15 +1829,15 @@ static void rcu_nocb_gp_set(struct rcu_node *rnp, int nrq) rnp->need_future_gp[(rnp->completed + 1) & 0x1] += nrq; } -static wait_queue_head_t *rcu_nocb_gp_get(struct rcu_node *rnp) +static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp) { return &rnp->nocb_gp_wq[rnp->completed & 0x1]; } static void rcu_init_one_nocb(struct rcu_node *rnp) { - init_waitqueue_head(&rnp->nocb_gp_wq[0]); - init_waitqueue_head(&rnp->nocb_gp_wq[1]); + init_swait_queue_head(&rnp->nocb_gp_wq[0]); + init_swait_queue_head(&rnp->nocb_gp_wq[1]); } #ifndef CONFIG_RCU_NOCB_CPU_ALL @@ -1862,7 +1862,7 @@ static void wake_nocb_leader(struct rcu_data *rdp, bool force) if (READ_ONCE(rdp_leader->nocb_leader_sleep) || force) { /* Prior smp_mb__after_atomic() orders against prior enqueue. */ WRITE_ONCE(rdp_leader->nocb_leader_sleep, false); - wake_up(&rdp_leader->nocb_wq); + swake_up(&rdp_leader->nocb_wq); } } @@ -2074,7 +2074,7 @@ static void rcu_nocb_wait_gp(struct rcu_data *rdp) */ trace_rcu_future_gp(rnp, rdp, c, TPS("StartWait")); for (;;) { - wait_event_interruptible( + swait_event_interruptible( rnp->nocb_gp_wq[c & 0x1], (d = ULONG_CMP_GE(READ_ONCE(rnp->completed), c))); if (likely(d)) @@ -2102,7 +2102,7 @@ wait_again: /* Wait for callbacks to appear. */ if (!rcu_nocb_poll) { trace_rcu_nocb_wake(my_rdp->rsp->name, my_rdp->cpu, "Sleep"); - wait_event_interruptible(my_rdp->nocb_wq, + swait_event_interruptible(my_rdp->nocb_wq, !READ_ONCE(my_rdp->nocb_leader_sleep)); /* Memory barrier handled by smp_mb() calls below and repoll. */ } else if (firsttime) { @@ -2177,7 +2177,7 @@ wait_again: * List was empty, wake up the follower. * Memory barriers supplied by atomic_long_add(). 
*/ - wake_up(&rdp->nocb_wq); + swake_up(&rdp->nocb_wq); } } @@ -2198,7 +2198,7 @@ static void nocb_follower_wait(struct rcu_data *rdp) if (!rcu_nocb_poll) { trace_rcu_nocb_wake(rdp->rsp->name, rdp->cpu, "FollowerSleep"); - wait_event_interruptible(rdp->nocb_wq, + swait_event_interruptible(rdp->nocb_wq, READ_ONCE(rdp->nocb_follower_head)); } else if (firsttime) { /* Don't drown trace log with "Poll"! */ @@ -2357,7 +2357,7 @@ void __init rcu_init_nohz(void) static void __init rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp) { rdp->nocb_tail = &rdp->nocb_head; - init_waitqueue_head(&rdp->nocb_wq); + init_swait_queue_head(&rdp->nocb_wq); rdp->nocb_follower_tail = &rdp->nocb_follower_head; } @@ -2507,7 +2507,7 @@ static bool rcu_nocb_cpu_needs_barrier(struct rcu_state *rsp, int cpu) return false; } -static void rcu_nocb_gp_cleanup(wait_queue_head_t *sq) +static void rcu_nocb_gp_cleanup(struct swait_queue_head *sq) { } @@ -2515,7 +2515,7 @@ static void rcu_nocb_gp_set(struct rcu_node *rnp, int nrq) { } -static wait_queue_head_t *rcu_nocb_gp_get(struct rcu_node *rnp) +static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp) { return NULL; } -- 2.5.0
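For readers who have not used the swait API, the bookkeeping the patch switches to can be sketched as a userspace C model, roughly as below. This is an illustrative model only, not the kernel's implementation (which lives in kernel/sched/swait.c): the model_* names are invented here, the list lock is elided, and a `woken` flag stands in for wake_up_process().

```c
/*
 * Userspace model of simple wait queue bookkeeping (illustrative only).
 * Each waiter links itself onto a singly linked list; a wakeup pops one
 * waiter and marks it woken, and swake_up_all() drains the list.  In the
 * kernel the list is protected by a raw spinlock, which is why swake_up()
 * is safe from atomic context on PREEMPT_RT, unlike __wake_up().
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct model_swaiter {
	struct model_swaiter *next;
	bool woken;			/* stands in for wake_up_process() */
};

struct model_swait_head {
	struct model_swaiter *first;	/* lock elided in this model */
};

/* Waiter side: enqueue before sleeping (kernel: prepare_to_swait()). */
static void model_prepare_to_swait(struct model_swait_head *q,
				   struct model_swaiter *w)
{
	w->woken = false;
	w->next = q->first;	/* added at the head; ordering is illustrative */
	q->first = w;
}

/* Waker side: wake exactly one waiter; bounded work under the lock. */
static bool model_swake_up(struct model_swait_head *q)
{
	struct model_swaiter *w = q->first;

	if (!w)
		return false;		/* no waiters: nothing to do */
	q->first = w->next;
	w->woken = true;		/* kernel: wake_up_process(w->task) */
	return true;
}

/* Wake every queued waiter (kernel: swake_up_all()). */
static int model_swake_up_all(struct model_swait_head *q)
{
	int n = 0;

	while (model_swake_up(q))
		n++;
	return n;
}
```

The property the conversion relies on is exactly what the model shows: a wakeup is a short list operation, so it can run under a non-sleeping lock, avoiding the "sleeping function called from invalid context" splat quoted in the changelog.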