Message-Id: <20170811045453.GB3730@linux.vnet.ibm.com>
Date: Thu, 10 Aug 2017 21:54:53 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Stephen Rothwell <sfr@...b.auug.org.au>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...e.hu>,
"H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Linux-Next Mailing List <linux-next@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: linux-next: build failure after merge of the rcu tree
On Fri, Aug 11, 2017 at 02:43:52PM +1000, Stephen Rothwell wrote:
> Hi Paul,
>
> After merging the rcu tree, today's linux-next build (arm
> multi_v7_defconfig) failed like this:
>
> kernel/sched/core.c: In function 'do_task_dead':
> kernel/sched/core.c:3385:2: error: implicit declaration of function 'smp_mb__before_spinlock' [-Werror=implicit-function-declaration]
> smp_mb__before_spinlock();
> ^
> cc1: some warnings being treated as errors
>
> Caused by commit
>
> 4a6fc6107e90 ("sched: Replace spin_unlock_wait() with lock/unlock pair")
>
> Interacting with commit
>
> a9668cd6ee28 ("locking: Remove smp_mb__before_spinlock()")
>
> from the tip tree.
>
> I applied this patch for now, but I assume something better is required.
Looks like I need to rebase my patch on top of a9668cd6ee28, and
then put an smp_mb__after_spinlock() between the lock and the unlock.
Peter, any objections to that approach? Other suggestions?
Thanx, Paul
> From: Stephen Rothwell <sfr@...b.auug.org.au>
> Date: Fri, 11 Aug 2017 14:32:10 +1000
> Subject: [PATCH] sched: temporary hack for locking: Remove smp_mb__before_spinlock()
>
> Signed-off-by: Stephen Rothwell <sfr@...b.auug.org.au>
> ---
> kernel/sched/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 2bd00feaea15..a4f4ba2e3be6 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3382,7 +3382,7 @@ void __noreturn do_task_dead(void)
> * To avoid it, we have to wait for releasing tsk->pi_lock which
> * is held by try_to_wake_up()
> */
> - smp_mb__before_spinlock();
> + smp_mb();
> raw_spin_lock_irq(&current->pi_lock);
> raw_spin_unlock_irq(&current->pi_lock);
>
> --
> Cheers,
> Stephen Rothwell
>