Message-ID: <CACVXFVPk9Jp1vaf12xSc4cpypbYd_3bcSVkKtarSpoiPNK_MNg@mail.gmail.com>
Date: Sat, 7 Mar 2015 10:10:46 +0800
From: Ming Lei <ming.lei@...onical.com>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: Jason Low <jason.low2@...com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>,
Sasha Levin <sasha.levin@...cle.com>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Dave Jones <davej@...emonkey.org.uk>
Subject: Re: softlockups in multi_cpu_stop
On Sat, Mar 7, 2015 at 10:07 AM, Davidlohr Bueso <dave@...olabs.net> wrote:
> On Sat, 2015-03-07 at 09:55 +0800, Ming Lei wrote:
>> On Fri, 06 Mar 2015 14:15:37 -0800
>> Davidlohr Bueso <dave@...olabs.net> wrote:
>>
>> > On Fri, 2015-03-06 at 13:12 -0800, Jason Low wrote:
>> > > In owner_running() there are 2 conditions that would make it return
>> > > false: if the owner changed or if the owner is not running. However,
>> > > that patch continues spinning if there is a "new owner" but it does not
>> > > take into account that we may want to stop spinning if the owner is not
>> > > running (due to getting rescheduled).
>> >
>> > So your rationale is that we're missing this need_resched:
>> >
>> > while (owner_running(sem, owner)) {
>> > /* abort spinning when need_resched */
>> > if (need_resched()) {
>> > rcu_read_unlock();
>> > return false;
>> > }
>> > }
>> >
>> > Because the owner_running() would return false, right? Yeah that makes
>> > sense, as missing a resched is a bug, as opposed to our heuristics being
>> > so painfully off.
>> >
>> > Sasha, Ming (Cc'ed), does this address the issues you guys are seeing?
>>
>> For the xfstest lockup, what matters is that the owner isn't running, since
>> the following simple change does fix the issue:
>
> I much prefer Jason's approach, which should also take care of the
> issue, as it includes the !owner->on_cpu stop condition to stop
> spinning.
But the check on owner->on_cpu should be moved outside the loop, because a
new owner can be scheduled out too, right?
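
Something like the following is what I mean -- just a sketch of how the
pieces would fit together (loop body and helper names taken from the current
code, not a tested patch):

	bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
	{
		long count;

		rcu_read_lock();
		while (owner_running(sem, owner)) {
			/* abort spinning when need_resched */
			if (need_resched()) {
				rcu_read_unlock();
				return false;
			}
			cpu_relax_lowlatency();
		}
		rcu_read_unlock();

		/*
		 * Re-check outside the loop: even a new owner may already
		 * have been scheduled out, so only keep spinning if whoever
		 * owns the lock now is still on a CPU.
		 */
		owner = READ_ONCE(sem->owner);
		if (owner && owner->on_cpu)
			return true;

		/*
		 * When the owner is not set, the lock could be free or
		 * held by readers; check the count to verify the state.
		 */
		count = READ_ONCE(sem->count);
		return count == 0 || count == RWSEM_WAITING_BIAS;
	}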
>>
>> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
>> index 06e2214..5e08705 100644
>> --- a/kernel/locking/rwsem-xadd.c
>> +++ b/kernel/locking/rwsem-xadd.c
>> @@ -358,8 +358,9 @@ bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
>> }
>> rcu_read_unlock();
>>
>> - if (READ_ONCE(sem->owner))
>> - return true; /* new owner, continue spinning */
>> + owner = READ_ONCE(sem->owner);
>> + if (owner && owner->on_cpu)
>> + return true;
>>
>> /*
>> * When the owner is not set, the lock could be free or
>>
>>
>> Thanks,
>> Ming Lei
>
>