Message-ID: <CE2C58F938F4C44EA74FFF880BAA7E5E1C68AD12@BBYEXM01.pmc-sierra.internal>
Date: Tue, 24 Dec 2013 09:13:51 +0000
From: Suresh Thiagarajan <Suresh.Thiagarajan@...s.com>
To: Ingo Molnar <mingo@...nel.org>, Oleg Nesterov <oleg@...hat.com>
CC: Jason Seba <jason.seba42@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Tomas Henzl <thenzl@...hat.com>, Jack Wang <xjtuwjp@...il.com>,
Viswas G <Viswas.G@...s.com>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"JBottomley@...allels.com" <JBottomley@...allels.com>,
"Vasanthalakshmi Tharmarajan" <Vasanthalakshmi.Tharmarajan@...s.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: spinlock_irqsave() && flags (Was: pm80xx: Spinlock fix)
On Tue, Dec 24, 2013 at 1:59 PM, Ingo Molnar <mingo@...nel.org> wrote:
>
> * Oleg Nesterov <oleg@...hat.com> wrote:
>
>> On 12/23, Ingo Molnar wrote:
>> >
>> > * Oleg Nesterov <oleg@...hat.com> wrote:
>> >
>> > > Initially I thought that this is obviously wrong, irqsave/irqrestore
>> > > assume that "flags" is owned by the caller, not by the lock. And
>> > > iirc this was certainly wrong in the past.
>> > >
>> > > But when I look at spinlock.c it seems that this code can actually
>> > > work. _irqsave() writes to FLAGS after it takes the lock, and
>> > > _irqrestore() has a copy of FLAGS before it drops this lock.
>> >
>> > I don't think that's true: if it was then the lock would not be
>> > irqsave, a hardware-irq could come in after the lock has been taken
>> > and before flags are saved+disabled.
>>
>> I do agree that this pattern is not safe, that is why I decided to ask.
>>
>> But, unless I missed something, with the current implementation
>> spin_lock_irqsave(lock, global_flags) does:
>>
>> unsigned long local_flags;
>>
>> local_irq_save(local_flags);
>> spin_lock(lock);
>>
>> global_flags = local_flags;
>>
>> so the access to global_flags is actually serialized by lock.
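And, if I am reading kernel/locking/spinlock.c right, the unlock side is the
mirror image, i.e. spin_unlock_irqrestore(lock, global_flags) effectively does
(a sketch, not the literal source):

	unsigned long local_flags = global_flags;  /* copied while the lock is still held */

	spin_unlock(lock);
	local_irq_restore(local_flags);

so the read of global_flags also happens under the lock.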
Below is a small piece of pseudo-code showing how the flags value can be protected/serialized for global access:
struct temp {
	...
	spinlock_t lock;
	unsigned long lock_flags;
};

void my_lock(struct temp *t)
{
	unsigned long flags;  /* thread-private variable, as suggested */

	spin_lock_irqsave(&t->lock, flags);
	t->lock_flags = flags;  /* updated inside the critical section, so access to it is serialized by the lock */
}

void my_unlock(struct temp *t)
{
	unsigned long flags = t->lock_flags;

	t->lock_flags = 0;  /* cleared before leaving the critical section */
	spin_unlock_irqrestore(&t->lock, flags);
}
For unlocking I could even call spin_unlock_irqrestore(&t->lock, t->lock_flags) directly instead of my_unlock(), since t->lock_flags is only ever written in my_lock(), so there is no real need to clear it explicitly. Please let me know if I am missing anything here in serializing the global lock flags.
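As a concrete usage example (my_dev and update_shared_state() are made-up
names, purely for illustration):

	static struct temp my_dev;  /* hypothetical driver-private instance */

	void update_shared_state(void)
	{
		my_lock(&my_dev);
		/* ... modify state guarded by my_dev.lock ... */
		my_unlock(&my_dev);
	}

Nothing outside the critical section ever touches my_dev.lock_flags, which is
the whole point of the scheme.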
Thanks,
Suresh
>
> You are right, today that's true technically because IIRC due to Sparc
> quirks we happen to return 'flags' as a return value - still it's very
> ugly and it could break anytime if we decide to do more aggressive
> optimizations and actually directly save into 'flags'.
>
> Note that even today there's a narrow exception: on UP we happen to
> build it the other way around, so that we do:
>
> local_irq_save(global_flags);
> __acquire(lock);
>
> This does not matter for any real code because on UP there is no
> physical lock and __acquire() is empty code-wise, but any compiler
> driven locking analysis tool using __attribute__ __context__(), if
> built on UP, would see the unsafe locking pattern.
>
> Thanks,
>
> Ingo
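(Just to confirm my understanding of the return-value pattern Ingo mentions:
include/linux/spinlock.h currently has, roughly,

	#define raw_spin_lock_irqsave(lock, flags)		\
		do {						\
			typecheck(unsigned long, flags);	\
			flags = _raw_spin_lock_irqsave(lock);	\
		} while (0)

so the caller's flags variable is assigned only after _raw_spin_lock_irqsave()
has disabled interrupts and taken the lock. If that assignment ever moved into
the locking primitive itself, the pattern above would break, as Ingo notes.)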